xen: fix xen_qlock_wait()
commit d3132b3 upstream.

Commit a856531 ("xen: make xen_qlock_wait() nestable") introduced a
regression for Xen guests running fully virtualized (HVM or PVH mode):
unlike for PV guests, the Xen hypervisor does not return from the poll
hypercall when an interrupt arrives while guest interrupts are disabled.

So instead of disabling interrupts in xen_qlock_wait(), use a per-CPU
nesting counter to avoid calling xen_clear_irq_pending() when
xen_qlock_wait() is nested.

Fixes: a856531 ("xen: make xen_qlock_wait() nestable")
Cc: [email protected]
Reported-by: Sander Eikelenboom <[email protected]>
Signed-off-by: Juergen Gross <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]>
Tested-by: Sander Eikelenboom <[email protected]>
Signed-off-by: Juergen Gross <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
jgross1 authored and gregkh committed Nov 13, 2018
1 parent 8305d98 commit 034680f
Showing 1 changed file with 8 additions and 6 deletions.
arch/x86/xen/spinlock.c (8 additions, 6 deletions)

--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -9,6 +9,7 @@
 #include <linux/log2.h>
 #include <linux/gfp.h>
 #include <linux/slab.h>
+#include <linux/atomic.h>
 
 #include <asm/paravirt.h>
 
@@ -20,6 +21,7 @@
 
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
 static DEFINE_PER_CPU(char *, irq_name);
+static DEFINE_PER_CPU(atomic_t, xen_qlock_wait_nest);
 static bool xen_pvspin = true;
 
 #include <asm/qspinlock.h>
@@ -40,25 +42,25 @@ static void xen_qlock_kick(int cpu)
  */
 static void xen_qlock_wait(u8 *byte, u8 val)
 {
-	unsigned long flags;
 	int irq = __this_cpu_read(lock_kicker_irq);
+	atomic_t *nest_cnt = this_cpu_ptr(&xen_qlock_wait_nest);
 
 	/* If kicker interrupts not initialized yet, just spin */
 	if (irq == -1 || in_nmi())
 		return;
 
-	/* Guard against reentry. */
-	local_irq_save(flags);
+	/* Detect reentry. */
+	atomic_inc(nest_cnt);
 
-	/* If irq pending already clear it. */
-	if (xen_test_irq_pending(irq)) {
+	/* If irq pending already and no nested call clear it. */
+	if (atomic_read(nest_cnt) == 1 && xen_test_irq_pending(irq)) {
 		xen_clear_irq_pending(irq);
 	} else if (READ_ONCE(*byte) == val) {
 		/* Block until irq becomes pending (or a spurious wakeup) */
 		xen_poll_irq(irq);
 	}
 
-	local_irq_restore(flags);
+	atomic_dec(nest_cnt);
 }
 
 static irqreturn_t dummy_handler(int irq, void *dev_id)
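The patch replaces the local_irq_save()/local_irq_restore() pair with a per-CPU
nesting counter: only the outermost call may clear the pending kicker IRQ, so a
reentrant call (for example from an interrupt taken inside xen_poll_irq()) cannot
consume the event the outer waiter needs. Below is a minimal userspace sketch of
that pattern; it is not the kernel code, and all names in it (nest_cnt,
event_pending, wait_like) are invented for illustration.

/* Minimal sketch of the nesting-counter reentrancy guard (illustrative only). */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int nest_cnt;	/* a per-CPU variable in the real patch */
static bool event_pending;	/* stands in for xen_test_irq_pending() */

static void wait_like(int depth)
{
	atomic_fetch_add(&nest_cnt, 1);		/* detect reentry */

	if (atomic_load(&nest_cnt) == 1 && event_pending) {
		/* Only the outermost call may consume the pending event. */
		event_pending = false;
		printf("depth %d: cleared pending event\n", depth);
	} else {
		/* Here the real code would call xen_poll_irq() and block
		 * until the kicker IRQ becomes pending. */
		printf("depth %d: would poll for the event\n", depth);
		if (depth == 0) {
			/* Model an interrupt arriving while polling: the event
			 * becomes pending and the handler reenters this path. */
			event_pending = true;
			wait_like(depth + 1);
		}
	}

	atomic_fetch_sub(&nest_cnt, 1);		/* like atomic_dec(nest_cnt) */
}

int main(void)
{
	wait_like(0);	/* nothing pending yet, so the outer call "polls" */
	printf("event left pending for the outer waiter: %s\n",
	       event_pending ? "yes" : "no");
	return 0;
}

Without the nest_cnt == 1 check, a nested call could consume the event meant for
the outer waiter, which is the reentrancy problem the earlier interrupt-disabling
guard addressed; the counter achieves the same effect without keeping interrupts
off across the poll hypercall.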
