Message ID | 20170921110625.9500-1-alex.bennee@linaro.org
State | Superseded
Series | accel/tcg/cputlb: avoid recursive BQL (fixes #1706296)
On 21 September 2017 at 12:06, Alex Bennée <alex.bennee@linaro.org> wrote:
> The mmio path (see exec.c:prepare_mmio_access) already protects itself
> against recursive locking and it makes sense to do the same for
> io_readx/writex. Otherwise any helper running in the BQL context will
> assert when it attempts to write to device memory as in the case of
> the bug report.
>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> CC: Richard Jones <rjones@redhat.com>
> CC: Paolo Bonzini <bonzini@gnu.org>

Seems worth:
Cc: qemu-stable@nongnu.org

> ---
>  accel/tcg/cputlb.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index e72415a882..bcbcc4db6c 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -765,7 +765,7 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>
>      cpu->mem_io_vaddr = addr;
>
> -    if (mr->global_locking) {
> +    if (mr->global_locking && !qemu_mutex_iothread_locked()) {
>          qemu_mutex_lock_iothread();
>          locked = true;
>      }
> @@ -800,7 +800,7 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>      cpu->mem_io_vaddr = addr;
>      cpu->mem_io_pc = retaddr;
>
> -    if (mr->global_locking) {
> +    if (mr->global_locking && !qemu_mutex_iothread_locked()) {
>          qemu_mutex_lock_iothread();
>          locked = true;
>      }
> --
> 2.14.1

Seems to fix the similar problem that I ran into with my work-in-progress
v8M blxns/secure-return code.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM
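The failure mode in question is a thread that already holds the BQL trying to take it a second time: the iothread lock is not recursive, so the second acquisition asserts or deadlocks. Here is a minimal standalone illustration of that situation, using an error-checking pthread mutex as a stand-in for QEMU's iothread lock; the names and the program itself are illustrative only, not QEMU code.

#include <pthread.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    pthread_mutex_t bql;            /* stand-in for the iothread mutex */
    pthread_mutexattr_t attr;

    /* ERRORCHECK makes the relock return EDEADLK instead of hanging,
     * which is the analogue of the assertion seen in the bug report. */
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&bql, &attr);

    pthread_mutex_lock(&bql);            /* "helper" runs with the lock held */
    int err = pthread_mutex_lock(&bql);  /* "device access" locks again */
    printf("second lock attempt: %s\n", err ? strerror(err) : "ok");

    pthread_mutex_unlock(&bql);
    pthread_mutex_destroy(&bql);
    pthread_mutexattr_destroy(&attr);
    return 0;
}

Built with -pthread, the second lock attempt reports the deadlock rather than silently succeeding, which is why io_readx/writex must check whether the lock is already held before taking it.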
On 09/21/2017 04:06 AM, Alex Bennée wrote:
> The mmio path (see exec.c:prepare_mmio_access) already protects itself
> against recursive locking and it makes sense to do the same for
> io_readx/writex. Otherwise any helper running in the BQL context will
> assert when it attempts to write to device memory as in the case of
> the bug report.
>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> CC: Richard Jones <rjones@redhat.com>
> CC: Paolo Bonzini <bonzini@gnu.org>
> ---
>  accel/tcg/cputlb.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

I guess I'll queue this through tcg-next?


r~
Quoting Alex Bennée (2017-09-21 06:06:25)
> The mmio path (see exec.c:prepare_mmio_access) already protects itself
> against recursive locking and it makes sense to do the same for
> io_readx/writex. Otherwise any helper running in the BQL context will
> assert when it attempts to write to device memory as in the case of
> the bug report.
>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> CC: Richard Jones <rjones@redhat.com>
> CC: Paolo Bonzini <bonzini@gnu.org>

FYI: this patch has been tagged for stable 2.10.1, but is not yet upstream.
Patch freeze for 2.10.1 is September 27th.

> ---
>  accel/tcg/cputlb.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index e72415a882..bcbcc4db6c 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -765,7 +765,7 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>
>      cpu->mem_io_vaddr = addr;
>
> -    if (mr->global_locking) {
> +    if (mr->global_locking && !qemu_mutex_iothread_locked()) {
>          qemu_mutex_lock_iothread();
>          locked = true;
>      }
> @@ -800,7 +800,7 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>      cpu->mem_io_vaddr = addr;
>      cpu->mem_io_pc = retaddr;
>
> -    if (mr->global_locking) {
> +    if (mr->global_locking && !qemu_mutex_iothread_locked()) {
>          qemu_mutex_lock_iothread();
>          locked = true;
>      }
> --
> 2.14.1
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index e72415a882..bcbcc4db6c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -765,7 +765,7 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
 
     cpu->mem_io_vaddr = addr;
 
-    if (mr->global_locking) {
+    if (mr->global_locking && !qemu_mutex_iothread_locked()) {
         qemu_mutex_lock_iothread();
         locked = true;
     }
@@ -800,7 +800,7 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     cpu->mem_io_vaddr = addr;
     cpu->mem_io_pc = retaddr;
 
-    if (mr->global_locking) {
+    if (mr->global_locking && !qemu_mutex_iothread_locked()) {
         qemu_mutex_lock_iothread();
         locked = true;
     }
The mmio path (see exec.c:prepare_mmio_access) already protects itself
against recursive locking and it makes sense to do the same for
io_readx/writex. Otherwise any helper running in the BQL context will
assert when it attempts to write to device memory as in the case of
the bug report.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
CC: Richard Jones <rjones@redhat.com>
CC: Paolo Bonzini <bonzini@gnu.org>
---
 accel/tcg/cputlb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--
2.14.1
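The pattern the patch applies is the same one the commit message points to in prepare_mmio_access: take the BQL only when this thread does not already hold it, and release it on exit only if it was taken here. Below is a self-contained sketch of that idea, assuming a plain pthread mutex and a thread-local flag in place of qemu_mutex_lock_iothread()/qemu_mutex_iothread_locked(), with a hypothetical mmio_write() standing in for io_writex(); it is not QEMU code.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t bql = PTHREAD_MUTEX_INITIALIZER;
static __thread bool bql_held;  /* plays the role of qemu_mutex_iothread_locked() */

static void bql_lock(void)   { pthread_mutex_lock(&bql); bql_held = true; }
static void bql_unlock(void) { bql_held = false; pthread_mutex_unlock(&bql); }

/* Shape of the fixed io_writex() path: lock only if not already held. */
static void mmio_write(unsigned addr, unsigned val, bool global_locking)
{
    bool locked = false;

    if (global_locking && !bql_held) {
        bql_lock();
        locked = true;
    }

    printf("write [%#x] <- %u (%s)\n", addr, val,
           locked ? "lock taken here" : "caller already held lock");

    if (locked) {               /* release only what we took ourselves */
        bql_unlock();
    }
}

int main(void)
{
    /* Normal TCG path: lock not held, mmio_write() takes and drops it. */
    mmio_write(0x100, 1, true);

    /* Helper already running under the BQL: with the extra check the
     * write proceeds; without it, the relock would assert or deadlock. */
    bql_lock();
    mmio_write(0x104, 2, true);
    bql_unlock();

    return 0;
}

The key design point is that the locked flag records per-call ownership, so a caller that already holds the lock keeps it across the access and nothing is unlocked that was not locked in the same frame.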