Message ID: 20200326193156.4322-1-robert.foley@linaro.org
Series: per-CPU locks
21:37 Thu, 26.03.2020, Robert Foley <robert.foley@linaro.org> wrote:
>
> V7: https://lists.gnu.org/archive/html/qemu-devel/2019-03/msg00786.html
>
> This is a continuation of the series created by Emilio Cota.
> We are picking up this patch set with the goal to apply
> any fixes or updates needed to get this accepted.
>

Thanks for this work, Robert.

However, I just hope you don't intend to request integrating the series in
5.0. The right timing for such a wide-influencing patch is at the beginning
of a dev cycle, not really at the end of the (5.0) cycle, IMHO.

Yours,
Aleksandar

> Quoting an earlier patch in the series:
> "For context, the goal of this series is to substitute the BQL for the
> per-CPU locks in many places, notably the execution loop in cpus.c.
> This leads to better scalability for MTTCG, since CPUs don't have
> to acquire a contended global lock (the BQL) every time they
> stop executing code.
> See the last commit for some performance numbers."
>
> Listed below are the changes for this version of the patch,
> aside from the merge related changes.
>
> Changes for V8:
> - Fixed issue where in rr mode we could destroy the BQL twice.
>   Added new function cpu_mutex_destroy().
> - Removed g_assert(qemu_mutex_iothread_locked())
>   from qemu_tcg_rr_all_cpu_threads_idle(). There is an existing
>   case where we call qemu_tcg_rr_all_cpu_threads_idle() without
>   the BQL held, so we cannot assert on the lock here.
> - Found/fixed bug that had been hit in testing previously during
>   the last consideration of this patch.
>   We reproduced the issue hit in the qtest: bios-tables-test.
>   The issue was introduced by dropping the BQL, and found us
>   (very rarely) missing the condition variable wakeup in
>   qemu_tcg_rr_cpu_thread_fn().
> - ppc: convert to cpu_halted
>   - Converted new code for cpu_halted and cpu_halted_set.
> - hw/semihosting: convert to cpu_halted_set
>   - Added this patch as this code was new and needed converting.
> - ppc/translate_init.inc.c
>   - Translated some new code here to use cpu_has_work_with_iothread_lock.
> - ppc/spapr_hcall.c - translated new code to cpu_halted
> - i386/hax-all.c - converted new code to cpu_interrupt_request and cpu_halted
> - mips/kvm.c - converted new code to cpu_halted
> - Some changes were related to files that moved: cpu.c and cpu.h
>   moved to hw/core/, and some changes needed to be put
>   there manually during the merge.
>
> Emilio G. Cota (69):
>   cpu: convert queued work to a QSIMPLEQ
>   cpu: rename cpu->work_mutex to cpu->lock
>   cpu: introduce cpu_mutex_lock/unlock
>   cpu: make qemu_work_cond per-cpu
>   cpu: move run_on_cpu to cpus-common
>   cpu: introduce process_queued_cpu_work_locked
>   cpu: make per-CPU locks an alias of the BQL in TCG rr mode
>   tcg-runtime: define helper_cpu_halted_set
>   ppc: convert to helper_cpu_halted_set
>   cris: convert to helper_cpu_halted_set
>   hppa: convert to helper_cpu_halted_set
>   m68k: convert to helper_cpu_halted_set
>   alpha: convert to helper_cpu_halted_set
>   microblaze: convert to helper_cpu_halted_set
>   cpu: define cpu_halted helpers
>   tcg-runtime: convert to cpu_halted_set
>   arm: convert to cpu_halted
>   ppc: convert to cpu_halted
>   sh4: convert to cpu_halted
>   i386: convert to cpu_halted
>   lm32: convert to cpu_halted
>   m68k: convert to cpu_halted
>   mips: convert to cpu_halted
>   riscv: convert to cpu_halted
>   s390x: convert to cpu_halted
>   sparc: convert to cpu_halted
>   xtensa: convert to cpu_halted
>   gdbstub: convert to cpu_halted
>   openrisc: convert to cpu_halted
>   cpu-exec: convert to cpu_halted
>   cpu: convert to cpu_halted
>   cpu: define cpu_interrupt_request helpers
>   exec: use cpu_reset_interrupt
>   arm: convert to cpu_interrupt_request
>   i386: convert to cpu_interrupt_request
>   i386/kvm: convert to cpu_interrupt_request
>   i386/hax-all: convert to cpu_interrupt_request
>   i386/whpx-all: convert to cpu_interrupt_request
>   i386/hvf: convert to cpu_request_interrupt
>   ppc: convert to cpu_interrupt_request
>   sh4: convert to cpu_interrupt_request
>   cris: convert to cpu_interrupt_request
>   hppa: convert to cpu_interrupt_request
>   lm32: convert to cpu_interrupt_request
>   m68k: convert to cpu_interrupt_request
>   mips: convert to cpu_interrupt_request
>   nios: convert to cpu_interrupt_request
>   s390x: convert to cpu_interrupt_request
>   alpha: convert to cpu_interrupt_request
>   moxie: convert to cpu_interrupt_request
>   sparc: convert to cpu_interrupt_request
>   openrisc: convert to cpu_interrupt_request
>   unicore32: convert to cpu_interrupt_request
>   microblaze: convert to cpu_interrupt_request
>   accel/tcg: convert to cpu_interrupt_request
>   cpu: convert to interrupt_request
>   cpu: call .cpu_has_work with the CPU lock held
>   cpu: introduce cpu_has_work_with_iothread_lock
>   ppc: convert to cpu_has_work_with_iothread_lock
>   mips: convert to cpu_has_work_with_iothread_lock
>   s390x: convert to cpu_has_work_with_iothread_lock
>   riscv: convert to cpu_has_work_with_iothread_lock
>   sparc: convert to cpu_has_work_with_iothread_lock
>   xtensa: convert to cpu_has_work_with_iothread_lock
>   cpu: rename all_cpu_threads_idle to qemu_tcg_rr_all_cpu_threads_idle
>   cpu: protect CPU state with cpu->lock instead of the BQL
>   cpus-common: release BQL earlier in run_on_cpu
>   cpu: add async_run_on_cpu_no_bql
>   cputlb: queue async flush jobs without the BQL
>
> Paolo Bonzini (4):
>   ppc: use cpu_reset_interrupt
>   i386: use cpu_reset_interrupt
>   s390x: use cpu_reset_interrupt
>   openrisc: use cpu_reset_interrupt
>
> Robert Foley (1):
>   hw/semihosting: convert to cpu_halted_set
>
>  accel/tcg/cpu-exec.c | 40 ++-
>  accel/tcg/cputlb.c | 10 +-
>  accel/tcg/tcg-all.c | 12 +-
>  accel/tcg/tcg-runtime.c | 7 +
>  accel/tcg/tcg-runtime.h | 2 +
>  accel/tcg/translate-all.c | 2 +-
>  cpus-common.c | 129 +++++++---
>  cpus.c | 438 ++++++++++++++++++++++++++------
>  exec.c | 2 +-
>  gdbstub.c | 4 +-
>  hw/arm/omap1.c | 4 +-
>  hw/arm/pxa2xx_gpio.c | 2 +-
>  hw/arm/pxa2xx_pic.c | 2 +-
>  hw/core/cpu.c | 29 +--
>  hw/core/machine-qmp-cmds.c | 2 +-
>  hw/intc/s390_flic.c | 4 +-
>  hw/mips/cps.c | 2 +-
>  hw/misc/mips_itu.c | 4 +-
>  hw/openrisc/cputimer.c | 2 +-
>  hw/ppc/e500.c | 4 +-
>  hw/ppc/ppc.c | 12 +-
>  hw/ppc/ppce500_spin.c | 6 +-
>  hw/ppc/spapr_cpu_core.c | 4 +-
>  hw/ppc/spapr_hcall.c | 14 +-
>  hw/ppc/spapr_rtas.c | 8 +-
>  hw/semihosting/console.c | 4 +-
>  hw/sparc/leon3.c | 2 +-
>  hw/sparc/sun4m.c | 8 +-
>  hw/sparc64/sparc64.c | 8 +-
>  include/hw/core/cpu.h | 197 ++++++++++++--
>  stubs/Makefile.objs | 1 +
>  stubs/cpu-lock.c | 35 +++
>  target/alpha/cpu.c | 8 +-
>  target/alpha/translate.c | 6 +-
>  target/arm/arm-powerctl.c | 6 +-
>  target/arm/cpu.c | 8 +-
>  target/arm/helper.c | 16 +-
>  target/arm/machine.c | 2 +-
>  target/arm/op_helper.c | 2 +-
>  target/cris/cpu.c | 2 +-
>  target/cris/helper.c | 4 +-
>  target/cris/translate.c | 5 +-
>  target/hppa/cpu.c | 2 +-
>  target/hppa/translate.c | 3 +-
>  target/i386/cpu.c | 4 +-
>  target/i386/cpu.h | 2 +-
>  target/i386/hax-all.c | 42 +--
>  target/i386/helper.c | 8 +-
>  target/i386/hvf/hvf.c | 12 +-
>  target/i386/hvf/x86hvf.c | 37 +--
>  target/i386/kvm.c | 82 +++---
>  target/i386/misc_helper.c | 2 +-
>  target/i386/seg_helper.c | 13 +-
>  target/i386/svm_helper.c | 6 +-
>  target/i386/whpx-all.c | 57 +++--
>  target/lm32/cpu.c | 2 +-
>  target/lm32/op_helper.c | 4 +-
>  target/m68k/cpu.c | 2 +-
>  target/m68k/op_helper.c | 2 +-
>  target/m68k/translate.c | 9 +-
>  target/microblaze/cpu.c | 2 +-
>  target/microblaze/translate.c | 4 +-
>  target/mips/cp0_helper.c | 6 +-
>  target/mips/cpu.c | 11 +-
>  target/mips/kvm.c | 4 +-
>  target/mips/op_helper.c | 2 +-
>  target/mips/translate.c | 4 +-
>  target/moxie/cpu.c | 2 +-
>  target/nios2/cpu.c | 2 +-
>  target/openrisc/cpu.c | 4 +-
>  target/openrisc/sys_helper.c | 4 +-
>  target/ppc/excp_helper.c | 6 +-
>  target/ppc/helper_regs.h | 2 +-
>  target/ppc/kvm.c | 6 +-
>  target/ppc/translate.c | 6 +-
>  target/ppc/translate_init.inc.c | 41 +--
>  target/riscv/cpu.c | 5 +-
>  target/riscv/op_helper.c | 2 +-
>  target/s390x/cpu.c | 28 +-
>  target/s390x/excp_helper.c | 4 +-
>  target/s390x/kvm.c | 2 +-
>  target/s390x/sigp.c | 8 +-
>  target/sh4/cpu.c | 2 +-
>  target/sh4/helper.c | 2 +-
>  target/sh4/op_helper.c | 2 +-
>  target/sparc/cpu.c | 6 +-
>  target/sparc/helper.c | 2 +-
>  target/unicore32/cpu.c | 2 +-
>  target/unicore32/softmmu.c | 2 +-
>  target/xtensa/cpu.c | 6 +-
>  target/xtensa/exc_helper.c | 2 +-
>  target/xtensa/helper.c | 2 +-
>  92 files changed, 1067 insertions(+), 464 deletions(-)
>  create mode 100644 stubs/cpu-lock.c
>
> --
> 2.17.1
(Apologies if I missed some Cc's; I was not Cc'ed in patch 0
so I'm blindly crafting a reply.)

On Thu, Mar 26, 2020 at 15:30:43 -0400, Robert Foley wrote:
> This is a continuation of the series created by Emilio Cota.
> We are picking up this patch set with the goal to apply
> any fixes or updates needed to get this accepted.

Thanks for picking this up!

> Listed below are the changes for this version of the patch,
> aside from the merge related changes.
>
> Changes for V8:
> - Fixed issue where in rr mode we could destroy the BQL twice.

I remember doing little to no testing in record-replay mode, so
there should be more bugs hiding in there :-)

> - Found/fixed bug that had been hit in testing previously during
>   the last consideration of this patch.
>   We reproduced the issue hit in the qtest: bios-tables-test.
>   The issue was introduced by dropping the BQL, and found us
>   (very rarely) missing the condition variable wakeup in
>   qemu_tcg_rr_cpu_thread_fn().

Aah, this one:
https://patchwork.kernel.org/patch/10838149/#22516931
How did you identify the problem? Was it code inspection or using a tool
like rr? I remember this being hard to reproduce reliably.

On a related note, I've done some work to get QEMU-system to work
under thread sanitizer, since tsan now supports our longjmp-based
coroutines (hurrah!). My idea was to integrate tsan in QEMU (i.e.
bring tsan warnings to 0) before (re)trying to merge the
per-CPU lock patchset; this would minimize the potential for
regressions, which from my personal viewpoint seems like a reasonable
thing to do especially now that I have little time to work on QEMU.

If there's interest in doing the tsan work first, then I'd be
happy to send to the list as soon as this weekend the changes that
I have so far [1].

Thanks,
Emilio

[1] WIP branch: https://github.com/cota/qemu/commits/tsan
Aleksandar Markovic <aleksandar.qemu.devel@gmail.com> writes:

> 21:37 Thu, 26.03.2020, Robert Foley <robert.foley@linaro.org> wrote:
>>
>> V7: https://lists.gnu.org/archive/html/qemu-devel/2019-03/msg00786.html
>>
>> This is a continuation of the series created by Emilio Cota.
>> We are picking up this patch set with the goal to apply
>> any fixes or updates needed to get this accepted.
>>
>
> Thanks for this work, Robert.
>
> However, I just hope you don't intend to request integrating the series in
> 5.0. The right timing for such a wide-influencing patch is at the beginning
> of a dev cycle, not really at the end of the (5.0) cycle, IMHO.

It's not marked for 5.0 - I don't think all patch activity on the list
has to stop during softfreeze. I don't think there is any danger of it
getting merged, and early visibility has already generated useful
feedback and discussion.

--
Alex Bennée
On Friday, 27 March 2020, Alex Bennée <alex.bennee@linaro.org> wrote:
>
> Aleksandar Markovic <aleksandar.qemu.devel@gmail.com> writes:
>
>> 21:37 Thu, 26.03.2020, Robert Foley <robert.foley@linaro.org> wrote:
>>>
>>> V7: https://lists.gnu.org/archive/html/qemu-devel/2019-03/msg00786.html
>>>
>>> This is a continuation of the series created by Emilio Cota.
>>> We are picking up this patch set with the goal to apply
>>> any fixes or updates needed to get this accepted.
>>>
>>
>> Thanks for this work, Robert.
>>
>> However, I just hope you don't intend to request integrating the series in
>> 5.0. The right timing for such a wide-influencing patch is at the beginning
>> of a dev cycle, not really at the end of the (5.0) cycle, IMHO.
>
> It's not marked for 5.0 - I don't think all patch activity on the list
> has to stop during softfreeze. I don't think there is any danger of it
> getting merged, and early visibility has already generated useful
> feedback and discussion.

OK, nobody ever said we can examine, discuss and test the series, but I
remain thinking that this series arrives too late for considering for
5.0.

Aleksandar

> --
> Alex Bennée
11:50 Fri, 27.03.2020, Aleksandar Markovic
<aleksandar.qemu.devel@gmail.com> wrote:
>
> On Friday, 27 March 2020, Alex Bennée <alex.bennee@linaro.org> wrote:
>>
>> Aleksandar Markovic <aleksandar.qemu.devel@gmail.com> writes:
>>
>>> 21:37 Thu, 26.03.2020, Robert Foley <robert.foley@linaro.org> wrote:
>>>>
>>>> V7: https://lists.gnu.org/archive/html/qemu-devel/2019-03/msg00786.html
>>>>
>>>> This is a continuation of the series created by Emilio Cota.
>>>> We are picking up this patch set with the goal to apply
>>>> any fixes or updates needed to get this accepted.
>>>>
>>>
>>> Thanks for this work, Robert.
>>>
>>> However, I just hope you don't intend to request integrating the series in
>>> 5.0. The right timing for such a wide-influencing patch is at the beginning
>>> of a dev cycle, not really at the end of the (5.0) cycle, IMHO.
>>
>> It's not marked for 5.0 - I don't think all patch activity on the list
>> has to stop during softfreeze. I don't think there is any danger of it
>> getting merged, and early visibility has already generated useful
>> feedback and discussion.
>
> OK, nobody ever said we can

Obviously, I meant here "cannot", not "can". Everybody is allowed to do
any experimentation and evaluation of the series at any time - of
course. :)

> examine, discuss and test the series, but I remain thinking that this
> series arrives too late for considering for 5.0.
>
> Aleksandar
>
>> --
>> Alex Bennée
On 3/27/20 6:14 AM, Emilio G. Cota wrote:
> (Apologies if I missed some Cc's; I was not Cc'ed in patch 0
> so I'm blindly crafting a reply.)
>
> On Thu, Mar 26, 2020 at 15:30:43 -0400, Robert Foley wrote:
>> This is a continuation of the series created by Emilio Cota.
>> We are picking up this patch set with the goal to apply
>> any fixes or updates needed to get this accepted.
>
> Thanks for picking this up!
>
>> Listed below are the changes for this version of the patch,
>> aside from the merge related changes.
>>
>> Changes for V8:
>> - Fixed issue where in rr mode we could destroy the BQL twice.
>
> I remember doing little to no testing in record-replay mode, so
> there should be more bugs hiding in there :-)
>
>> - Found/fixed bug that had been hit in testing previously during
>>   the last consideration of this patch.
>>   We reproduced the issue hit in the qtest: bios-tables-test.
>>   The issue was introduced by dropping the BQL, and found us
>>   (very rarely) missing the condition variable wakeup in
>>   qemu_tcg_rr_cpu_thread_fn().
>
> Aah, this one:
> https://patchwork.kernel.org/patch/10838149/#22516931
> How did you identify the problem? Was it code inspection or using a tool
> like rr? I remember this being hard to reproduce reliably.
>
> On a related note, I've done some work to get QEMU-system to work
> under thread sanitizer, since tsan now supports our longjmp-based
> coroutines (hurrah!). My idea was to integrate tsan in QEMU (i.e.
> bring tsan warnings to 0) before (re)trying to merge the
> per-CPU lock patchset; this would minimize the potential for
> regressions, which from my personal viewpoint seems like a reasonable
> thing to do especially now that I have little time to work on QEMU.
>
> If there's interest in doing the tsan work first, then I'd be
> happy to send to the list as soon as this weekend the changes that
> I have so far [1].

I'm pretty sure Marc-André is interested (and also Stefan maybe), so
Cc'ing them.

> Thanks,
> Emilio
>
> [1] WIP branch: https://github.com/cota/qemu/commits/tsan
On Fri, 27 Mar 2020 at 06:24, Aleksandar Markovic
<aleksandar.qemu.devel@gmail.com> wrote:
>>>>> This is a continuation of the series created by Emilio Cota.
>>>>> We are picking up this patch set with the goal to apply
>>>>> any fixes or updates needed to get this accepted.
>>>>
>>>> Thanks for this work, Robert.
>>>>
>>>> However, I just hope you don't intend to request integrating the series in
>>>> 5.0. The right timing for such a wide-influencing patch is at the beginning
>>>> of a dev cycle, not really at the end of the (5.0) cycle, IMHO.
>>>
>>> It's not marked for 5.0 - I don't think all patch activity on the list
>>> has to stop during softfreeze. I don't think there is any danger of it
>>> getting merged, and early visibility has already generated useful
>>> feedback and discussion.
>>
>> OK, nobody ever said we can
>
> Obviously, I meant here "cannot", not "can". Everybody is allowed to do
> any experimentation and evaluation of the series at any time - of
> course. :)

We do not have any particular release in mind, but we are certainly not
targeting this release. It is a good point, though, that we probably
should have mentioned this in our cover letter, given the timing in the
current release cycle. We'll remember that in the future. :)

We're just looking to get this out there now to get some feedback and
hopefully advance the series forward to the point that it can be
included in a release in the future.

Thanks & Regards,
-Rob

>> examine, discuss and test the series, but I remain thinking that this
>> series arrives too late for considering for 5.0.
>>
>> Aleksandar
>>
>>> --
>>> Alex Bennée
Emilio G. Cota <cota@braap.org> writes:

> (Apologies if I missed some Cc's; I was not Cc'ed in patch 0
> so I'm blindly crafting a reply.)
<snip>
>> Changes for V8:
>> - Fixed issue where in rr mode we could destroy the BQL twice.
>
> I remember doing little to no testing in record-replay mode, so
> there should be more bugs hiding in there :-)

Well we have two (very simple) rr tests in check-tcg now - so there is
that ;-)

> On a related note, I've done some work to get QEMU-system to work
> under thread sanitizer, since tsan now supports our longjmp-based
> coroutines (hurrah!).

When did that go in? Will I need a new toolchain and -ltsan to make it
work?

> My idea was to integrate tsan in QEMU (i.e.
> bring tsan warnings to 0) before (re)trying to merge the
> per-CPU lock patchset; this would minimize the potential for
> regressions, which from my personal viewpoint seems like a reasonable
> thing to do especially now that I have little time to work on QEMU.
>
> If there's interest in doing the tsan work first, then I'd be
> happy to send to the list as soon as this weekend the changes that
> I have so far [1].

Please do - I've been cleaning up some of the other things the
sanitizer has found, and it certainly won't hurt to get thread
sanitizer working again.

--
Alex Bennée
On Fri, 27 Mar 2020 at 01:14, Emilio G. Cota <cota@braap.org> wrote:
>
> (Apologies if I missed some Cc's; I was not Cc'ed in patch 0
> so I'm blindly crafting a reply.)

Sorry I forgot to include you in patch 0, my bad. I will be sure to
include you in the future.

> On Thu, Mar 26, 2020 at 15:30:43 -0400, Robert Foley wrote:
> > This is a continuation of the series created by Emilio Cota.
> > We are picking up this patch set with the goal to apply
> > any fixes or updates needed to get this accepted.
>
> Thanks for picking this up!
>
> > Listed below are the changes for this version of the patch,
> > aside from the merge related changes.
> >
> > Changes for V8:
> > - Fixed issue where in rr mode we could destroy the BQL twice.
>
> I remember doing little to no testing in record-replay mode, so
> there should be more bugs hiding in there :-)

Thanks for the tip! We will give record-replay some extra testing to
hopefully shake some things out. :)

> > - Found/fixed bug that had been hit in testing previously during
> >   the last consideration of this patch.
> >   We reproduced the issue hit in the qtest: bios-tables-test.
> >   The issue was introduced by dropping the BQL, and found us
> >   (very rarely) missing the condition variable wakeup in
> >   qemu_tcg_rr_cpu_thread_fn().
>
> Aah, this one:
> https://patchwork.kernel.org/patch/10838149/#22516931
> How did you identify the problem? Was it code inspection or using a tool
> like rr? I remember this being hard to reproduce reliably.

Same here, it was hard to reproduce. I did try to use rr on some shorter
runs, but no luck there. We ran it overnight on one of our ARM servers
and it eventually reproduced after about 12 hours in a loop across all
the bios-tables-test(s) (no rr). We never got it to reproduce on an x86
server. It was fairly consistent on the same ARM host: it always
reproduced within 8-12 hours or so, and we were able to reproduce it
several times.

Thanks & Regards,
-Rob
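[The missed-wakeup bug discussed above belongs to a classic class of condition-variable bugs: the signal fires between the waiter's predicate check and its call to wait, and is lost. A minimal sketch of the defensive idiom - predicate written and signalled under the same mutex, re-tested in a loop by the waiter - follows. This is illustrative pthreads code; the names are not from the QEMU patch.]

```c
#include <pthread.h>
#include <stdbool.h>

/* The wakeup cannot be lost: 'work_ready' is written under the same
 * mutex that protects the wait, so either the waiter sees the flag
 * before sleeping, or it is already waiting when the signal fires. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool work_ready;

void post_work(void)
{
    pthread_mutex_lock(&lock);
    work_ready = true;           /* set the predicate under the lock... */
    pthread_cond_signal(&cond);  /* ...then signal                      */
    pthread_mutex_unlock(&lock);
}

static void *waiter(void *arg)
{
    pthread_mutex_lock(&lock);
    while (!work_ready) {        /* loop guards against spurious wakeups */
        pthread_cond_wait(&cond, &lock);
    }
    work_ready = false;
    pthread_mutex_unlock(&lock);
    return (void *)1;
}

int run_handshake(void)
{
    pthread_t t;
    void *ret;

    pthread_create(&t, NULL, waiter, NULL);
    post_work();
    pthread_join(t, &ret);
    return ret != NULL;
}
```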
On Fri, Mar 27, 2020 at 11:59:37AM +0100, Philippe Mathieu-Daudé wrote:
> On 3/27/20 6:14 AM, Emilio G. Cota wrote:
> > (Apologies if I missed some Cc's; I was not Cc'ed in patch 0
> > so I'm blindly crafting a reply.)
> >
> > On Thu, Mar 26, 2020 at 15:30:43 -0400, Robert Foley wrote:
> > > This is a continuation of the series created by Emilio Cota.
> > > We are picking up this patch set with the goal to apply
> > > any fixes or updates needed to get this accepted.
> >
> > Thanks for picking this up!
> >
> > > Listed below are the changes for this version of the patch,
> > > aside from the merge related changes.
> > >
> > > Changes for V8:
> > > - Fixed issue where in rr mode we could destroy the BQL twice.
> >
> > I remember doing little to no testing in record-replay mode, so
> > there should be more bugs hiding in there :-)
> >
> > > - Found/fixed bug that had been hit in testing previously during
> > >   the last consideration of this patch.
> > >   We reproduced the issue hit in the qtest: bios-tables-test.
> > >   The issue was introduced by dropping the BQL, and found us
> > >   (very rarely) missing the condition variable wakeup in
> > >   qemu_tcg_rr_cpu_thread_fn().
> >
> > Aah, this one:
> > https://patchwork.kernel.org/patch/10838149/#22516931
> > How did you identify the problem? Was it code inspection or using a tool
> > like rr? I remember this being hard to reproduce reliably.
> >
> > On a related note, I've done some work to get QEMU-system to work
> > under thread sanitizer, since tsan now supports our longjmp-based
> > coroutines (hurrah!). My idea was to integrate tsan in QEMU (i.e.
> > bring tsan warnings to 0) before (re)trying to merge the
> > per-CPU lock patchset; this would minimize the potential for
> > regressions, which from my personal viewpoint seems like a reasonable
> > thing to do especially now that I have little time to work on QEMU.
> >
> > If there's interest in doing the tsan work first, then I'd be
> > happy to send to the list as soon as this weekend the changes that
> > I have so far [1].
>
> I'm pretty sure Marc-André is interested (and also Stefan maybe), so
> Cc'ing them.

Yes, please! tsan would be another good tool to have.

Stefan
Robert Foley <robert.foley@linaro.org> writes:

> V7: https://lists.gnu.org/archive/html/qemu-devel/2019-03/msg00786.html
>
> This is a continuation of the series created by Emilio Cota.
> We are picking up this patch set with the goal to apply
> any fixes or updates needed to get this accepted.
>
> Quoting an earlier patch in the series:
> "For context, the goal of this series is to substitute the BQL for the
> per-CPU locks in many places, notably the execution loop in cpus.c.
> This leads to better scalability for MTTCG, since CPUs don't have
> to acquire a contended global lock (the BQL) every time they
> stop executing code.
> See the last commit for some performance numbers."

Aside from some minor comments I think this series is pretty good to go
and would favour an early merging so we get plenty of time to shake out
any bugs. I've been hammering this with my looped build test and
everything seems pretty stable. So for me have a:

Tested-by: Alex Bennée <alex.bennee@linaro.org>

for the series.

--
Alex Bennée
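[As a rough illustration of why per-CPU locks improve MTTCG scalability - a toy sketch only, not the actual locking scheme from the series - each "vCPU" below keeps its own state under its own mutex, analogous to a per-CPU lock, instead of serializing every update on one global lock (the BQL analogue). Threads then never contend on the hot path; a cross-CPU reader takes the individual lock only when it needs another CPU's state.]

```c
#include <pthread.h>

#define NCPU  4
#define ITERS 100000

/* Toy vCPU: private lock plus the state it protects. */
struct vcpu {
    pthread_mutex_t lock;
    long icount;
};

static struct vcpu cpus[NCPU];

static void *cpu_thread(void *arg)
{
    struct vcpu *cpu = arg;

    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&cpu->lock);   /* private lock: no contention */
        cpu->icount++;
        pthread_mutex_unlock(&cpu->lock);
    }
    return NULL;
}

long run_vcpus(void)
{
    pthread_t threads[NCPU];
    long total = 0;

    for (int i = 0; i < NCPU; i++) {
        pthread_mutex_init(&cpus[i].lock, NULL);
        cpus[i].icount = 0;
        pthread_create(&threads[i], NULL, cpu_thread, &cpus[i]);
    }
    for (int i = 0; i < NCPU; i++) {
        pthread_join(threads[i], NULL);
        /* cross-CPU access still takes that CPU's lock */
        pthread_mutex_lock(&cpus[i].lock);
        total += cpus[i].icount;
        pthread_mutex_unlock(&cpus[i].lock);
    }
    return total;
}
```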