[for-2.0?,2/2] cpu-exec: Unlock tb_lock if we longjmp out of code generation

Message ID 1396543508-12280-3-git-send-email-peter.maydell@linaro.org
State Superseded
Headers show

Commit Message

Peter Maydell April 3, 2014, 4:45 p.m. UTC
If the guest attempts to execute from unreadable memory, this will
cause us to longjmp back to the main loop from inside the
target frontend decoder. For linux-user mode, this means we will
still hold the tb_ctx.tb_lock, and will deadlock when we try to
start executing code again. Unlock the lock in the return-from-longjmp
code path to avoid this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 cpu-exec.c | 7 +++++++
 1 file changed, 7 insertions(+)
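
To make the failure mode concrete outside of QEMU, here is a minimal,
self-contained sketch of the pattern the patch adopts; it is not QEMU
code, only the shape of the fix. Everything in it is a stand-in: a
pthread mutex plays the role of tcg_ctx.tb_ctx.tb_lock, plain
setjmp()/longjmp() plays the role of cpu_loop_exit(), and
decode_guest_code() is a hypothetical decoder that bails out on a
fault. The important part is the volatile flag, which survives the
longjmp() and tells the recovery path whether the lock is still held.

#include <pthread.h>
#include <setjmp.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t tb_lock = PTHREAD_MUTEX_INITIALIZER;
static jmp_buf exec_loop_env;

/* Stand-in for the target frontend decoder: a fault longjmps out. */
static void decode_guest_code(bool fault)
{
    if (fault) {
        longjmp(exec_loop_env, 1);
    }
}

static void exec_loop(void)
{
    /* Must be volatile so it is not trashed by longjmp(). */
    volatile bool have_tb_lock = false;

    for (int round = 0; round < 2; round++) {
        if (setjmp(exec_loop_env) == 0) {
            pthread_mutex_lock(&tb_lock);
            have_tb_lock = true;

            /* The first round faults while the lock is held. */
            decode_guest_code(round == 0);

            have_tb_lock = false;
            pthread_mutex_unlock(&tb_lock);
            printf("round %d: decoded cleanly\n", round);
        } else {
            /*
             * Return-from-longjmp path: without this unlock the next
             * pthread_mutex_lock() above would hang, which is the
             * linux-user deadlock the patch fixes.
             */
            if (have_tb_lock) {
                have_tb_lock = false;
                pthread_mutex_unlock(&tb_lock);
            }
            printf("round %d: recovered, lock released\n", round);
        }
    }
}

int main(void)
{
    exec_loop();
    return 0;
}

Built with something like gcc -pthread, the first iteration longjmps
out with the lock held and the recovery branch releases it, so the
second iteration can take the lock again instead of deadlocking.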

Comments

Richard Henderson April 3, 2014, 4:51 p.m. UTC | #1
On 04/03/2014 09:45 AM, Peter Maydell wrote:
> +            if (have_tb_lock) {
> +                spin_unlock(&tcg_ctx.tb_ctx.tb_lock);
> +            }

It ought not matter, since we ought to exit the loop on the next round, but I
have a strong preference for resetting have_tb_lock here.

Otherwise, this is nicely self-contained.


r~
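
For concreteness, the unlock path with the reset Richard asks for
might look like this; it reuses the identifiers from the patch below
but is only a sketch, not a posted revision:

if (have_tb_lock) {
    spin_unlock(&tcg_ctx.tb_ctx.tb_lock);
    /* Keep the flag in sync with the lock so it cannot go stale if
     * this point is reached again before the lock is retaken. */
    have_tb_lock = false;
}
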
Peter Maydell April 3, 2014, 4:53 p.m. UTC | #2
On 3 April 2014 17:51, Richard Henderson <rth@twiddle.net> wrote:
> On 04/03/2014 09:45 AM, Peter Maydell wrote:
>> +            if (have_tb_lock) {
>> +                spin_unlock(&tcg_ctx.tb_ctx.tb_lock);
>> +            }
>
> It ought not matter, since we ought to exit the loop on the
> next round, but I have a strong preference for resetting
> have_tb_lock here.

Absolutely -- dumb oversight on my part.

thanks
-- PMM
Andrei Warkentin April 3, 2014, 7:38 p.m. UTC | #3
Hiya,

Cool. Definitely more compact and less intrusive, and definitely
should catch more issues than the original page->flags check. The only
possible cost is maintenance and debugging (implicit state and all
that)... so... How about adding a comment around the "if
(have_tb_lock)" to explain how we can get there?

Acked-by: Andrei Warkentin <andrey.warkentin@gmail.com>

2014-04-03 12:53 GMT-04:00 Peter Maydell <peter.maydell@linaro.org>:
> On 3 April 2014 17:51, Richard Henderson <rth@twiddle.net> wrote:
>> On 04/03/2014 09:45 AM, Peter Maydell wrote:
>>> +            if (have_tb_lock) {
>>> +                spin_unlock(&tcg_ctx.tb_ctx.tb_lock);
>>> +            }
>>
>> It ought not matter, since we ought to exit the loop on the
>> next round, but I have a strong preference for resetting
>> have_tb_lock here.
>
> Absolutely -- dumb oversight on my part.
>
> thanks
> -- PMM
Peter Maydell April 3, 2014, 11:24 p.m. UTC | #4
On 3 April 2014 20:38, Andrei E. Warkentin <andrey.warkentin@gmail.com> wrote:
> Hiya,
>
> Cool. Definitely more compact and less intrusive, and definitely
> should catch more issues than the original page->flags check. The only
> possible cost is maintenance and debugging (implicit state and all
> that)... so... How about adding a comment around the "if
> (have_tb_lock)" to explain how we can get there?

I dunno, it seems fairly obvious to me that if you get to this point
with have_tb_lock true then it must be because you longjumped
out of the codegen. (This happens for softmmu as well as linux-user;
it's just that softmmu doesn't actually do any tb locking.)

thanks
-- PMM
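
If a comment of the kind Andrei suggests were added anyway, it might
read along these lines (hypothetical wording, not part of the posted
patch):

/*
 * have_tb_lock can only still be set here if we longjmp()ed out of
 * code generation (for instance because the guest tried to execute
 * from unreadable memory), so drop the lock before looping again.
 */
if (have_tb_lock) {
    have_tb_lock = false;
    spin_unlock(&tcg_ctx.tb_ctx.tb_lock);
}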

Patch

diff --git a/cpu-exec.c b/cpu-exec.c
index 0914d3c..02168d9 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -227,6 +227,8 @@  int cpu_exec(CPUArchState *env)
     TranslationBlock *tb;
     uint8_t *tc_ptr;
     uintptr_t next_tb;
+    /* This must be volatile so it is not trashed by longjmp() */
+    volatile bool have_tb_lock = false;
 
     if (cpu->halted) {
         if (!cpu_has_work(cpu)) {
@@ -600,6 +602,7 @@  int cpu_exec(CPUArchState *env)
                     cpu_loop_exit(cpu);
                 }
                 spin_lock(&tcg_ctx.tb_ctx.tb_lock);
+                have_tb_lock = true;
                 tb = tb_find_fast(env);
                 /* Note: we do it here to avoid a gcc bug on Mac OS X when
                    doing it in tb_find_slow */
@@ -621,6 +624,7 @@  int cpu_exec(CPUArchState *env)
                     tb_add_jump((TranslationBlock *)(next_tb & ~TB_EXIT_MASK),
                                 next_tb & TB_EXIT_MASK, tb);
                 }
+                have_tb_lock = false;
                 spin_unlock(&tcg_ctx.tb_ctx.tb_lock);
 
                 /* cpu_interrupt might be called while translating the
@@ -692,6 +696,9 @@  int cpu_exec(CPUArchState *env)
 #ifdef TARGET_I386
             x86_cpu = X86_CPU(cpu);
 #endif
+            if (have_tb_lock) {
+                spin_unlock(&tcg_ctx.tb_ctx.tb_lock);
+            }
         }
     } /* for(;;) */