From patchwork Wed Dec  7 20:53:51 2011
X-Patchwork-Submitter: Ulrich Weigand
X-Patchwork-Id: 5534
Message-Id: <201112072053.pB7KrpfQ024229@d06av02.portsmouth.uk.ibm.com>
Subject: [commit, arm] Handle atomic sequences
To: gdb-patches@sourceware.org
Date: Wed, 7 Dec 2011 21:53:51 +0100 (CET)
From: "Ulrich Weigand" <uweigand@de.ibm.com>
Cc: patches@linaro.org

Hello,

as seen recently on other platforms, the new GCC simulate-thread.exp
test case exposes problems single-stepping atomic code sequences on
ARM.  This patch implements support for stepping across such sequences
as a whole, just as is already done on several other platforms.

Tested by running the GDB test suite, as well as by running the GCC
simulate-thread.exp test case (both -mthumb and -marm).

Committed to mainline.

Bye,
Ulrich

ChangeLog:

	* arm-tdep.h (arm_deal_with_atomic_sequence): Add prototype.
	* arm-tdep.c (thumb_deal_with_atomic_sequence_raw): New function.
	(arm_deal_with_atomic_sequence_raw): Likewise.
	(arm_deal_with_atomic_sequence): Likewise.
	(arm_software_single_step): Call it.
	* arm-linux-tdep.c (arm_linux_software_single_step): Likewise.

Index: gdb-head/gdb/arm-linux-tdep.c
===================================================================
--- gdb-head.orig/gdb/arm-linux-tdep.c	2011-06-15 19:16:57.000000000 +0200
+++ gdb-head/gdb/arm-linux-tdep.c	2011-12-07 16:51:02.000000000 +0100
@@ -843,7 +843,12 @@
 {
   struct gdbarch *gdbarch = get_frame_arch (frame);
   struct address_space *aspace = get_frame_address_space (frame);
-  CORE_ADDR next_pc = arm_get_next_pc (frame, get_frame_pc (frame));
+  CORE_ADDR next_pc;
+
+  if (arm_deal_with_atomic_sequence (frame))
+    return 1;
+
+  next_pc = arm_get_next_pc (frame, get_frame_pc (frame));
 
   /* The Linux kernel offers some user-mode helpers in a high page.  We can
      not read this page (as of 2.6.23), and even if we could then we couldn't
Index: gdb-head/gdb/arm-tdep.c
===================================================================
--- gdb-head.orig/gdb/arm-tdep.c	2011-12-01 19:43:08.000000000 +0100
+++ gdb-head/gdb/arm-tdep.c	2011-12-07 21:14:14.000000000 +0100
@@ -4879,6 +4879,226 @@
   do_cleanups (old_chain);
 }
 
+/* Checks for an atomic sequence of instructions beginning with a LDREX{,B,H,D}
+   instruction and ending with a STREX{,B,H,D} instruction.  If such a sequence
+   is found, attempt to step through it.  A breakpoint is placed at the end of
+   the sequence.  */
+
+static int
+thumb_deal_with_atomic_sequence_raw (struct frame_info *frame)
+{
+  struct gdbarch *gdbarch = get_frame_arch (frame);
+  struct address_space *aspace = get_frame_address_space (frame);
+  enum bfd_endian byte_order_for_code = gdbarch_byte_order_for_code (gdbarch);
+  CORE_ADDR pc = get_frame_pc (frame);
+  CORE_ADDR breaks[2] = {-1, -1};
+  CORE_ADDR loc = pc;
+  unsigned short insn1, insn2;
+  int insn_count;
+  int index;
+  int last_breakpoint = 0; /* Defaults to 0 (no breakpoints placed).  */
+  const int atomic_sequence_length = 16; /* Instruction sequence length.  */
+  ULONGEST status, itstate;
+
+  /* We currently do not support atomic sequences within an IT block.  */
+  status = get_frame_register_unsigned (frame, ARM_PS_REGNUM);
+  itstate = ((status >> 8) & 0xfc) | ((status >> 25) & 0x3);
+  if (itstate & 0x0f)
+    return 0;
+
+  /* Assume all atomic sequences start with a ldrex{,b,h,d} instruction.  */
+  insn1 = read_memory_unsigned_integer (loc, 2, byte_order_for_code);
+  loc += 2;
+  if (thumb_insn_size (insn1) != 4)
+    return 0;
+
+  insn2 = read_memory_unsigned_integer (loc, 2, byte_order_for_code);
+  loc += 2;
+  if (!((insn1 & 0xfff0) == 0xe850
+	|| ((insn1 & 0xfff0) == 0xe8d0 && (insn2 & 0x00c0) == 0x0040)))
+    return 0;
+
+  /* Assume that no atomic sequence is longer than "atomic_sequence_length"
+     instructions.  */
+  for (insn_count = 0; insn_count < atomic_sequence_length; ++insn_count)
+    {
+      insn1 = read_memory_unsigned_integer (loc, 2, byte_order_for_code);
+      loc += 2;
+
+      if (thumb_insn_size (insn1) != 4)
+	{
+	  /* Assume that there is at most one conditional branch in the
+	     atomic sequence.  If a conditional branch is found, put a
+	     breakpoint in its destination address.  */
+	  if ((insn1 & 0xf000) == 0xd000 && bits (insn1, 8, 11) != 0x0f)
+	    {
+	      if (last_breakpoint > 0)
+		return 0; /* More than one conditional branch found,
+			     fallback to the standard code.  */
+
+	      breaks[1] = loc + 2 + (sbits (insn1, 0, 7) << 1);
+	      last_breakpoint++;
+	    }
+
+	  /* We do not support atomic sequences that use any *other*
+	     instructions but conditional branches to change the PC.
+	     Fall back to standard code to avoid losing control of
+	     execution.  */
+	  else if (thumb_instruction_changes_pc (insn1))
+	    return 0;
+	}
+      else
+	{
+	  insn2 = read_memory_unsigned_integer (loc, 2, byte_order_for_code);
+	  loc += 2;
+
+	  /* Assume that there is at most one conditional branch in the
+	     atomic sequence.  If a conditional branch is found, put a
+	     breakpoint in its destination address.  */
+	  if ((insn1 & 0xf800) == 0xf000
+	      && (insn2 & 0xd000) == 0x8000
+	      && (insn1 & 0x0380) != 0x0380)
+	    {
+	      int sign, j1, j2, imm1, imm2;
+	      unsigned int offset;
+
+	      sign = sbits (insn1, 10, 10);
+	      imm1 = bits (insn1, 0, 5);
+	      imm2 = bits (insn2, 0, 10);
+	      j1 = bit (insn2, 13);
+	      j2 = bit (insn2, 11);
+
+	      offset = (sign << 20) + (j2 << 19) + (j1 << 18);
+	      offset += (imm1 << 12) + (imm2 << 1);
+
+	      if (last_breakpoint > 0)
+		return 0; /* More than one conditional branch found,
+			     fallback to the standard code.  */
+
+	      breaks[1] = loc + offset;
+	      last_breakpoint++;
+	    }
+
+	  /* We do not support atomic sequences that use any *other*
+	     instructions but conditional branches to change the PC.
+	     Fall back to standard code to avoid losing control of
+	     execution.  */
+	  else if (thumb2_instruction_changes_pc (insn1, insn2))
+	    return 0;
+
+	  /* If we find a strex{,b,h,d}, we're done.  */
+	  if ((insn1 & 0xfff0) == 0xe840
+	      || ((insn1 & 0xfff0) == 0xe8c0 && (insn2 & 0x00c0) == 0x0040))
+	    break;
+	}
+    }
+
+  /* If we didn't find the strex{,b,h,d}, we cannot handle the sequence.  */
+  if (insn_count == atomic_sequence_length)
+    return 0;
+
+  /* Insert a breakpoint right after the end of the atomic sequence.  */
+  breaks[0] = loc;
+
+  /* Check for duplicated breakpoints.  Check also for a breakpoint
+     placed (branch instruction's destination) anywhere in sequence.  */
+  if (last_breakpoint
+      && (breaks[1] == breaks[0]
+	  || (breaks[1] >= pc && breaks[1] < loc)))
+    last_breakpoint = 0;
+
+  /* Effectively inserts the breakpoints.  */
+  for (index = 0; index <= last_breakpoint; index++)
+    arm_insert_single_step_breakpoint (gdbarch, aspace,
+				       MAKE_THUMB_ADDR (breaks[index]));
+
+  return 1;
+}
+
+static int
+arm_deal_with_atomic_sequence_raw (struct frame_info *frame)
+{
+  struct gdbarch *gdbarch = get_frame_arch (frame);
+  struct address_space *aspace = get_frame_address_space (frame);
+  enum bfd_endian byte_order_for_code = gdbarch_byte_order_for_code (gdbarch);
+  CORE_ADDR pc = get_frame_pc (frame);
+  CORE_ADDR breaks[2] = {-1, -1};
+  CORE_ADDR loc = pc;
+  unsigned int insn;
+  int insn_count;
+  int index;
+  int last_breakpoint = 0; /* Defaults to 0 (no breakpoints placed).  */
+  const int atomic_sequence_length = 16; /* Instruction sequence length.  */
+
+  /* Assume all atomic sequences start with a ldrex{,b,h,d} instruction.
+     Note that we do not currently support conditionally executed atomic
+     instructions.  */
+  insn = read_memory_unsigned_integer (loc, 4, byte_order_for_code);
+  loc += 4;
+  if ((insn & 0xff9000f0) != 0xe1900090)
+    return 0;
+
+  /* Assume that no atomic sequence is longer than "atomic_sequence_length"
+     instructions.  */
+  for (insn_count = 0; insn_count < atomic_sequence_length; ++insn_count)
+    {
+      insn = read_memory_unsigned_integer (loc, 4, byte_order_for_code);
+      loc += 4;
+
+      /* Assume that there is at most one conditional branch in the atomic
+	 sequence.  If a conditional branch is found, put a breakpoint in
+	 its destination address.  */
+      if (bits (insn, 24, 27) == 0xa)
+	{
+	  if (last_breakpoint > 0)
+	    return 0; /* More than one conditional branch found, fallback
+			 to the standard single-step code.  */
+
+	  breaks[1] = BranchDest (loc - 4, insn);
+	  last_breakpoint++;
+	}
+
+      /* We do not support atomic sequences that use any *other* instructions
+	 but conditional branches to change the PC.  Fall back to standard
+	 code to avoid losing control of execution.  */
+      else if (arm_instruction_changes_pc (insn))
+	return 0;
+
+      /* If we find a strex{,b,h,d}, we're done.  */
+      if ((insn & 0xff9000f0) == 0xe1800090)
+	break;
+    }
+
+  /* If we didn't find the strex{,b,h,d}, we cannot handle the sequence.  */
+  if (insn_count == atomic_sequence_length)
+    return 0;
+
+  /* Insert a breakpoint right after the end of the atomic sequence.  */
+  breaks[0] = loc;
+
+  /* Check for duplicated breakpoints.  Check also for a breakpoint
+     placed (branch instruction's destination) anywhere in sequence.  */
+  if (last_breakpoint
+      && (breaks[1] == breaks[0]
+	  || (breaks[1] >= pc && breaks[1] < loc)))
+    last_breakpoint = 0;
+
+  /* Effectively inserts the breakpoints.  */
+  for (index = 0; index <= last_breakpoint; index++)
+    arm_insert_single_step_breakpoint (gdbarch, aspace, breaks[index]);
+
+  return 1;
+}
+
+int
+arm_deal_with_atomic_sequence (struct frame_info *frame)
+{
+  if (arm_frame_is_thumb (frame))
+    return thumb_deal_with_atomic_sequence_raw (frame);
+  else
+    return arm_deal_with_atomic_sequence_raw (frame);
+}
+
 /* single_step() is called just before we want to resume the inferior,
    if we want to single-step it but there is no hardware or kernel
    single-step support.  We find the target of the coming instruction
@@ -4889,8 +5109,12 @@
 {
   struct gdbarch *gdbarch = get_frame_arch (frame);
   struct address_space *aspace = get_frame_address_space (frame);
-  CORE_ADDR next_pc = arm_get_next_pc (frame, get_frame_pc (frame));
+  CORE_ADDR next_pc;
+
+  if (arm_deal_with_atomic_sequence (frame))
+    return 1;
 
+  next_pc = arm_get_next_pc (frame, get_frame_pc (frame));
   arm_insert_single_step_breakpoint (gdbarch, aspace, next_pc);
 
   return 1;
Index: gdb-head/gdb/arm-tdep.h
===================================================================
--- gdb-head.orig/gdb/arm-tdep.h	2011-06-15 19:16:58.000000000 +0200
+++ gdb-head/gdb/arm-tdep.h	2011-12-07 16:51:02.000000000 +0100
@@ -313,6 +313,7 @@
 CORE_ADDR arm_get_next_pc (struct frame_info *, CORE_ADDR);
 void arm_insert_single_step_breakpoint (struct gdbarch *,
					 struct address_space *, CORE_ADDR);
+int arm_deal_with_atomic_sequence (struct frame_info *);
 int arm_software_single_step (struct frame_info *);
 
 int arm_frame_is_thumb (struct frame_info *frame);