From patchwork Thu Mar 27 01:51:26 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kyle McMartin
X-Patchwork-Id: 27172
Date: Wed, 26 Mar 2014 21:51:26 -0400
From: Kyle McMartin
To: gdb-patches@sourceware.org
Subject: [PATCHv2] aarch64: detect atomic sequences like other ll/sc architectures
Message-ID: <20140327015125.GE3075@redacted.bos.redhat.com>

Add support for single-stepping over atomic sequences, as other
load-locked/store-conditional architectures (alpha, powerpc, arm, etc.)
do.  Verified that decode_masked_match and decode_bcond work against the
atomic sequences used in the Linux kernel's atomic.h and in GCC's
libatomic.  Thanks to Richard Henderson for feedback on my initial
attempt at this patch, and to gdb-patches for the review comments, which
I hope I've addressed...

2014-03-26  Kyle McMartin

gdb:
	* aarch64-tdep.c (aarch64_software_single_step): New function.
	(aarch64_gdbarch_init): Handle single stepping of atomic sequences
	with aarch64_software_single_step.

gdb/testsuite:
	* gdb.arch/aarch64-atomic-inst.c: New file.
	* gdb.arch/aarch64-atomic-inst.exp: New file.
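For readers unfamiliar with the exclusive-access encodings, here is a
minimal, standalone sketch of the mask/pattern check that the sequence
detection below relies on.  The mask and pattern constants are the same
ones the patch passes to decode_masked_match; the helper name, the sample
encoding, and the small test harness around it are purely illustrative and
are not part of the patch.

  #include <stdint.h>
  #include <stdio.h>

  /* Return nonzero if INSN matches PATTERN under MASK.  */
  static int
  masked_match (uint32_t insn, uint32_t mask, uint32_t pattern)
  {
    return (insn & mask) == pattern;
  }

  int
  main (void)
  {
    /* "ldxr x0, [x2]" assembles to 0xc85f7c40.  Under the mask 0x3fc00000
       it matches the load-exclusive pattern (0x08400000) that opens an
       atomic sequence, and not the store-exclusive pattern (0x08000000)
       that closes one.  */
    uint32_t insn = 0xc85f7c40;

    printf ("load-exclusive:  %d\n",
            masked_match (insn, 0x3fc00000, 0x08400000));
    printf ("store-exclusive: %d\n",
            masked_match (insn, 0x3fc00000, 0x08000000));
    return 0;
  }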
--- a/gdb/aarch64-tdep.c
+++ b/gdb/aarch64-tdep.c
@@ -2509,6 +2509,83 @@ value_of_aarch64_user_reg (struct frame_info *frame, const void *baton)
 }
 
+/* Implement the "software_single_step" gdbarch method, needed to
+   single step through atomic sequences on AArch64.  */
+
+static int
+aarch64_software_single_step (struct frame_info *frame)
+{
+  struct gdbarch *gdbarch = get_frame_arch (frame);
+  struct address_space *aspace = get_frame_address_space (frame);
+  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+  const int insn_size = 4;
+  const int atomic_sequence_length = 16; /* Instruction sequence length.  */
+  CORE_ADDR pc = get_frame_pc (frame);
+  CORE_ADDR breaks[2] = { -1, -1 };
+  CORE_ADDR loc = pc;
+  CORE_ADDR closing_insn = 0;
+  uint32_t insn = read_memory_unsigned_integer (loc, insn_size, byte_order);
+  int index;
+  int insn_count;
+  int bc_insn_count = 0; /* Conditional branch instruction count.  */
+  int last_breakpoint = 0; /* Defaults to 0 (no breakpoints placed).  */
+
+  /* Look for a Load Exclusive instruction which begins the sequence.  */
+  if (!decode_masked_match (insn, 0x3fc00000, 0x08400000))
+    return 0;
+
+  for (insn_count = 0; insn_count < atomic_sequence_length; ++insn_count)
+    {
+      int32_t offset;
+      unsigned cond;
+
+      loc += insn_size;
+      insn = read_memory_unsigned_integer (loc, insn_size, byte_order);
+
+      /* Check if the instruction is a conditional branch.  */
+      if (decode_bcond (loc, insn, &cond, &offset))
+	{
+	  if (bc_insn_count >= 1)
+	    return 0;
+
+	  /* It is, so we'll try to set a breakpoint at the destination.  */
+	  breaks[1] = loc + offset;
+
+	  bc_insn_count++;
+	  last_breakpoint++;
+	}
+
+      /* Look for the Store Exclusive which closes the atomic sequence.  */
+      if (decode_masked_match (insn, 0x3fc00000, 0x08000000))
+	{
+	  closing_insn = loc;
+	  break;
+	}
+    }
+
+  /* We didn't find a closing Store Exclusive instruction, fall back.  */
+  if (!closing_insn)
+    return 0;
+
+  /* Insert breakpoint after the end of the atomic sequence.  */
+  breaks[0] = loc + insn_size;
+
+  /* Check for duplicated breakpoints, and also check that the second
+     breakpoint is not within the atomic sequence.  */
+  if (last_breakpoint
+      && (breaks[1] == breaks[0]
+	  || (breaks[1] >= pc && breaks[1] <= closing_insn)))
+    last_breakpoint = 0;
+
+  /* Insert the breakpoint at the end of the sequence, and one at the
+     destination of the conditional branch, if it exists.  */
+  for (index = 0; index <= last_breakpoint; index++)
+    insert_single_step_breakpoint (gdbarch, aspace, breaks[index]);
+
+  return 1;
+}
+
 /* Initialize the current architecture based on INFO.  If possible,
    re-use an architecture from ARCHES, which is a list of
    architectures already created during this debugging session.
@@ -2624,6 +2701,7 @@ aarch64_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
   set_gdbarch_breakpoint_from_pc (gdbarch, aarch64_breakpoint_from_pc);
   set_gdbarch_cannot_step_breakpoint (gdbarch, 1);
   set_gdbarch_have_nonsteppable_watchpoint (gdbarch, 1);
+  set_gdbarch_software_single_step (gdbarch, aarch64_software_single_step);
 
   /* Information about registers, etc.  */
   set_gdbarch_sp_regnum (gdbarch, AARCH64_SP_REGNUM);
--- /dev/null
+++ b/gdb/testsuite/gdb.arch/aarch64-atomic-inst.c
@@ -0,0 +1,50 @@
+/* This file is part of GDB, the GNU debugger.
+
+   Copyright 2008-2014 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include
+
+int main()
+{
+  unsigned long tmp, cond;
+  unsigned long dword = 0;
+
+  /* Test that we can step over ldxr/stxr.  This sequence should step from
+     ldxr to the following __asm __volatile.  */
+  __asm __volatile ("1:     ldxr    %0,%2\n"      \
+                    "       cmp     %0,#1\n"      \
+                    "       b.eq    out\n"        \
+                    "       add     %0,%0,1\n"    \
+                    "       stxr    %w1,%0,%2\n"  \
+                    "       cbnz    %w1,1b"       \
+                    : "=&r" (tmp), "=&r" (cond), "+Q" (dword) \
+                    : : "memory");
+
+  /* This sequence should take the conditional branch and step from ldxr
+     to the return dword line.  */
+  __asm __volatile ("1:     ldxr    %0,%2\n"      \
+                    "       cmp     %0,#1\n"      \
+                    "       b.eq    out\n"        \
+                    "       add     %0,%0,1\n"    \
+                    "       stxr    %w1,%0,%2\n"  \
+                    "       cbnz    %w1,1b\n"     \
+                    : "=&r" (tmp), "=&r" (cond), "+Q" (dword) \
+                    : : "memory");
+
+  dword = -1;
+__asm __volatile ("out:\n");
+  return dword;
+}
--- /dev/null
+++ b/gdb/testsuite/gdb.arch/aarch64-atomic-inst.exp
@@ -0,0 +1,58 @@
+# Copyright 2008-2014 Free Software Foundation, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+#
+# This file is part of the gdb testsuite.
+
+# Test single stepping through atomic sequences beginning with
+# a ldxr instruction and ending with a stxr instruction.
+
+if {![istarget "aarch64*"]} {
+    verbose "Skipping testing of aarch64 single stepping over atomic sequences."
+    return
+}
+
+set testfile "aarch64-atomic-inst"
+set srcfile ${testfile}.c
+set binfile ${objdir}/${subdir}/${testfile}
+set compile_flags {debug quiet}
+
+if { [gdb_compile "${srcdir}/${subdir}/${srcfile}" "${binfile}" executable $compile_flags] != "" } {
+    unsupported "Testcase compile failed."
+    return -1
+}
+
+gdb_exit
+gdb_start
+gdb_reinitialize_dir $srcdir/$subdir
+gdb_load ${binfile}
+
+if ![runto_main] then {
+    perror "Couldn't run to breakpoint"
+    continue
+}
+
+set bp1 [gdb_get_line_number "ldxr"]
+gdb_breakpoint "$bp1" "Breakpoint $decimal at $hex" \
+    "Set the breakpoint at the start of the sequence"
+
+gdb_test continue "Continuing.*Breakpoint $decimal.*" \
+    "Continue until breakpoint"
+
+gdb_test next ".*__asm __volatile.*" \
+    "Step through the ldxr/stxr sequence"
+
+gdb_test next ".*return dword.*" \
+    "Stepped through sequence through conditional branch"
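For anyone wanting to try the patch, the new test can typically be
exercised on an aarch64 target from a configured GDB build directory with
the usual DejaGnu invocation, e.g.:

  make check RUNTESTFLAGS="gdb.arch/aarch64-atomic-inst.exp"

(the exact invocation depends on the local build setup).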