From patchwork Sun Feb 8 12:02:36 2015
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 44496
From: Daniel Thompson
To: Thomas Gleixner, John Stultz
Cc: Daniel Thompson, linux-kernel@vger.kernel.org, patches@linaro.org, linaro-kernel@lists.linaro.org, Sumit Semwal, Stephen Boyd, Steven Rostedt,
 Russell King, Will Deacon, Catalin Marinas
Subject: [PATCH v4 1/5] sched_clock: Match scope of read and write seqcounts
Date: Sun, 8 Feb 2015 20:02:36 +0800
Message-Id: <1423396960-4824-2-git-send-email-daniel.thompson@linaro.org>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1423396960-4824-1-git-send-email-daniel.thompson@linaro.org>
References: <1421859236-19782-1-git-send-email-daniel.thompson@linaro.org>
 <1423396960-4824-1-git-send-email-daniel.thompson@linaro.org>

Currently the scope of the raw_write_seqcount_begin/end() in sched_clock_register() far exceeds the scope of the read section in sched_clock(). This gives the impression of safety during cursory review but achieves little.

Note that this is likely to be a latent issue at present, because sched_clock_register() is typically called before interrupts are enabled. However, it does risk bugs being needlessly introduced as the code evolves.

This patch fixes the problem by increasing the scope of the read locking performed by sched_clock() to cover all the data modified by sched_clock_register(). We also improve clarity by moving writes to struct clock_data that do not affect sched_clock() outside of the critical section.
Signed-off-by: Daniel Thompson
Cc: Russell King
Cc: Will Deacon
Cc: Catalin Marinas
---
 kernel/time/sched_clock.c | 25 +++++++++++--------------
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index 01d2d15aa662..3d21a8719444 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -58,23 +58,21 @@ static inline u64 notrace cyc_to_ns(u64 cyc, u32 mult, u32 shift)
 
 unsigned long long notrace sched_clock(void)
 {
-	u64 epoch_ns;
-	u64 epoch_cyc;
-	u64 cyc;
+	u64 cyc, res;
 	unsigned long seq;
 
-	if (cd.suspended)
-		return cd.epoch_ns;
-
 	do {
 		seq = raw_read_seqcount_begin(&cd.seq);
-		epoch_cyc = cd.epoch_cyc;
-		epoch_ns = cd.epoch_ns;
+
+		res = cd.epoch_ns;
+		if (!cd.suspended) {
+			cyc = read_sched_clock();
+			cyc = (cyc - cd.epoch_cyc) & sched_clock_mask;
+			res += cyc_to_ns(cyc, cd.mult, cd.shift);
+		}
 	} while (read_seqcount_retry(&cd.seq, seq));
 
-	cyc = read_sched_clock();
-	cyc = (cyc - epoch_cyc) & sched_clock_mask;
-	return epoch_ns + cyc_to_ns(cyc, cd.mult, cd.shift);
+	return res;
 }
 
 /*
@@ -124,10 +122,11 @@ void __init sched_clock_register(u64 (*read)(void), int bits,
 	clocks_calc_mult_shift(&new_mult, &new_shift, rate, NSEC_PER_SEC, 3600);
 
 	new_mask = CLOCKSOURCE_MASK(bits);
+	cd.rate = rate;
 
 	/* calculate how many ns until we wrap */
 	wrap = clocks_calc_max_nsecs(new_mult, new_shift, 0, new_mask);
-	new_wrap_kt = ns_to_ktime(wrap - (wrap >> 3));
+	cd.wrap_kt = ns_to_ktime(wrap - (wrap >> 3));
 
 	/* update epoch for new counter and update epoch_ns from old counter*/
 	new_epoch = read();
@@ -138,8 +137,6 @@ void __init sched_clock_register(u64 (*read)(void), int bits,
 	raw_write_seqcount_begin(&cd.seq);
 	read_sched_clock = read;
 	sched_clock_mask = new_mask;
-	cd.rate = rate;
-	cd.wrap_kt = new_wrap_kt;
 	cd.mult = new_mult;
 	cd.shift = new_shift;
 	cd.epoch_cyc = new_epoch;