From patchwork Fri Mar 2 07:12:40 2012
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 7055
From: John Stultz
To: lkml
Cc: John Stultz, Ingo Molnar, Thomas Gleixner, Eric Dumazet, Richard Cochran
Subject: [PATCH 1/9] time: Condense timekeeper.xtime into xtime_sec
Date: Thu, 1 Mar 2012 23:12:40 -0800
Message-Id: <1330672368-32290-2-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.7.3.2.146.gca209
In-Reply-To: <1330672368-32290-1-git-send-email-john.stultz@linaro.org>
References: <1330672368-32290-1-git-send-email-john.stultz@linaro.org>

The timekeeper struct has an xtime_nsec field, which keeps the
sub-nanosecond remainder. This ends up being somewhat duplicative of the
timekeeper.xtime.tv_nsec value, and we have to do extra work to keep them
apart, copying the full nsec portion out and back in over and over.

This patch simplifies some of the logic by taking the timekeeper xtime
value and splitting it into timekeeper.xtime_sec, reusing
timekeeper.xtime_nsec for the sub-second portion (stored in higher-res
shifted nanoseconds).

This simplifies some of the accumulation logic, and will allow for more
accurate timekeeping once the vsyscall code is updated to use the shifted
nanosecond remainder.

CC: Ingo Molnar
CC: Thomas Gleixner
CC: Eric Dumazet
CC: Richard Cochran
Signed-off-by: John Stultz
---
(A small standalone sketch of the xtime_sec/xtime_nsec bookkeeping follows
after the diff.)

 kernel/time/timekeeping.c |  170 ++++++++++++++++++++++++++++-----------------
 1 files changed, 107 insertions(+), 63 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 403c2a0..5434c6e 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -39,8 +39,11 @@ struct timekeeper {
 	/* Raw nano seconds accumulated per NTP interval. */
 	u32	raw_interval;
 
-	/* Clock shifted nano seconds remainder not stored in xtime.tv_nsec. */
+	/* Current CLOCK_REALTIME time in seconds */
+	u64	xtime_sec;
+	/* Clock shifted nano seconds */
 	u64	xtime_nsec;
+
 	/* Difference between accumulated time and NTP time in ntp
 	 * shifted nano seconds. */
 	s64	ntp_error;
@@ -48,8 +51,6 @@ struct timekeeper {
 	 * ntp shifted nano seconds. */
 	int	ntp_error_shift;
 
-	/* The current time */
-	struct timespec xtime;
 	/*
 	 * wall_to_monotonic is what we need to add to xtime (or xtime corrected
 	 * for sub jiffie times) to get to monotonic time.  Monotonic is pegged
@@ -87,6 +88,36 @@ __cacheline_aligned_in_smp DEFINE_SEQLOCK(xtime_lock);
 
 int __read_mostly timekeeping_suspended;
 
+static inline void tk_normalize_xtime(struct timekeeper *tk)
+{
+	while (tk->xtime_nsec >= ((u64)NSEC_PER_SEC << tk->shift)) {
+		tk->xtime_nsec -= (u64)NSEC_PER_SEC << tk->shift;
+		tk->xtime_sec++;
+	}
+}
+
+static struct timespec tk_xtime(struct timekeeper *tk)
+{
+	struct timespec ts;
+
+	ts.tv_sec = tk->xtime_sec;
+	ts.tv_nsec = (long)(tk->xtime_nsec >> tk->shift);
+	return ts;
+}
+
+static void tk_set_xtime(struct timekeeper *tk, const struct timespec *ts)
+{
+	tk->xtime_sec = ts->tv_sec;
+	tk->xtime_nsec = ts->tv_nsec << tk->shift;
+}
+
+
+static void tk_xtime_add(struct timekeeper *tk, const struct timespec *ts)
+{
+	tk->xtime_sec += ts->tv_sec;
+	tk->xtime_nsec += ts->tv_nsec << tk->shift;
+}
+
 /**
  * timekeeper_setup_internals - Set up internals to use clocksource clock.
@@ -102,7 +133,9 @@ static void timekeeper_setup_internals(struct clocksource *clock)
 {
 	cycle_t interval;
 	u64 tmp, ntpinterval;
+	struct clocksource *old_clock;
 
+	old_clock = timekeeper.clock;
 	timekeeper.clock = clock;
 	clock->cycle_last = clock->read(clock);
 
@@ -124,7 +157,14 @@ static void timekeeper_setup_internals(struct clocksource *clock)
 	timekeeper.raw_interval =
 		((u64) interval * clock->mult) >> clock->shift;
 
-	timekeeper.xtime_nsec = 0;
+	/* if changing clocks, convert xtime_nsec shift units */
+	if (old_clock) {
+		int shift_change = clock->shift - old_clock->shift;
+		if (shift_change < 0)
+			timekeeper.xtime_nsec >>= -shift_change;
+		else
+			timekeeper.xtime_nsec <<= shift_change;
+	}
 	timekeeper.shift = clock->shift;
 
 	timekeeper.ntp_error = 0;
@@ -143,6 +183,7 @@ static inline s64 timekeeping_get_ns(void)
 {
 	cycle_t cycle_now, cycle_delta;
 	struct clocksource *clock;
+	s64 nsec;
 
 	/* read clocksource: */
 	clock = timekeeper.clock;
@@ -151,9 +192,8 @@ static inline s64 timekeeping_get_ns(void)
 	/* calculate the delta since the last update_wall_time: */
 	cycle_delta = (cycle_now - clock->cycle_last) & clock->mask;
 
-	/* return delta convert to nanoseconds using ntp adjusted mult. */
-	return clocksource_cyc2ns(cycle_delta, timekeeper.mult,
-				  timekeeper.shift);
+	nsec = cycle_delta * timekeeper.mult + timekeeper.xtime_nsec;
+	return nsec >> timekeeper.shift;
 }
 
 static inline s64 timekeeping_get_ns_raw(void)
@@ -175,11 +215,13 @@ static inline s64 timekeeping_get_ns_raw(void)
 /* must hold write on timekeeper.lock */
 static void timekeeping_update(bool clearntp)
 {
+	struct timespec xt;
 	if (clearntp) {
 		timekeeper.ntp_error = 0;
 		ntp_clear();
 	}
-	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
+	xt = tk_xtime(&timekeeper);
+	update_vsyscall(&xt, &timekeeper.wall_to_monotonic,
 			 timekeeper.clock, timekeeper.mult);
 }
 
@@ -189,7 +231,7 @@ void timekeeping_leap_insert(int leapsecond)
 	unsigned long flags;
 
 	write_seqlock_irqsave(&timekeeper.lock, flags);
-	timekeeper.xtime.tv_sec += leapsecond;
+	timekeeper.xtime_sec += leapsecond;
 	timekeeper.wall_to_monotonic.tv_sec -= leapsecond;
 	timekeeping_update(false);
 	write_sequnlock_irqrestore(&timekeeper.lock, flags);
@@ -214,13 +256,12 @@ static void timekeeping_forward_now(void)
 	cycle_delta = (cycle_now - clock->cycle_last) & clock->mask;
 	clock->cycle_last = cycle_now;
 
-	nsec = clocksource_cyc2ns(cycle_delta, timekeeper.mult,
-				  timekeeper.shift);
+	timekeeper.xtime_nsec += cycle_delta * timekeeper.mult;
 
 	/* If arch requires, add in gettimeoffset() */
-	nsec += arch_gettimeoffset();
+	timekeeper.xtime_nsec += arch_gettimeoffset() << timekeeper.shift;
 
-	timespec_add_ns(&timekeeper.xtime, nsec);
+	tk_normalize_xtime(&timekeeper);
 
 	nsec = clocksource_cyc2ns(cycle_delta, clock->mult, clock->shift);
 	timespec_add_ns(&timekeeper.raw_time, nsec);
@@ -235,15 +276,15 @@ static void timekeeping_forward_now(void)
 void getnstimeofday(struct timespec *ts)
 {
 	unsigned long seq;
-	s64 nsecs;
+	s64 nsecs = 0;
 
 	WARN_ON(timekeeping_suspended);
 
 	do {
 		seq = read_seqbegin(&timekeeper.lock);
 
-		*ts = timekeeper.xtime;
-		nsecs = timekeeping_get_ns();
+		ts->tv_sec = timekeeper.xtime_sec;
+		ts->tv_nsec = timekeeping_get_ns();
 
 		/* If arch requires, add in gettimeoffset() */
 		nsecs += arch_gettimeoffset();
 
@@ -264,11 +305,10 @@ ktime_t ktime_get(void)
 
 	do {
 		seq = read_seqbegin(&timekeeper.lock);
-		secs = timekeeper.xtime.tv_sec +
+		secs = timekeeper.xtime_sec +
 				timekeeper.wall_to_monotonic.tv_sec;
-		nsecs = timekeeper.xtime.tv_nsec +
+		nsecs = timekeeping_get_ns() +
 				timekeeper.wall_to_monotonic.tv_nsec;
-		nsecs += timekeeping_get_ns();
 		/* If arch requires, add in gettimeoffset() */
 		nsecs += arch_gettimeoffset();
 
@@ -293,22 +333,21 @@ void ktime_get_ts(struct timespec *ts)
 {
 	struct timespec tomono;
 	unsigned int seq;
-	s64 nsecs;
 
 	WARN_ON(timekeeping_suspended);
 
 	do {
 		seq = read_seqbegin(&timekeeper.lock);
-		*ts = timekeeper.xtime;
+		ts->tv_sec = timekeeper.xtime_sec;
+		ts->tv_nsec = timekeeping_get_ns();
 		tomono = timekeeper.wall_to_monotonic;
-		nsecs = timekeeping_get_ns();
 		/* If arch requires, add in gettimeoffset() */
-		nsecs += arch_gettimeoffset();
+		ts->tv_nsec += arch_gettimeoffset();
 
 	} while (read_seqretry(&timekeeper.lock, seq));
 
 	set_normalized_timespec(ts, ts->tv_sec + tomono.tv_sec,
-				ts->tv_nsec + tomono.tv_nsec + nsecs);
+				ts->tv_nsec + tomono.tv_nsec);
 }
 EXPORT_SYMBOL_GPL(ktime_get_ts);
@@ -336,7 +375,8 @@ void getnstime_raw_and_real(struct timespec *ts_raw, struct timespec *ts_real)
 		seq = read_seqbegin(&timekeeper.lock);
 
 		*ts_raw = timekeeper.raw_time;
-		*ts_real = timekeeper.xtime;
+		ts_real->tv_sec = timekeeper.xtime_sec;
+		ts_real->tv_nsec = 0;
 
 		nsecs_raw = timekeeping_get_ns_raw();
 		nsecs_real = timekeeping_get_ns();
@@ -379,7 +419,7 @@ EXPORT_SYMBOL(do_gettimeofday);
  */
 int do_settimeofday(const struct timespec *tv)
 {
-	struct timespec ts_delta;
+	struct timespec ts_delta, xt;
 	unsigned long flags;
 
 	if ((unsigned long)tv->tv_nsec >= NSEC_PER_SEC)
@@ -389,12 +429,15 @@ int do_settimeofday(const struct timespec *tv)
 
 	timekeeping_forward_now();
 
-	ts_delta.tv_sec = tv->tv_sec - timekeeper.xtime.tv_sec;
-	ts_delta.tv_nsec = tv->tv_nsec - timekeeper.xtime.tv_nsec;
+	xt = tk_xtime(&timekeeper);
+	ts_delta.tv_sec = tv->tv_sec - xt.tv_sec;
+	ts_delta.tv_nsec = tv->tv_nsec - xt.tv_nsec;
+
 	timekeeper.wall_to_monotonic =
 			timespec_sub(timekeeper.wall_to_monotonic, ts_delta);
 
-	timekeeper.xtime = *tv;
+	tk_set_xtime(&timekeeper, tv);
+
 	timekeeping_update(true);
 
 	write_sequnlock_irqrestore(&timekeeper.lock, flags);
@@ -425,7 +468,8 @@ int timekeeping_inject_offset(struct timespec *ts)
 
 	timekeeping_forward_now();
 
-	timekeeper.xtime = timespec_add(timekeeper.xtime, *ts);
+
+	tk_xtime_add(&timekeeper, ts);
 	timekeeper.wall_to_monotonic =
 				timespec_sub(timekeeper.wall_to_monotonic, *ts);
 
@@ -601,14 +645,12 @@ void __init timekeeping_init(void)
 		clock->enable(clock);
 	timekeeper_setup_internals(clock);
 
-	timekeeper.xtime.tv_sec = now.tv_sec;
-	timekeeper.xtime.tv_nsec = now.tv_nsec;
+	tk_set_xtime(&timekeeper, &now);
 	timekeeper.raw_time.tv_sec = 0;
 	timekeeper.raw_time.tv_nsec = 0;
-	if (boot.tv_sec == 0 && boot.tv_nsec == 0) {
-		boot.tv_sec = timekeeper.xtime.tv_sec;
-		boot.tv_nsec = timekeeper.xtime.tv_nsec;
-	}
+	if (boot.tv_sec == 0 && boot.tv_nsec == 0)
+		boot = tk_xtime(&timekeeper);
+
 	set_normalized_timespec(&timekeeper.wall_to_monotonic,
 				-boot.tv_sec, -boot.tv_nsec);
 	timekeeper.total_sleep_time.tv_sec = 0;
@@ -634,7 +676,7 @@ static void __timekeeping_inject_sleeptime(struct timespec *delta)
 		return;
 	}
 
-	timekeeper.xtime = timespec_add(timekeeper.xtime, *delta);
+	tk_xtime_add(&timekeeper, delta);
 	timekeeper.wall_to_monotonic =
 			timespec_sub(timekeeper.wall_to_monotonic, *delta);
 	timekeeper.total_sleep_time = timespec_add(
@@ -731,7 +773,7 @@ static int timekeeping_suspend(void)
 	 * try to compensate so the difference in system time
 	 * and persistent_clock time stays close to constant.
 	 */
-	delta = timespec_sub(timekeeper.xtime, timekeeping_suspend_time);
+	delta = timespec_sub(tk_xtime(&timekeeper), timekeeping_suspend_time);
 	delta_delta = timespec_sub(delta, old_delta);
 	if (abs(delta_delta.tv_sec) >= 2) {
 		/*
@@ -963,7 +1005,7 @@ static cycle_t logarithmic_accumulation(cycle_t offset, int shift)
 	timekeeper.xtime_nsec += timekeeper.xtime_interval << shift;
 	while (timekeeper.xtime_nsec >= nsecps) {
 		timekeeper.xtime_nsec -= nsecps;
-		timekeeper.xtime.tv_sec++;
+		timekeeper.xtime_sec++;
 		second_overflow();
 	}
 
@@ -997,6 +1039,7 @@ static void update_wall_time(void)
 	cycle_t offset;
 	int shift = 0, maxshift;
 	unsigned long flags;
+	s64 remainder;
 
 	write_seqlock_irqsave(&timekeeper.lock, flags);
 
@@ -1011,8 +1054,6 @@ static void update_wall_time(void)
 #else
 	offset = (clock->read(clock) - clock->cycle_last) & clock->mask;
 #endif
-	timekeeper.xtime_nsec = (s64)timekeeper.xtime.tv_nsec <<
-						timekeeper.shift;
 
 	/*
 	 * With NO_HZ we may have to accumulate many cycle_intervals
@@ -1058,25 +1099,29 @@ static void update_wall_time(void)
 		timekeeper.ntp_error += neg << timekeeper.ntp_error_shift;
 	}
-
 	/*
-	 * Store full nanoseconds into xtime after rounding it up and
-	 * add the remainder to the error difference.
-	 */
-	timekeeper.xtime.tv_nsec = ((s64)timekeeper.xtime_nsec >>
-						timekeeper.shift) + 1;
-	timekeeper.xtime_nsec -= (s64)timekeeper.xtime.tv_nsec <<
-						timekeeper.shift;
-	timekeeper.ntp_error += timekeeper.xtime_nsec <<
-				timekeeper.ntp_error_shift;
+	 * Store only full nanoseconds into xtime_nsec after rounding
+	 * it up and add the remainder to the error difference.
+	 * XXX - This is necessary to avoid small 1ns inconsistnecies caused
+	 * by truncating the remainder in vsyscalls. However, it causes
+	 * additional work to be done in timekeeping_adjust(). Once
+	 * the vsyscall implementations are converted to use xtime_nsec
+	 * (shifted nanoseconds), this can be killed.
+	 */
+	remainder = timekeeper.xtime_nsec & ((1 << timekeeper.shift) - 1);
+	timekeeper.xtime_nsec -= remainder;
+	timekeeper.xtime_nsec += 1 << timekeeper.shift;
+	timekeeper.ntp_error += remainder << timekeeper.ntp_error_shift;
 
 	/*
 	 * Finally, make sure that after the rounding
-	 * xtime.tv_nsec isn't larger than NSEC_PER_SEC
+	 * xtime_nsec isn't larger than NSEC_PER_SEC
 	 */
-	if (unlikely(timekeeper.xtime.tv_nsec >= NSEC_PER_SEC)) {
-		timekeeper.xtime.tv_nsec -= NSEC_PER_SEC;
-		timekeeper.xtime.tv_sec++;
+	if (unlikely(timekeeper.xtime_nsec >=
+			((u64)NSEC_PER_SEC << timekeeper.shift))) {
+		timekeeper.xtime_nsec -= (u64)NSEC_PER_SEC << timekeeper.shift;
+		timekeeper.xtime_sec++;
 		second_overflow();
 	}
@@ -1125,21 +1170,20 @@ void get_monotonic_boottime(struct timespec *ts)
 {
 	struct timespec tomono, sleep;
 	unsigned int seq;
-	s64 nsecs;
 
 	WARN_ON(timekeeping_suspended);
 
 	do {
 		seq = read_seqbegin(&timekeeper.lock);
-		*ts = timekeeper.xtime;
+		ts->tv_sec = timekeeper.xtime_sec;
+		ts->tv_nsec = timekeeping_get_ns();
 		tomono = timekeeper.wall_to_monotonic;
 		sleep = timekeeper.total_sleep_time;
-		nsecs = timekeeping_get_ns();
 
 	} while (read_seqretry(&timekeeper.lock, seq));
 
 	set_normalized_timespec(ts, ts->tv_sec + tomono.tv_sec + sleep.tv_sec,
-			ts->tv_nsec + tomono.tv_nsec + sleep.tv_nsec + nsecs);
+			ts->tv_nsec + tomono.tv_nsec + sleep.tv_nsec);
 }
 EXPORT_SYMBOL_GPL(get_monotonic_boottime);
@@ -1172,13 +1216,13 @@ EXPORT_SYMBOL_GPL(monotonic_to_bootbased);
 
 unsigned long get_seconds(void)
 {
-	return timekeeper.xtime.tv_sec;
+	return timekeeper.xtime_sec;
 }
 EXPORT_SYMBOL(get_seconds);
 
 struct timespec __current_kernel_time(void)
 {
-	return timekeeper.xtime;
+	return tk_xtime(&timekeeper);
 }
 
 struct timespec current_kernel_time(void)
@@ -1189,7 +1233,7 @@ struct timespec current_kernel_time(void)
 	do {
 		seq = read_seqbegin(&timekeeper.lock);
 
-		now = timekeeper.xtime;
+		now = tk_xtime(&timekeeper);
 	} while (read_seqretry(&timekeeper.lock, seq));
 
 	return now;
@@ -1204,7 +1248,7 @@ struct timespec get_monotonic_coarse(void)
 	do {
 		seq = read_seqbegin(&timekeeper.lock);
 
-		now = timekeeper.xtime;
+		now = tk_xtime(&timekeeper);
 		mono = timekeeper.wall_to_monotonic;
 	} while (read_seqretry(&timekeeper.lock, seq));
 
@@ -1239,7 +1283,7 @@ void get_xtime_and_monotonic_and_sleep_offset(struct timespec *xtim,
 
 	do {
 		seq = read_seqbegin(&timekeeper.lock);
-		*xtim = timekeeper.xtime;
+		*xtim = tk_xtime(&timekeeper);
 		*wtom = timekeeper.wall_to_monotonic;
 		*sleep = timekeeper.total_sleep_time;
 	} while (read_seqretry(&timekeeper.lock, seq));
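
For readers who want to see the bookkeeping in isolation: below is a minimal,
self-contained userspace sketch of the representation this patch introduces,
i.e. whole seconds in xtime_sec plus a sub-second remainder kept in
clocksource-shifted nanoseconds that is only shifted down when a timespec is
read out. The toy_* names, the fixed SHIFT value and main() are invented for
the illustration and are not part of the kernel patch; only the normalize/read
logic mirrors tk_normalize_xtime()/tk_xtime() above.

/*
 * Illustrative sketch only, not kernel code: whole seconds in xtime_sec,
 * sub-second remainder kept in nanoseconds << SHIFT.
 */
#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_SEC	1000000000ULL
#define SHIFT		10	/* example clocksource shift, not a real value */

struct toy_timekeeper {
	uint64_t xtime_sec;	/* whole seconds of CLOCK_REALTIME */
	uint64_t xtime_nsec;	/* remainder, in nanoseconds << SHIFT */
};

/* carry whole seconds out of the shifted-nanosecond remainder */
static void toy_normalize(struct toy_timekeeper *tk)
{
	while (tk->xtime_nsec >= (NSEC_PER_SEC << SHIFT)) {
		tk->xtime_nsec -= NSEC_PER_SEC << SHIFT;
		tk->xtime_sec++;
	}
}

/* accumulate an interval expressed in shifted nanoseconds */
static void toy_accumulate(struct toy_timekeeper *tk, uint64_t shifted_ns)
{
	tk->xtime_nsec += shifted_ns;
	toy_normalize(tk);
}

int main(void)
{
	struct toy_timekeeper tk = { .xtime_sec = 100, .xtime_nsec = 0 };

	/* accumulate 1.5s worth of shifted nanoseconds in three steps */
	toy_accumulate(&tk, (NSEC_PER_SEC / 2) << SHIFT);
	toy_accumulate(&tk, (NSEC_PER_SEC / 2) << SHIFT);
	toy_accumulate(&tk, (NSEC_PER_SEC / 2) << SHIFT);

	/* shift the remainder down only at the read edge, like tk_xtime() */
	printf("%llu.%09llu\n",
	       (unsigned long long)tk.xtime_sec,
	       (unsigned long long)(tk.xtime_nsec >> SHIFT));
	return 0;
}

Compiling and running this prints 101.500000000, showing that carries out of
the shifted remainder land in xtime_sec exactly as the accumulation loop in
update_wall_time() expects.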