From patchwork Thu Apr 28 11:18:00 2016
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 66890
Date: Thu, 28 Apr 2016 13:18:00 +0200
From: Vincent Guittot
To: Peter Zijlstra
Cc: Yuyang Du, mingo@kernel.org, linux-kernel@vger.kernel.org, bsegall@google.com,
 pjt@google.com, morten.rasmussen@arm.com, dietmar.eggemann@arm.com,
 lizefan@huawei.com, umgwanakikbuti@gmail.com
Subject: Re: [PATCH v3 5/6] sched/fair: Rename scale_load() and scale_load_down()
Message-ID: <20160428111800.GA30218@vingu-laptop>
References: <1459829551-21625-1-git-send-email-yuyang.du@intel.com>
 <1459829551-21625-6-git-send-email-yuyang.du@intel.com>
 <20160428091919.GW3430@twins.programming.kicks-ass.net>
In-Reply-To: <20160428091919.GW3430@twins.programming.kicks-ass.net>
User-Agent: Mutt/1.5.21 (2010-09-15)

On Thursday 28 Apr 2016 at 11:19:19 (+0200), Peter Zijlstra wrote:
> On Tue, Apr 05, 2016 at 12:12:30PM +0800, Yuyang Du wrote:
> > Rename scale_load() and scale_load_down() to user_to_kernel_load()
> > and kernel_to_user_load() respectively, to allow the names to bear
> > what they are really about.
> >
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -189,7 +189,7 @@ static void __update_inv_weight(struct load_weight *lw)
> >  	if (likely(lw->inv_weight))
> >  		return;
> >
> > -	w = scale_load_down(lw->weight);
> > +	w = kernel_to_user_load(lw->weight);
> >
> >  	if (BITS_PER_LONG > 32 && unlikely(w >= WMULT_CONST))
> >  		lw->inv_weight = 1;
> > @@ -213,7 +213,7 @@ static void __update_inv_weight(struct load_weight *lw)
> >   */
> >  static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight *lw)
> >  {
> > -	u64 fact = scale_load_down(weight);
> > +	u64 fact = kernel_to_user_load(weight);
> >  	int shift = WMULT_SHIFT;
> >
> >  	__update_inv_weight(lw);
> > @@ -6952,10 +6952,11 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
> >  	 */
> >  	if (busiest->group_type == group_overloaded &&
> >  	    local->group_type == group_overloaded) {
> > +		unsigned long min_cpu_load =
> > +			kernel_to_user_load(NICE_0_LOAD) * busiest->group_capacity;
> >  		load_above_capacity = busiest->sum_nr_running * NICE_0_LOAD;
> > -		if (load_above_capacity > scale_load(busiest->group_capacity))
> > -			load_above_capacity -=
> > -				scale_load(busiest->group_capacity);
> > +		if (load_above_capacity > min_cpu_load)
> > +			load_above_capacity -= min_cpu_load;
> >  		else
> >  			load_above_capacity = ~0UL;
> >  	}
>
> Except these 3 really are not about user/kernel visible fixed point
> ranges _at_all_...
:/ While trying to optimize the calculation of min_cpu_load, I have broken everything. It should be:

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0b6659d..3411eb7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6953,7 +6953,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	if (busiest->group_type == group_overloaded &&
 	    local->group_type == group_overloaded) {
 		unsigned long min_cpu_load =
-			kernel_to_user_load(NICE_0_LOAD) * busiest->group_capacity;
+			busiest->group_capacity * NICE_0_LOAD / SCHED_CAPACITY_SCALE;
 		load_above_capacity = busiest->sum_nr_running * NICE_0_LOAD;
 		if (load_above_capacity > min_cpu_load)
 			load_above_capacity -= min_cpu_load;