From patchwork Thu Oct 17 12:35:08 2019
X-Patchwork-Submitter: Sudeep Holla
X-Patchwork-Id: 176591
From: Sudeep Holla
To: Viresh Kumar
Cc: Sudeep Holla, "Rafael J. Wysocki", linux-pm@vger.kernel.org,
    linux-kernel@vger.kernel.org, nico@fluxnic.net
Wysocki" , linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org, nico@fluxnic.net Subject: [PATCH v2 5/5] cpufreq: vexpress-spc: fix some coding style issues Date: Thu, 17 Oct 2019 13:35:08 +0100 Message-Id: <20191017123508.26130-6-sudeep.holla@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20191017123508.26130-1-sudeep.holla@arm.com> References: <20191017123508.26130-1-sudeep.holla@arm.com> Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org Fix the following checkpatch checks/warnings: CHECK: Unnecessary parentheses around the code CHECK: Alignment should match open parenthesis CHECK: Prefer kernel type 'u32' over 'uint32_t' WARNING: Missing a blank line after declarations Signed-off-by: Sudeep Holla --- drivers/cpufreq/vexpress-spc-cpufreq.c | 43 ++++++++++++-------------- 1 file changed, 20 insertions(+), 23 deletions(-) -- 2.17.1 diff --git a/drivers/cpufreq/vexpress-spc-cpufreq.c b/drivers/cpufreq/vexpress-spc-cpufreq.c index 81064430317f..8ecb2961be86 100644 --- a/drivers/cpufreq/vexpress-spc-cpufreq.c +++ b/drivers/cpufreq/vexpress-spc-cpufreq.c @@ -79,8 +79,8 @@ static unsigned int find_cluster_maxfreq(int cluster) for_each_online_cpu(j) { cpu_freq = per_cpu(cpu_last_req_freq, j); - if ((cluster == per_cpu(physical_cluster, j)) && - (max_freq < cpu_freq)) + if (cluster == per_cpu(physical_cluster, j) && + max_freq < cpu_freq) max_freq = cpu_freq; } @@ -188,22 +188,19 @@ static int ve_spc_cpufreq_set_target(struct cpufreq_policy *policy, freqs_new = freq_table[cur_cluster][index].frequency; if (is_bL_switching_enabled()) { - if ((actual_cluster == A15_CLUSTER) && - (freqs_new < clk_big_min)) { + if (actual_cluster == A15_CLUSTER && freqs_new < clk_big_min) new_cluster = A7_CLUSTER; - } else if ((actual_cluster == A7_CLUSTER) && - (freqs_new > clk_little_max)) { + else if (actual_cluster == A7_CLUSTER && + freqs_new > clk_little_max) new_cluster = A15_CLUSTER; - } } ret = ve_spc_cpufreq_set_rate(cpu, actual_cluster, new_cluster, freqs_new); - if (!ret) { + if (!ret) arch_set_freq_scale(policy->related_cpus, freqs_new, policy->cpuinfo.max_freq); - } return ret; } @@ -222,7 +219,8 @@ static inline u32 get_table_count(struct cpufreq_frequency_table *table) static inline u32 get_table_min(struct cpufreq_frequency_table *table) { struct cpufreq_frequency_table *pos; - uint32_t min_freq = ~0; + u32 min_freq = ~0; + cpufreq_for_each_entry(pos, table) if (pos->frequency < min_freq) min_freq = pos->frequency; @@ -233,7 +231,8 @@ static inline u32 get_table_min(struct cpufreq_frequency_table *table) static inline u32 get_table_max(struct cpufreq_frequency_table *table) { struct cpufreq_frequency_table *pos; - uint32_t max_freq = 0; + u32 max_freq = 0; + cpufreq_for_each_entry(pos, table) if (pos->frequency > max_freq) max_freq = pos->frequency; @@ -255,14 +254,11 @@ static int merge_cluster_tables(void) freq_table[MAX_CLUSTERS] = table; /* Add in reverse order to get freqs in increasing order */ - for (i = MAX_CLUSTERS - 1; i >= 0; i--) { + for (i = MAX_CLUSTERS - 1; i >= 0; i--) for (j = 0; freq_table[i][j].frequency != CPUFREQ_TABLE_END; - j++) { - table[k].frequency = VIRT_FREQ(i, - freq_table[i][j].frequency); - k++; - } - } + j++, k++) + table[k].frequency = + VIRT_FREQ(i, freq_table[i][j].frequency); table[k].driver_data = k; table[k].frequency = CPUFREQ_TABLE_END; @@ -332,13 +328,13 @@ static int _get_cluster_clk_and_freq_table(struct device *cpu_dev, return 0; dev_err(cpu_dev, "%s: Failed to get clk for cpu: %d, cluster: 
%d\n", - __func__, cpu_dev->id, cluster); + __func__, cpu_dev->id, cluster); ret = PTR_ERR(clk[cluster]); dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table[cluster]); out: dev_err(cpu_dev, "%s: Failed to get data for cluster: %d\n", __func__, - cluster); + cluster); return ret; } @@ -406,7 +402,7 @@ static int ve_spc_cpufreq_init(struct cpufreq_policy *policy) cpu_dev = get_cpu_device(policy->cpu); if (!cpu_dev) { pr_err("%s: failed to get cpu%d device\n", __func__, - policy->cpu); + policy->cpu); return -ENODEV; } @@ -432,7 +428,8 @@ static int ve_spc_cpufreq_init(struct cpufreq_policy *policy) dev_pm_opp_of_register_em(policy->cpus); if (is_bL_switching_enabled()) - per_cpu(cpu_last_req_freq, policy->cpu) = clk_get_cpu_rate(policy->cpu); + per_cpu(cpu_last_req_freq, policy->cpu) = + clk_get_cpu_rate(policy->cpu); dev_info(cpu_dev, "%s: CPU %d initialized\n", __func__, policy->cpu); return 0; @@ -451,7 +448,7 @@ static int ve_spc_cpufreq_exit(struct cpufreq_policy *policy) cpu_dev = get_cpu_device(policy->cpu); if (!cpu_dev) { pr_err("%s: failed to get cpu%d device\n", __func__, - policy->cpu); + policy->cpu); return -ENODEV; }