From patchwork Mon Aug 10 09:42:41 2015
X-Patchwork-Submitter: Maxim Uvarov
X-Patchwork-Id: 52140
From: Maxim Uvarov <maxim.uvarov@linaro.org>
To: lng-odp@lists.linaro.org
Date: Mon, 10 Aug 2015 12:42:41 +0300
Message-Id: <1439199761-25463-1-git-send-email-maxim.uvarov@linaro.org>
X-Mailer: git-send-email 1.9.1
Subject: [lng-odp] [PATCH] linux-generic: move default cpumask functions to separate file
List-Id: lng-odp@lists.linaro.org

Keep the functions that iterate over the bit mask, and are unlikely to be
accelerated by hardware, in odp_cpumask.c, and move the default CPU mask
functions to odp_cpumask_def.c, which is more platform specific. This patch
should simplify porting to other platforms that inherit odp_cpumask.c from
linux-generic.
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
---
 platform/linux-generic/Makefile.am       |  1 +
 platform/linux-generic/odp_cpumask.c     | 38 -----------------------
 platform/linux-generic/odp_cpumask_def.c | 52 ++++++++++++++++++++++++++++++++
 3 files changed, 53 insertions(+), 38 deletions(-)
 create mode 100644 platform/linux-generic/odp_cpumask_def.c

diff --git a/platform/linux-generic/Makefile.am b/platform/linux-generic/Makefile.am
index ed4add5..bc750ff 100644
--- a/platform/linux-generic/Makefile.am
+++ b/platform/linux-generic/Makefile.am
@@ -134,6 +134,7 @@ __LIB__libodp_la_SOURCES = \
			   odp_buffer.c \
			   odp_classification.c \
			   odp_cpumask.c \
+			   odp_cpumask_def.c \
			   odp_crypto.c \
			   odp_errno.c \
			   odp_event.c \
diff --git a/platform/linux-generic/odp_cpumask.c b/platform/linux-generic/odp_cpumask.c
index c28153b..b31e1ca 100644
--- a/platform/linux-generic/odp_cpumask.c
+++ b/platform/linux-generic/odp_cpumask.c
@@ -205,41 +205,3 @@ int odp_cpumask_next(const odp_cpumask_t *mask, int cpu)
 			return cpu;
 	return -1;
 }
-
-int odp_cpumask_def_worker(odp_cpumask_t *mask, int num)
-{
-	int ret, cpu, i;
-	cpu_set_t cpuset;
-
-	ret = pthread_getaffinity_np(pthread_self(),
-				     sizeof(cpu_set_t), &cpuset);
-	if (ret != 0)
-		ODP_ABORT("failed to read CPU affinity value\n");
-
-	odp_cpumask_zero(mask);
-
-	/*
-	 * If no user supplied number or it's too large, then attempt
-	 * to use all CPUs
-	 */
-	if (0 == num || CPU_SETSIZE < num)
-		num = CPU_COUNT(&cpuset);
-
-	/* build the mask, allocating down from highest numbered CPU */
-	for (cpu = 0, i = CPU_SETSIZE - 1; i >= 0 && cpu < num; --i) {
-		if (CPU_ISSET(i, &cpuset)) {
-			odp_cpumask_set(mask, i);
-			cpu++;
-		}
-	}
-
-	return cpu;
-}
-
-int odp_cpumask_def_control(odp_cpumask_t *mask, int num ODP_UNUSED)
-{
-	odp_cpumask_zero(mask);
-	/* By default all control threads on CPU 0 */
-	odp_cpumask_set(mask, 0);
-	return 1;
-}
diff --git a/platform/linux-generic/odp_cpumask_def.c b/platform/linux-generic/odp_cpumask_def.c
new file mode 100644
index 0000000..a96218c
--- /dev/null
+++ b/platform/linux-generic/odp_cpumask_def.c
@@ -0,0 +1,52 @@
+/* Copyright (c) 2013, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
+#include <sched.h>
+#include <pthread.h>
+
+#include <odp/cpumask.h>
+#include <odp_debug_internal.h>
+
+int odp_cpumask_def_worker(odp_cpumask_t *mask, int num)
+{
+	int ret, cpu, i;
+	cpu_set_t cpuset;
+
+	ret = pthread_getaffinity_np(pthread_self(),
+				     sizeof(cpu_set_t), &cpuset);
+	if (ret != 0)
+		ODP_ABORT("failed to read CPU affinity value\n");
+
+	odp_cpumask_zero(mask);
+
+	/*
+	 * If no user supplied number or it's too large, then attempt
+	 * to use all CPUs
+	 */
+	if (0 == num || CPU_SETSIZE < num)
+		num = CPU_COUNT(&cpuset);
+
+	/* build the mask, allocating down from highest numbered CPU */
+	for (cpu = 0, i = CPU_SETSIZE - 1; i >= 0 && cpu < num; --i) {
+		if (CPU_ISSET(i, &cpuset)) {
+			odp_cpumask_set(mask, i);
+			cpu++;
+		}
+	}
+
+	return cpu;
+}
+
+int odp_cpumask_def_control(odp_cpumask_t *mask, int num ODP_UNUSED)
+{
+	odp_cpumask_zero(mask);
+	/* By default all control threads on CPU 0 */
+	odp_cpumask_set(mask, 0);
+	return 1;
+}
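
[Editor's note, not part of the patch] For readers unfamiliar with the two
relocated helpers, below is a minimal usage sketch showing how an ODP
application typically obtains the default worker and control CPU masks.
It assumes the ODP 1.x API of this period (odp_init_global()/odp_init_local(),
odp_cpumask_to_str()); treat the exact signatures and the buffer size as
assumptions and adjust to the release actually in use.

/*
 * Usage sketch (illustrative only): fetch the default worker and control
 * CPU masks and print them.  Assumes the ODP 1.x init API and
 * odp_cpumask_to_str().
 */
#include <stdio.h>
#include <odp.h>

int main(void)
{
	odp_cpumask_t workers, control;
	char buf[128];   /* assumed large enough for a printable mask */
	int num;

	if (odp_init_global(NULL, NULL) || odp_init_local())
		return 1;

	/* num = 0 requests all CPUs the process is allowed to run on */
	num = odp_cpumask_def_worker(&workers, 0);
	odp_cpumask_to_str(&workers, buf, sizeof(buf));
	printf("%d worker CPUs: %s\n", num, buf);

	/* control threads default to CPU 0 */
	odp_cpumask_def_control(&control, 0);
	odp_cpumask_to_str(&control, buf, sizeof(buf));
	printf("control CPUs: %s\n", buf);

	odp_term_local();
	odp_term_global();
	return 0;
}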