From patchwork Wed Jan 14 22:48:26 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anders Roxell
X-Patchwork-Id: 43138
From: Anders Roxell
To: lng-odp@lists.linaro.org
Date: Wed, 14 Jan 2015 23:48:26 +0100
Message-Id: <1421275706-11176-10-git-send-email-anders.roxell@linaro.org>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1421275706-11176-1-git-send-email-anders.roxell@linaro.org>
References: <1421275706-11176-1-git-send-email-anders.roxell@linaro.org>
Subject: [lng-odp] [PATCHv3 9/9] helper: linux: use cpumask in linux thread/proc

From: Robbie King

Move away from specifying a core count and allow the user to specify a
mask.
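For reference, a minimal caller of the reworked helper API would look roughly
like the sketch below. This is an illustration only, not part of the patch:
worker_fn and the MAX_WORKERS limit are placeholder names, and error handling
and local/term init are omitted.

  #include <stdio.h>
  #include <odp.h>
  #include <odph_linux.h>

  #define MAX_WORKERS 32            /* placeholder limit, as in the examples */

  static void *worker_fn(void *arg) /* hypothetical worker routine */
  {
          (void)arg;
          return NULL;
  }

  int main(void)
  {
          odph_linux_pthread_t thread_tbl[MAX_WORKERS];
          odp_cpumask_t cpumask;
          char cpumaskstr[64];
          int num_workers = MAX_WORKERS;

          if (odp_init_global(NULL, NULL))
                  return -1;

          /* Build the default worker mask; CPU #0 is left to the kernel
           * when more than one CPU is available. */
          num_workers = odph_linux_cpumask_default(&cpumask, num_workers);
          odp_cpumask_to_str(&cpumask, cpumaskstr, sizeof(cpumaskstr));
          printf("cpu mask: %s\n", cpumaskstr);

          /* One pinned pthread per CPU in the mask */
          odph_linux_pthread_create(thread_tbl, &cpumask, worker_fn, NULL);
          odph_linux_pthread_join(thread_tbl, num_workers);

          return 0;
  }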
Signed-off-by: Robbie King
Signed-off-by: Anders Roxell
---
 example/generator/odp_generator.c              | 48 +++++++++--------
 example/ipsec/odp_ipsec.c                      | 30 +++++------
 example/l2fwd/odp_l2fwd.c                      | 43 ++++++++-------
 example/packet/odp_pktio.c                     | 37 +++++++------
 example/timer/odp_timer_test.c                 | 25 ++++-----
 helper/include/odph_linux.h                    | 17 +++---
 platform/linux-generic/include/api/odp_queue.h |  6 +--
 platform/linux-generic/odp_linux.c             | 72 ++++++++++++++----------
 test/api_test/odp_common.c                     |  5 +-
 test/performance/odp_scheduling.c              | 30 ++++-------
 test/validation/common/odp_cunit_common.c      |  5 +-
 11 files changed, 163 insertions(+), 155 deletions(-)

diff --git a/example/generator/odp_generator.c b/example/generator/odp_generator.c
index c3f1783..4b911a6 100644
--- a/example/generator/odp_generator.c
+++ b/example/generator/odp_generator.c
@@ -543,10 +543,10 @@ int main(int argc, char *argv[])
 	odp_buffer_pool_t pool;
 	int num_workers;
 	int i;
-	int first_cpu;
-	int cpu_count;
 	odp_shm_t shm;
+	odp_cpumask_t cpumask;
 	odp_buffer_pool_param_t params;
+	char cpumaskstr[64];
 
 	/* Init ODP before calling anything else */
 	if (odp_init_global(NULL, NULL)) {
@@ -582,31 +582,25 @@ int main(int argc, char *argv[])
 	/* Print both system and application information */
 	print_info(NO_PATH(argv[0]), &args->appl);
 
-	cpu_count = odp_sys_cpu_count();
-	num_workers = cpu_count;
-
+	/* Default to system CPU count unless user specified */
+	num_workers = MAX_WORKERS;
 	if (args->appl.cpu_count)
 		num_workers = args->appl.cpu_count;
 
-	if (num_workers > MAX_WORKERS)
-		num_workers = MAX_WORKERS;
-
 	/* ping mode need two worker */
 	if (args->appl.mode == APPL_MODE_PING)
 		num_workers = 2;
 
-	printf("Num worker threads: %i\n", num_workers);
-
 	/*
 	 * By default CPU #0 runs Linux kernel background tasks.
 	 * Start mapping thread from CPU #1
 	 */
-	first_cpu = 1;
-
-	if (cpu_count == 1)
-		first_cpu = 0;
+	num_workers = odph_linux_cpumask_default(&cpumask, num_workers);
+	odp_cpumask_to_str(&cpumask, cpumaskstr, sizeof(cpumaskstr));
 
-	printf("First CPU: %i\n\n", first_cpu);
+	printf("num worker threads: %i\n", num_workers);
+	printf("first CPU: %i\n", odp_cpumask_first(&cpumask));
+	printf("cpu mask: %s\n", cpumaskstr);
 
 	/* Create packet pool */
 	params.buf_size = SHM_PKT_POOL_BUF_SIZE;
@@ -629,28 +623,33 @@ int main(int argc, char *argv[])
 	memset(thread_tbl, 0, sizeof(thread_tbl));
 
 	if (args->appl.mode == APPL_MODE_PING) {
+		odp_cpumask_t cpu0_mask;
+
+		/* Previous code forced both threads to CPU 0 */
+		odp_cpumask_zero(&cpu0_mask);
+		odp_cpumask_set(&cpu0_mask, 0);
+
 		args->thread[1].pktio_dev = args->appl.if_names[0];
 		args->thread[1].pool = pool;
 		args->thread[1].mode = args->appl.mode;
-		odph_linux_pthread_create(&thread_tbl[1], 1, 0,
+		odph_linux_pthread_create(&thread_tbl[1], &cpu0_mask,
 					  gen_recv_thread, &args->thread[1]);
 
 		args->thread[0].pktio_dev = args->appl.if_names[0];
 		args->thread[0].pool = pool;
 		args->thread[0].mode = args->appl.mode;
-		odph_linux_pthread_create(&thread_tbl[0], 1, 0,
+		odph_linux_pthread_create(&thread_tbl[0], &cpu0_mask,
 					  gen_send_thread, &args->thread[0]);
 
 		/* only wait send thread to join */
 		num_workers = 1;
 
 	} else {
+		int cpu = odp_cpumask_first(&cpumask);
 		for (i = 0; i < num_workers; ++i) {
+			odp_cpumask_t thd_mask;
 			void *(*thr_run_func) (void *);
-			int cpu;
 			int if_idx;
 
-			cpu = (first_cpu + i) % cpu_count;
-
 			if_idx = i % args->appl.if_count;
 
 			args->thread[i].pktio_dev = args->appl.if_names[if_idx];
@@ -670,9 +669,14 @@ int main(int argc, char *argv[])
 			 * because each thread might get different arguments.
 			 * Calls odp_thread_create(cpu) for each thread
 			 */
-			odph_linux_pthread_create(&thread_tbl[i], 1,
-						  cpu, thr_run_func,
+			odp_cpumask_zero(&thd_mask);
+			odp_cpumask_set(&thd_mask, cpu);
+			odph_linux_pthread_create(&thread_tbl[i],
+						  &thd_mask,
+						  thr_run_func,
 						  &args->thread[i]);
+			cpu = odp_cpumask_next(&cpumask, cpu);
+
 		}
 	}
 
diff --git a/example/ipsec/odp_ipsec.c b/example/ipsec/odp_ipsec.c
index 7a0fbef..f2fac8a 100644
--- a/example/ipsec/odp_ipsec.c
+++ b/example/ipsec/odp_ipsec.c
@@ -1172,11 +1172,11 @@ main(int argc, char *argv[])
 	odph_linux_pthread_t thread_tbl[MAX_WORKERS];
 	int num_workers;
 	int i;
-	int first_cpu;
-	int cpu_count;
 	int stream_count;
 	odp_shm_t shm;
+	odp_cpumask_t cpumask;
 	odp_buffer_pool_param_t params;
+	char cpumaskstr[64];
 
 	/* Init ODP before calling anything else */
 	if (odp_init_global(NULL, NULL)) {
@@ -1214,26 +1214,24 @@ main(int argc, char *argv[])
 	/* Print both system and application information */
 	print_info(NO_PATH(argv[0]), &args->appl);
 
-	cpu_count = odp_sys_cpu_count();
-	num_workers = cpu_count;
-
+	/* Default to system CPU count unless user specified */
+	num_workers = MAX_WORKERS;
 	if (args->appl.cpu_count)
 		num_workers = args->appl.cpu_count;
 
-	if (num_workers > MAX_WORKERS)
-		num_workers = MAX_WORKERS;
-
-	printf("Num worker threads: %i\n", num_workers);
-
-	/* Create a barrier to synchronize thread startup */
-	odp_barrier_init(&sync_barrier, num_workers);
-
 	/*
 	 * By default CPU #0 runs Linux kernel background tasks.
 	 * Start mapping thread from CPU #1
 	 */
-	first_cpu = (1 == cpu_count) ? 0 : 1;
-	printf("First CPU: %i\n\n", first_cpu);
+	num_workers = odph_linux_cpumask_default(&cpumask, num_workers);
+	odp_cpumask_to_str(&cpumask, cpumaskstr, sizeof(cpumaskstr));
+
+	printf("num worker threads: %i\n", num_workers);
+	printf("first CPU: %i\n", odp_cpumask_first(&cpumask));
+	printf("cpu mask: %s\n", cpumaskstr);
+
+	/* Create a barrier to synchronize thread startup */
+	odp_barrier_init(&sync_barrier, num_workers);
 
 	/* Create packet buffer pool */
 	params.buf_size = SHM_PKT_POOL_BUF_SIZE;
@@ -1285,7 +1283,7 @@ main(int argc, char *argv[])
 	/*
 	 * Create and init worker threads
 	 */
-	odph_linux_pthread_create(thread_tbl, num_workers, first_cpu,
+	odph_linux_pthread_create(thread_tbl, &cpumask,
 				  pktio_thread, NULL);
 
 	/*
diff --git a/example/l2fwd/odp_l2fwd.c b/example/l2fwd/odp_l2fwd.c
index 209b0bd..10d5d32 100644
--- a/example/l2fwd/odp_l2fwd.c
+++ b/example/l2fwd/odp_l2fwd.c
@@ -288,11 +288,12 @@ int main(int argc, char *argv[])
 	odph_linux_pthread_t thread_tbl[MAX_WORKERS];
 	odp_buffer_pool_t pool;
 	int i;
-	int first_cpu;
-	int cpu_count;
+	int cpu;
 	int num_workers;
 	odp_shm_t shm;
+	odp_cpumask_t cpumask;
 	odp_buffer_pool_param_t params;
+	char cpumaskstr[64];
 
 	/* Init ODP before calling anything else */
 	if (odp_init_global(NULL, NULL)) {
@@ -323,16 +324,21 @@ int main(int argc, char *argv[])
 	/* Print both system and application information */
 	print_info(NO_PATH(argv[0]), &gbl_args->appl);
 
-	cpu_count = odp_sys_cpu_count();
-	num_workers = cpu_count;
-
+	/* Default to system CPU count unless user specified */
+	num_workers = MAX_WORKERS;
 	if (gbl_args->appl.cpu_count)
 		num_workers = gbl_args->appl.cpu_count;
 
-	if (num_workers > MAX_WORKERS)
-		num_workers = MAX_WORKERS;
+	/*
+	 * By default CPU #0 runs Linux kernel background tasks.
+	 * Start mapping thread from CPU #1
+	 */
+	num_workers = odph_linux_cpumask_default(&cpumask, num_workers);
+	odp_cpumask_to_str(&cpumask, cpumaskstr, sizeof(cpumaskstr));
 
-	printf("Num worker threads: %i\n", num_workers);
+	printf("num worker threads: %i\n", num_workers);
+	printf("first CPU: %i\n", odp_cpumask_first(&cpumask));
+	printf("cpu mask: %s\n", cpumaskstr);
 
 	if (num_workers < gbl_args->appl.if_count) {
 		EXAMPLE_ERR("Error: CPU count %d less than interface count\n",
@@ -344,16 +350,6 @@ int main(int argc, char *argv[])
 			    gbl_args->appl.if_count);
 		exit(EXIT_FAILURE);
 	}
-	/*
-	 * By default CPU #0 runs Linux kernel background tasks.
-	 * Start mapping thread from CPU #1
-	 */
-	first_cpu = 1;
-
-	if (cpu_count == 1)
-		first_cpu = 0;
-
-	printf("First cpu: %i\n\n", first_cpu);
 
 	/* Create packet pool */
 	params.buf_size = SHM_PKT_POOL_BUF_SIZE;
@@ -380,11 +376,10 @@ int main(int argc, char *argv[])
 	memset(thread_tbl, 0, sizeof(thread_tbl));
 
 	/* Create worker threads */
+	cpu = odp_cpumask_first(&cpumask);
 	for (i = 0; i < num_workers; ++i) {
+		odp_cpumask_t thd_mask;
 		void *(*thr_run_func) (void *);
-		int cpu;
-
-		cpu = (first_cpu + i) % cpu_count;
 
 		if (gbl_args->appl.mode == APPL_MODE_PKT_BURST)
 			thr_run_func = pktio_ifburst_thread;
@@ -393,8 +388,12 @@ int main(int argc, char *argv[])
 
 		gbl_args->thread[i].src_idx = i % gbl_args->appl.if_count;
 
-		odph_linux_pthread_create(&thread_tbl[i], 1, cpu, thr_run_func,
+		odp_cpumask_zero(&thd_mask);
+		odp_cpumask_set(&thd_mask, cpu);
+		odph_linux_pthread_create(&thread_tbl[i], &thd_mask,
+					  thr_run_func,
 					  &gbl_args->thread[i]);
+		cpu = odp_cpumask_next(&thd_mask, cpu);
 	}
 
 	/* Master thread waits for other threads to exit */
diff --git a/example/packet/odp_pktio.c b/example/packet/odp_pktio.c
index 0d5918a..4a392af 100644
--- a/example/packet/odp_pktio.c
+++ b/example/packet/odp_pktio.c
@@ -279,9 +279,10 @@ int main(int argc, char *argv[])
 	odp_buffer_pool_t pool;
 	int num_workers;
 	int i;
-	int first_cpu;
-	int cpu_count;
+	int cpu;
+	odp_cpumask_t cpumask;
 	odp_buffer_pool_param_t params;
+	char cpumaskstr[64];
 
 	args = calloc(1, sizeof(args_t));
 	if (args == NULL) {
@@ -307,27 +308,21 @@ int main(int argc, char *argv[])
 	/* Print both system and application information */
 	print_info(NO_PATH(argv[0]), &args->appl);
 
-	cpu_count = odp_sys_cpu_count();
-	num_workers = cpu_count;
-
+	/* Default to system CPU count unless user specified */
+	num_workers = MAX_WORKERS;
 	if (args->appl.cpu_count)
 		num_workers = args->appl.cpu_count;
 
-	if (num_workers > MAX_WORKERS)
-		num_workers = MAX_WORKERS;
-
-	printf("Num worker threads: %i\n", num_workers);
-
 	/*
 	 * By default CPU #0 runs Linux kernel background tasks.
 	 * Start mapping thread from CPU #1
 	 */
-	first_cpu = 1;
+	num_workers = odph_linux_cpumask_default(&cpumask, num_workers);
+	odp_cpumask_to_str(&cpumask, cpumaskstr, sizeof(cpumaskstr));
 
-	if (cpu_count == 1)
-		first_cpu = 0;
-
-	printf("First CPU: %i\n\n", first_cpu);
+	printf("num worker threads: %i\n", num_workers);
+	printf("first CPU: %i\n", odp_cpumask_first(&cpumask));
+	printf("cpu mask: %s\n", cpumaskstr);
 
 	/* Create packet pool */
 	params.buf_size = SHM_PKT_POOL_BUF_SIZE;
@@ -349,13 +344,13 @@ int main(int argc, char *argv[])
 
 	/* Create and init worker threads */
 	memset(thread_tbl, 0, sizeof(thread_tbl));
+
+	cpu = odp_cpumask_first(&cpumask);
 	for (i = 0; i < num_workers; ++i) {
+		odp_cpumask_t thd_mask;
 		void *(*thr_run_func) (void *);
-		int cpu;
 		int if_idx;
 
-		cpu = (first_cpu + i) % cpu_count;
-
 		if_idx = i % args->appl.if_count;
 
 		args->thread[i].pktio_dev = args->appl.if_names[if_idx];
@@ -370,8 +365,12 @@ int main(int argc, char *argv[])
 		 * because each thread might get different arguments.
 		 * Calls odp_thread_create(cpu) for each thread
 		 */
-		odph_linux_pthread_create(&thread_tbl[i], 1, cpu, thr_run_func,
+		odp_cpumask_zero(&thd_mask);
+		odp_cpumask_set(&thd_mask, cpu);
+		odph_linux_pthread_create(&thread_tbl[i], &thd_mask,
+					  thr_run_func,
 					  &args->thread[i]);
+		cpu = odp_cpumask_next(&cpumask, cpu);
 	}
 
 	/* Master thread waits for other threads to exit */
diff --git a/example/timer/odp_timer_test.c b/example/timer/odp_timer_test.c
index 5de499b..915d43d 100644
--- a/example/timer/odp_timer_test.c
+++ b/example/timer/odp_timer_test.c
@@ -300,12 +300,13 @@ int main(int argc, char *argv[])
 	test_args_t args;
 	int num_workers;
 	odp_queue_t queue;
-	int first_cpu;
 	uint64_t cycles, ns;
 	odp_queue_param_t param;
 	odp_buffer_pool_param_t params;
 	odp_timer_pool_param_t tparams;
 	odp_timer_pool_info_t tpinfo;
+	odp_cpumask_t cpumask;
+	char cpumaskstr[64];
 
 	printf("\nODP timer example starts\n");
 
@@ -336,28 +337,22 @@ int main(int argc, char *argv[])
 
 	printf("\n");
 
-	/* A worker thread per CPU */
-	num_workers = odp_sys_cpu_count();
-
+	/* Default to system CPU count unless user specified */
+	num_workers = MAX_WORKERS;
 	if (args.cpu_count)
 		num_workers = args.cpu_count;
 
-	/* force to max CPU count */
-	if (num_workers > MAX_WORKERS)
-		num_workers = MAX_WORKERS;
-
-	printf("num worker threads: %i\n", num_workers);
-
 	/*
 	 * By default CPU #0 runs Linux kernel background tasks.
 	 * Start mapping thread from CPU #1
 	 */
-	first_cpu = 1;
+	num_workers = odph_linux_cpumask_default(&cpumask, num_workers);
+	odp_cpumask_to_str(&cpumask, cpumaskstr, sizeof(cpumaskstr));
 
-	if (odp_sys_cpu_count() == 1)
-		first_cpu = 0;
+	printf("num worker threads: %i\n", num_workers);
+	printf("first CPU: %i\n", odp_cpumask_first(&cpumask));
+	printf("cpu mask: %s\n", cpumaskstr);
 
-	printf("first CPU: %i\n", first_cpu);
 	printf("resolution: %i usec\n", args.resolution_us);
 	printf("min timeout: %i usec\n", args.min_us);
 	printf("max timeout: %i usec\n", args.max_us);
@@ -444,7 +439,7 @@ int main(int argc, char *argv[])
 	odp_barrier_init(&test_barrier, num_workers);
 
 	/* Create and launch worker threads */
-	odph_linux_pthread_create(thread_tbl, num_workers, first_cpu,
+	odph_linux_pthread_create(thread_tbl, &cpumask,
 				  run_thread, &args);
 
 	/* Wait for worker threads to exit */
diff --git a/helper/include/odph_linux.h b/helper/include/odph_linux.h
index 6458fde..146e26c 100644
--- a/helper/include/odph_linux.h
+++ b/helper/include/odph_linux.h
@@ -31,7 +31,6 @@ extern "C" {
 typedef struct {
 	void *(*start_routine) (void *); /**< The function to run */
 	void *arg;                       /**< The functions arguemnts */
-
 } odp_start_args_t;
 
 /** Linux pthread state information */
@@ -65,19 +64,16 @@ int odph_linux_cpumask_default(odp_cpumask_t *mask, int num);
 /**
  * Creates and launches pthreads
  *
- * Creates, pins and launches num threads to separate CPU's starting from
- * first_cpu.
+ * Creates, pins and launches threads to separate CPU's based on the cpumask.
  *
 * @param thread_tbl    Thread table
- * @param num           Number of threads to create
- * @param first_cpu     First physical CPU
+ * @param mask          CPU mask
 * @param start_routine Thread start function
 * @param arg           Thread argument
 */
 void odph_linux_pthread_create(odph_linux_pthread_t *thread_tbl,
-			       int num, int first_cpu,
-			       void *(*start_routine) (void *), void *arg);
-
+			       const odp_cpumask_t *mask,
+			       void *(*start_routine) (void *), void *arg);
 
 /**
  * Waits pthreads to exit
@@ -111,14 +107,13 @@ int odph_linux_process_fork(odph_linux_process_t *proc, int cpu);
 * Forks and sets CPU affinity for child processes
 *
 * @param proc_tbl      Process state info table (for output)
- * @param num           Number of processes to create
- * @param first_cpu     Destination CPU for the first process
+ * @param mask          CPU mask of processes to create
 *
 * @return On success: 1 for the parent, 0 for the child
 *         On failure: -1 for the parent, -2 for the child
 */
 int odph_linux_process_fork_n(odph_linux_process_t *proc_tbl,
-			      int num, int first_cpu);
+			      const odp_cpumask_t *mask);
 
 
 /**
diff --git a/platform/linux-generic/include/api/odp_queue.h b/platform/linux-generic/include/api/odp_queue.h
index af4379f..b0f7185 100644
--- a/platform/linux-generic/include/api/odp_queue.h
+++ b/platform/linux-generic/include/api/odp_queue.h
@@ -85,14 +85,14 @@ typedef int odp_schedule_sync_t;
 #define ODP_SCHED_SYNC_DEFAULT ODP_SCHED_SYNC_ATOMIC
 
 /**
- * ODP schedule core group
+ * ODP schedule CPU group
 */
 typedef int odp_schedule_group_t;
 
-/** Group of all cores */
+/** Group of all CPUs */
 #define ODP_SCHED_GROUP_ALL 0
 
-/** Default core group */
+/** Default CPU group */
 #define ODP_SCHED_GROUP_DEFAULT ODP_SCHED_GROUP_ALL
 
diff --git a/platform/linux-generic/odp_linux.c b/platform/linux-generic/odp_linux.c
index a051024..84fee59 100644
--- a/platform/linux-generic/odp_linux.c
+++ b/platform/linux-generic/odp_linux.c
@@ -25,6 +25,8 @@
 #include
 #include
 
+#define MAX_WORKERS 32
+
 int odph_linux_cpumask_default(odp_cpumask_t *mask, int num_in)
 {
 	int i;
@@ -83,32 +85,41 @@ static void *odp_run_start_routine(void *arg)
 }
 
 
-void odph_linux_pthread_create(odph_linux_pthread_t *thread_tbl, int num,
-			       int first_cpu,
+void odph_linux_pthread_create(odph_linux_pthread_t *thread_tbl,
+			       const odp_cpumask_t *mask_in,
 			       void *(*start_routine) (void *), void *arg)
 {
 	int i;
-	cpu_set_t cpu_set;
+	int num;
+	odp_cpumask_t mask;
 	int cpu_count;
 	int cpu;
 
-	cpu_count = odp_sys_cpu_count();
-
-	assert((first_cpu >= 0) && (first_cpu < cpu_count));
-	assert((num >= 0) && (num <= cpu_count));
+	odp_cpumask_copy(&mask, mask_in);
+	num = odp_cpumask_count(&mask);
 
 	memset(thread_tbl, 0, num * sizeof(odph_linux_pthread_t));
 
+	cpu_count = odp_sys_cpu_count();
+
+	if (num < 1 || num > cpu_count) {
+		ODP_ERR("Bad num\n");
+		return;
+	}
+
+	cpu = odp_cpumask_first(&mask);
 	for (i = 0; i < num; i++) {
+		odp_cpumask_t thd_mask;
+
+		odp_cpumask_zero(&thd_mask);
+		odp_cpumask_set(&thd_mask, cpu);
+
 		pthread_attr_init(&thread_tbl[i].attr);
-		cpu = (first_cpu + i) % cpu_count;
 		thread_tbl[i].cpu = cpu;
 
-		CPU_ZERO(&cpu_set);
-		CPU_SET(cpu, &cpu_set);
 		pthread_attr_setaffinity_np(&thread_tbl[i].attr,
-					    sizeof(cpu_set_t), &cpu_set);
+					    sizeof(cpu_set_t), &thd_mask.set);
 
 		thread_tbl[i].start_args = malloc(sizeof(odp_start_args_t));
 		if (thread_tbl[i].start_args == NULL)
@@ -119,6 +130,8 @@ void odph_linux_pthread_create(odph_linux_pthread_t *thread_tbl, int num,
 
 		pthread_create(&thread_tbl[i].thread, &thread_tbl[i].attr,
 			       odp_run_start_routine, thread_tbl[i].start_args);
+
+		cpu = odp_cpumask_next(&mask, cpu);
 	}
 }
 
@@ -137,30 +150,34 @@ void odph_linux_pthread_join(odph_linux_pthread_t *thread_tbl, int num)
 
 
 int odph_linux_process_fork_n(odph_linux_process_t *proc_tbl,
-			      int num, int first_cpu)
+			      const odp_cpumask_t *mask_in)
 {
-	cpu_set_t cpu_set;
+	odp_cpumask_t mask;
 	pid_t pid;
+	int num;
 	int cpu_count;
 	int cpu;
 	int i;
 
-	memset(proc_tbl, 0, num*sizeof(odph_linux_process_t));
+	odp_cpumask_copy(&mask, mask_in);
+	num = odp_cpumask_count(&mask);
 
-	cpu_count = odp_sys_cpu_count();
+	memset(proc_tbl, 0, num * sizeof(odph_linux_process_t));
 
-	if (first_cpu < 0 || first_cpu >= cpu_count) {
-		ODP_ERR("Bad first_cpu\n");
-		return -1;
-	}
+	cpu_count = odp_sys_cpu_count();
 
-	if (num < 0 || num > cpu_count) {
+	if (num < 1 || num > cpu_count) {
 		ODP_ERR("Bad num\n");
 		return -1;
 	}
 
+	cpu = odp_cpumask_first(&mask);
 	for (i = 0; i < num; i++) {
-		cpu = (first_cpu + i) % cpu_count;
+		odp_cpumask_t proc_mask;
+
+		odp_cpumask_zero(&proc_mask);
+		odp_cpumask_set(&proc_mask, cpu);
+
 		pid = fork();
 
 		if (pid < 0) {
@@ -172,14 +189,13 @@ int odph_linux_process_fork_n(odph_linux_process_t *proc_tbl,
 		if (pid > 0) {
 			proc_tbl[i].pid = pid;
 			proc_tbl[i].cpu = cpu;
+
+			cpu = odp_cpumask_next(&mask, cpu);
 			continue;
 		}
 
 		/* Child process */
-		CPU_ZERO(&cpu_set);
-		CPU_SET(cpu, &cpu_set);
-
-		if (sched_setaffinity(0, sizeof(cpu_set_t), &cpu_set)) {
+		if (sched_setaffinity(0, sizeof(cpu_set_t), &proc_mask.set)) {
 			ODP_ERR("sched_setaffinity() failed\n");
 			return -2;
 		}
@@ -198,7 +214,11 @@ int odph_linux_process_fork_n(odph_linux_process_t *proc_tbl,
 
 int odph_linux_process_fork(odph_linux_process_t *proc, int cpu)
 {
-	return odph_linux_process_fork_n(proc, 1, cpu);
+	odp_cpumask_t mask;
+
+	odp_cpumask_zero(&mask);
+	odp_cpumask_set(&mask, cpu);
+	return odph_linux_process_fork_n(proc, &mask);
 }
 
diff --git a/test/api_test/odp_common.c b/test/api_test/odp_common.c
index 3ea815e..bce6f09 100644
--- a/test/api_test/odp_common.c
+++ b/test/api_test/odp_common.c
@@ -73,8 +73,11 @@ int odp_test_global_init(void)
 /** create test thread */
 int odp_test_thread_create(void *func_ptr(void *), pthrd_arg *arg)
 {
+	odp_cpumask_t cpumask;
+
 	/* Create and init additional threads */
-	odph_linux_pthread_create(thread_tbl, arg->numthrds, 0, func_ptr,
+	odph_linux_cpumask_default(&cpumask, arg->numthrds);
+	odph_linux_pthread_create(thread_tbl, &cpumask, func_ptr,
 				  (void *)arg);
 
 	return 0;
diff --git a/test/performance/odp_scheduling.c b/test/performance/odp_scheduling.c
index 72656c4..bb005d7 100644
--- a/test/performance/odp_scheduling.c
+++ b/test/performance/odp_scheduling.c
@@ -810,14 +810,15 @@ int main(int argc, char *argv[])
 	odph_linux_pthread_t thread_tbl[MAX_WORKERS];
 	test_args_t args;
 	int num_workers;
+	odp_cpumask_t cpumask;
 	odp_buffer_pool_t pool;
 	odp_queue_t queue;
 	int i, j;
 	int prios;
-	int first_cpu;
 	odp_shm_t shm;
 	test_globals_t *globals;
 	odp_buffer_pool_param_t params;
+	char cpumaskstr[64];
 
 	printf("\nODP example starts\n\n");
 
@@ -857,29 +858,21 @@ int main(int argc, char *argv[])
 
 	printf("\n");
 
-	/* A worker thread per CPU */
-	num_workers = odp_sys_cpu_count();
-
+	/* Default to system CPU count unless user specified */
+	num_workers = MAX_WORKERS;
 	if (args.cpu_count)
 		num_workers = args.cpu_count;
 
-	/* force to max CPU count */
-	if (num_workers > MAX_WORKERS)
-		num_workers = MAX_WORKERS;
-
-	printf("num worker threads: %i\n", num_workers);
-
 	/*
 	 * By default CPU #0 runs Linux kernel background tasks.
 	 * Start mapping thread from CPU #1
 	 */
-	first_cpu = 1;
-
-	if (odp_sys_cpu_count() == 1)
-		first_cpu = 0;
-
-	printf("first CPU: %i\n", first_cpu);
+	num_workers = odph_linux_cpumask_default(&cpumask, num_workers);
+	odp_cpumask_to_str(&cpumask, cpumaskstr, sizeof(cpumaskstr));
+	printf("num worker threads: %i\n", num_workers);
+	printf("first CPU: %i\n", odp_cpumask_first(&cpumask));
+	printf("cpu mask: %s\n", cpumaskstr);
 
 	/* Test cycle count accuracy */
 	test_time();
@@ -968,8 +961,7 @@ int main(int argc, char *argv[])
 		odph_linux_process_t proc[MAX_WORKERS];
 
 		/* Fork worker processes */
-		ret = odph_linux_process_fork_n(proc, num_workers,
-						first_cpu);
+		ret = odph_linux_process_fork_n(proc, &cpumask);
 
 		if (ret < 0) {
 			LOG_ERR("Fork workers failed %i\n", ret);
@@ -987,7 +979,7 @@ int main(int argc, char *argv[])
 
 	} else {
 		/* Create and launch worker threads */
-		odph_linux_pthread_create(thread_tbl, num_workers, first_cpu,
+		odph_linux_pthread_create(thread_tbl, &cpumask,
 					  run_thread, NULL);
 
 		/* Wait for worker threads to terminate */
diff --git a/test/validation/common/odp_cunit_common.c b/test/validation/common/odp_cunit_common.c
index 2fab033..4d05b95 100644
--- a/test/validation/common/odp_cunit_common.c
+++ b/test/validation/common/odp_cunit_common.c
@@ -20,8 +20,11 @@ static odph_linux_pthread_t thread_tbl[MAX_WORKERS];
 /** create test thread */
 int odp_cunit_thread_create(void *func_ptr(void *), pthrd_arg *arg)
 {
+	odp_cpumask_t cpumask;
+
 	/* Create and init additional threads */
-	odph_linux_pthread_create(thread_tbl, arg->numthrds, 0, func_ptr,
+	odph_linux_cpumask_default(&cpumask, arg->numthrds);
+	odph_linux_pthread_create(thread_tbl, &cpumask, func_ptr,
 				  (void *)arg);
 
 	return 0;