From patchwork Wed Jul 30 11:10:24 2014
X-Patchwork-Submitter: Santosh Shukla
X-Patchwork-Id: 34511
Date: Wed, 30 Jul 2014 16:40:24 +0530
From: Santosh Shukla <santosh.shukla@linaro.org>
To: lng-odp@lists.linaro.org
Subject: [lng-odp] [BUG] wrong thread_id entry in thread_tbl[] for a few odp applications

Hi,

I noticed that a few of the example ODP applications (l2fwd, generator,
pktio and odp_timer_ping) are likely to store the wrong thread_id
because of the way they use the odp_linux_pthread_create() API.

Example: in example/l2fwd/odp_l2fwd.c, odp_generator.c and
odp_timer_ping.c the code looks like this:

	for (i = 0; i < num_workers; ++i) {
		...
		odp_linux_pthread_create(thread_tbl, 1, core,
					 thr_run_func, &gbl_args->thread[i]);
	}

The above overwrites thread_tbl[0] on every iteration, so if you print
the thread_tbl[] array you will find that only thread_tbl[0] holds a
valid value and the rest are zero. The problem is that the inner for
loop of odp_linux_pthread_create() refills entries in thread_tbl
starting from index 0 up to the requested count (the second argument,
num, which is 1 in the call above).

In my isolation case, as well as in other cases like the generator and
l2fwd apps, we don't want odp_linux_pthread_create() to fill thread_tbl
entries starting from index 0 every time. Specific to my isolation use
case, I want to create isolated threads, each affined to one of the
cores provided by the isolation application, i.e.:

	./odp_isolation cpu_isolist=4,6,10

where cpu_isolist lists the cpus on which to create the threads and to
which they are affined.

Therefore I am proposing one more API, odp_linux_pthread_create_single(),
which explicitly bypasses the inner for loop and avoids flushing the
whole thread_tbl. I have tested this API with an existing application,
odp_timer_ping, and it works fine; it should work equally well for the
other applications, i.e. generator and l2fwd. Pasting a snap here before
sending out a formal patch to the list; first, a short standalone sketch
of the overwrite behaviour, then the patch itself.
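The sketch below is not the ODP implementation; it is a minimal,
self-contained mimic of the inner loop described above (fake_pthread_t
and fake_pthread_create() are invented names for illustration only).
Compiling and running it shows that only entry 0 survives:

	/* Standalone mimic of the overwrite: like odp_linux_pthread_create(),
	 * the helper always fills thread_tbl starting from index 0, so
	 * calling it with num == 1 inside an application loop keeps
	 * overwriting entry 0. */
	#include <stdio.h>

	#define NUM_WORKERS 4

	typedef struct {
		unsigned long thread;	/* stands in for pthread_t */
	} fake_pthread_t;

	/* fills entries [0, num) regardless of the caller's loop index */
	static void fake_pthread_create(fake_pthread_t *thread_tbl, int num,
					unsigned long id)
	{
		int i;

		for (i = 0; i < num; i++)
			thread_tbl[i].thread = id;
	}

	int main(void)
	{
		fake_pthread_t thread_tbl[NUM_WORKERS] = {0};
		int i;

		/* the application loop pattern from odp_l2fwd.c */
		for (i = 0; i < NUM_WORKERS; i++)
			fake_pthread_create(thread_tbl, 1, 1000 + i);

		/* only thread_tbl[0] ends up valid; the rest stay zero */
		for (i = 0; i < NUM_WORKERS; i++)
			printf("thread_tbl[%d].thread = %lu\n",
			       i, thread_tbl[i].thread);

		return 0;
	}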
----------------

diff --git a/include/helper/odp_linux.h b/include/helper/odp_linux.h
index 3076c22..f59819c 100644
--- a/include/helper/odp_linux.h
+++ b/include/helper/odp_linux.h
@@ -48,6 +48,11 @@ void odp_linux_pthread_create(odp_linux_pthread_t *thread_tbl, int num,
 		int first_core,
 		void *(*start_routine) (void *), void *arg);
 
+void odp_linux_pthread_create_single(odp_linux_pthread_t *thread_tbl,
+		int thd_tbl_idx,
+		int first_core,
+		void *(*start_routine) (void *), void *arg);
+
 
 /**
  * Waits pthreads to exit
diff --git a/platform/linux-generic/odp_linux.c b/platform/linux-generic/odp_linux.c
index 6e2b448..f10dfa1 100644
--- a/platform/linux-generic/odp_linux.c
+++ b/platform/linux-generic/odp_linux.c
@@ -85,6 +85,53 @@ void odp_linux_pthread_create(odp_linux_pthread_t *thread_tbl, int num,
 }
 
 
+void odp_linux_pthread_create_single(odp_linux_pthread_t *thread_tbl,
+		int thd_tbl_idx, int first_core,
+		void *(*start_routine) (void *), void *arg)
+{
+	int i;
+	cpu_set_t cpu_set;
+	odp_start_args_t *start_args;
+	int core_count;
+	int cpu;
+
+	core_count = odp_sys_core_count();
+
+	assert((first_core >= 0) && (first_core < core_count));
+	assert((thd_tbl_idx >= 0) && (thd_tbl_idx < core_count));
+
+	/* check that this thread_tbl idx is free for use */
+	if (thread_tbl[thd_tbl_idx].thread != 0) {
+		/* ODP_ERR("thread_tbl idx[%d] used\n", thd_tbl_idx); */
+		printf("thread_tbl idx[%d] used\n", thd_tbl_idx);
+		return;
+	}
+
+	/* flush that tbl_idx entry only */
+	memset(thread_tbl + thd_tbl_idx, 0, sizeof(odp_linux_pthread_t));
+	i = thd_tbl_idx;
+
+	/* now create a thread, affined to first_core */
+	pthread_attr_init(&thread_tbl[i].attr);
+
+	CPU_ZERO(&cpu_set);
+
+	cpu = first_core % core_count;
+	CPU_SET(cpu, &cpu_set);
+
+	pthread_attr_setaffinity_np(&thread_tbl[i].attr,
+			sizeof(cpu_set_t), &cpu_set);
+
+	start_args = malloc(sizeof(odp_start_args_t));
+	memset(start_args, 0, sizeof(odp_start_args_t));
+	start_args->start_routine = start_routine;
+	start_args->arg = arg;
+
+	start_args->thr_id = odp_thread_create(cpu);
+
+	pthread_create(&thread_tbl[i].thread, &thread_tbl[i].attr,
+			odp_run_start_routine, start_args);
+}
+
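For completeness, a hypothetical call-site sketch showing how the
isolation application (or, with a worker-index loop, l2fwd/generator)
could use the proposed API. The create_isolated_threads() wrapper and
the isolist[], num_isolated and args[] names are assumptions for
illustration and are not part of the patch:

	/* Hypothetical call site: create one isolated thread per cpu
	 * listed in cpu_isolist=4,6,10, each stored at its own
	 * thread_tbl index, so no entry gets overwritten. */
	static void create_isolated_threads(odp_linux_pthread_t *thread_tbl,
					    void *(*thr_run_func)(void *),
					    void **args)
	{
		int isolist[] = {4, 6, 10};	/* parsed from cpu_isolist */
		int num_isolated = 3;
		int i;

		for (i = 0; i < num_isolated; i++)
			odp_linux_pthread_create_single(thread_tbl,
					i,		/* thd_tbl_idx */
					isolist[i],	/* first_core */
					thr_run_func,
					args[i]);
	}

Because each thread lands in its own slot, the whole table remains
valid when the application later waits on it.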