From patchwork Mon Apr 7 15:04:25 2014
X-Patchwork-Submitter: Jean Pihet
X-Patchwork-Id: 27912
From: Jean Pihet
To: Borislav Petkov, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Jiri Olsa, linux-kernel@vger.kernel.org, Robert Richter
Cc: Robert Richter, Jean Pihet
Subject: [PATCH 03/16] perf, mmap: Factor out perf_alloc/free_rb()
Date: Mon, 7 Apr 2014 17:04:25 +0200
Message-Id: <1396883078-25320-4-git-send-email-jean.pihet@linaro.org>
In-Reply-To: <1396883078-25320-1-git-send-email-jean.pihet@linaro.org>
References: <1396883078-25320-1-git-send-email-jean.pihet@linaro.org>

From: Robert Richter

Factor out the code that allocates and deallocates ring buffers into perf_alloc_rb() and perf_free_rb(). We need these helpers later to set up the sampling buffer for persistent events. While at it, replace get_current_user() with get_uid(user).
Signed-off-by: Robert Richter
Signed-off-by: Jean Pihet
---
 kernel/events/core.c     | 77 +++++++++++++++++++++++++++++-------------------
 kernel/events/internal.h |  3 ++
 2 files changed, 50 insertions(+), 30 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 5eaba42..22ec8f0 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3193,7 +3193,45 @@ static void free_event_rcu(struct rcu_head *head)
 }
 
 static void ring_buffer_put(struct ring_buffer *rb);
+static void ring_buffer_attach(struct perf_event *event, struct ring_buffer *rb);
 static void ring_buffer_detach(struct perf_event *event, struct ring_buffer *rb);
+static void perf_event_init_userpage(struct perf_event *event);
+
+/*
+ * Must be called with &event->mmap_mutex held. event->rb must be
+ * NULL. perf_alloc_rb() requires &event->mmap_count to be incremented
+ * on success which corresponds to &rb->mmap_count that is initialized
+ * with 1.
+ */
+int perf_alloc_rb(struct perf_event *event, int nr_pages, int flags)
+{
+	struct ring_buffer *rb;
+
+	rb = rb_alloc(nr_pages,
+		event->attr.watermark ? event->attr.wakeup_watermark : 0,
+		event->cpu, flags);
+	if (!rb)
+		return -ENOMEM;
+
+	atomic_set(&rb->mmap_count, 1);
+	ring_buffer_attach(event, rb);
+	rcu_assign_pointer(event->rb, rb);
+
+	perf_event_init_userpage(event);
+	perf_event_update_userpage(event);
+
+	return 0;
+}
+
+/* Must be called with &event->mmap_mutex held. event->rb must be set.
+ */
+void perf_free_rb(struct perf_event *event)
+{
+	struct ring_buffer *rb = event->rb;
+
+	rcu_assign_pointer(event->rb, NULL);
+	ring_buffer_detach(event, rb);
+	ring_buffer_put(rb);
+}
 
 static void unaccount_event_cpu(struct perf_event *event, int cpu)
 {
@@ -3246,6 +3284,7 @@ static void __free_event(struct perf_event *event)
 	call_rcu(&event->rcu_head, free_event_rcu);
 }
 
+
 static void free_event(struct perf_event *event)
 {
 	irq_work_sync(&event->pending);
@@ -3253,8 +3292,6 @@ static void free_event(struct perf_event *event)
 	unaccount_event(event);
 
 	if (event->rb) {
-		struct ring_buffer *rb;
-
 		/*
 		 * Can happen when we close an event with re-directed output.
 		 *
@@ -3262,12 +3299,8 @@ static void free_event(struct perf_event *event)
 		 * over us; possibly making our ring_buffer_put() the last.
 		 */
 		mutex_lock(&event->mmap_mutex);
-		rb = event->rb;
-		if (rb) {
-			rcu_assign_pointer(event->rb, NULL);
-			ring_buffer_detach(event, rb);
-			ring_buffer_put(rb); /* could be last */
-		}
+		if (event->rb)
+			perf_free_rb(event);
 		mutex_unlock(&event->mmap_mutex);
 	}
 
@@ -3901,11 +3934,8 @@ again:
 		 * still restart the iteration to make sure we're not now
 		 * iterating the wrong list.
 		 */
-		if (event->rb == rb) {
-			rcu_assign_pointer(event->rb, NULL);
-			ring_buffer_detach(event, rb);
-			ring_buffer_put(rb); /* can't be last, we still have one */
-		}
+		if (event->rb == rb)
+			perf_free_rb(event);
 		mutex_unlock(&event->mmap_mutex);
 		put_event(event);
 
@@ -4041,7 +4071,6 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 	unsigned long user_locked, user_lock_limit;
 	struct user_struct *user = current_user();
 	unsigned long locked, lock_limit;
-	struct ring_buffer *rb;
 	unsigned long vma_size;
 	unsigned long nr_pages;
 	long user_extra, extra;
@@ -4125,28 +4154,16 @@ again:
 	if (vma->vm_flags & VM_WRITE)
 		flags |= RING_BUFFER_WRITABLE;
 
-	rb = rb_alloc(nr_pages,
-		event->attr.watermark ? event->attr.wakeup_watermark : 0,
-		event->cpu, flags);
-
-	if (!rb) {
-		ret = -ENOMEM;
+	ret = perf_alloc_rb(event, nr_pages, flags);
+	if (ret)
 		goto unlock;
-	}
 
-	atomic_set(&rb->mmap_count, 1);
-	rb->mmap_locked = extra;
-	rb->mmap_user = get_current_user();
+	event->rb->mmap_locked = extra;
+	event->rb->mmap_user = get_uid(user);
 
 	atomic_long_add(user_extra, &user->locked_vm);
 	vma->vm_mm->pinned_vm += extra;
 
-	ring_buffer_attach(event, rb);
-	rcu_assign_pointer(event->rb, rb);
-
-	perf_event_init_userpage(event);
-	perf_event_update_userpage(event);
-
 unlock:
 	if (!ret)
 		atomic_inc(&event->mmap_count);
diff --git a/kernel/events/internal.h b/kernel/events/internal.h
index 3bd89d4..e9007ff 100644
--- a/kernel/events/internal.h
+++ b/kernel/events/internal.h
@@ -207,4 +207,7 @@ static inline void put_event(struct perf_event *event)
 	__put_event(event);
 }
 
+extern int perf_alloc_rb(struct perf_event *event, int nr_pages, int flags);
+extern void perf_free_rb(struct perf_event *event);
+
 #endif /* _KERNEL_EVENTS_INTERNAL_H */