
[4/4] perf core: Add backward attribute to perf event

Message ID 1459147292-239310-5-git-send-email-wangnan0@huawei.com
State Superseded

Commit Message

Wang Nan March 28, 2016, 6:41 a.m. UTC
This patch introduces a 'write_backward' bit to perf_event_attr, which
controls the direction of a ring buffer. When set, the corresponding
ring buffer is written from end to beginning. This feature is designed
to support reading from an overwritable ring buffer.

A ring buffer is created by mmap()ing a perf event fd. The kernel puts
event records into the ring buffer, and user programs like perf fetch
them from the address returned by mmap(). To prevent racing between
kernel and perf, they communicate with each other through the 'head'
and 'tail' pointers. The kernel maintains the 'head' pointer and points
it to the next free area (the end of the last record). Perf maintains
the 'tail' pointer and points it to the end of the last consumed record
(a record that has already been fetched). Using these two pointers, the
kernel determines the available space in the ring buffer and avoids
overwriting unfetched records.
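
As a reference, a minimal sketch of the reader side of this protocol (an
assumed user-space fragment, not part of this patch; 'base' is the address
returned by mmap(), the data area starts one page after it and is
'data_size' bytes long):

    struct perf_event_mmap_page *pg = base;  /* control page */
    char *data = (char *)base + page_size;   /* ring buffer data area */
    struct perf_event_header *hdr;
    __u64 head, tail;

    head = pg->data_head;
    /* rmb(): read record data only after reading data_head */
    tail = pg->data_tail;
    while (tail < head) {
        hdr = (struct perf_event_header *)(data + (tail & (data_size - 1)));
        /* ... decode one record at hdr (copy it out first if it wraps) ... */
        tail += hdr->size;
    }
    /* mb(): finish reading before telling the kernel the space is free */
    pg->data_tail = tail;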

Mapping without 'PROT_WRITE' creates an overwritable ring buffer.
Unlike a normal ring buffer, perf is unable to maintain the 'tail'
pointer because writing is forbidden. Therefore, for this type of ring
buffer, the kernel overwrites old records unconditionally and works
like a flight recorder. This feature would be useful if reading from an
overwritable ring buffer were as easy as reading from a normal one.
However, there is an obscure problem.

The following figure demonstrates the state of an overwritable ring
buffer which is nearly full. In this figure, the 'head' pointer points
to the end of the last record, and a long record 'E' is pending. For a
normal ring buffer, the 'tail' pointer would point to position (X), so
the kernel knows there is no more space in the ring buffer. However,
for an overwritable ring buffer, the kernel does not care about the
'tail' pointer.

   (X)                              head
    .                                |
    .                                V
    +------+-------+----------+------+---+
    |A....A|B.....B|C........C|D....D|   |
    +------+-------+----------+------+---+

After writing record 'E', record 'A' is overwritten.

      head
       |
       V
    +--+---+-------+----------+------+---+
    |.E|..A|B.....B|C........C|D....D|E..|
    +--+---+-------+----------+------+---+

Now perf decides to read from this ring buffer. However, neither of the
two natural positions, 'head' and the start of the ring buffer, points
to the beginning of a record. Even if perf can read the full ring
buffer, it is unable to find a position from which to start decoding.

The first attempt to solve this problem (AFAIK) can be found in [1]. It
makes the kernel maintain the 'tail' pointer, updating it when the ring
buffer is half full. However, this approach introduces overhead to the
fast path; test results show a 1% overhead [2]. In addition, this
method utilizes no more than 50% of the records.

Another attempt can be found in [3], which puts the size of an event at
the end of each record. This allows perf to find records backward from
the 'head' pointer by reading the size of a record from its tail.
However, because of alignment requirements, it needs 8 bytes to record
the size of a record, which is a huge waste. Its performance is also
not good, because more data needs to be written. This approach also
introduces extra branch instructions into the fast path.

'write_backward' is a better solution to this problem.

The following figure demonstrates the state of the overwritable ring
buffer when 'write_backward' is set, before overwriting:

       head
        |
        V
    +---+------+----------+-------+------+
    |   |D....D|C........C|B.....B|A....A|
    +---+------+----------+-------+------+

and after overwriting:
                                     head
                                      |
                                      V
    +---+------+----------+-------+---+--+
    |..E|D....D|C........C|B.....B|A..|E.|
    +---+------+----------+-------+---+--+

In both situations, 'head' points to the beginning of the newest
record. Starting from this record, perf can iterate over the full ring
buffer, fetching as many records as possible one by one.
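
A rough sketch of such an iteration (illustrative only, not part of this
patch; 'data' and 'size' describe the data area, 'head' is the value read
from the user page):

    __u64 offset = head;
    __u64 end = head + size;
    struct perf_event_header *hdr;

    /* walk records from newest to oldest */
    while (offset < end) {
        hdr = (struct perf_event_header *)(data + (offset & (size - 1)));
        if (hdr->size == 0 || offset + hdr->size > end)
            break;  /* unused gap or partially overwritten record */
        /* ... decode one record at hdr (copy it out first if it wraps) ... */
        offset += hdr->size;
    }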

The only limitation that needs to be considered is back-to-back
reading. Because the user program is non-deterministic, it is
impossible to ensure the ring buffer stays stable during reading.
Consider an extreme situation: perf is scheduled out after reading
record 'D', then a burst of events comes and eats up the whole ring
buffer (one or multiple rounds), but the 'head' pointer happens to be
at the same position when perf comes back. Continuing to read after 'D'
is now incorrect.

To prevent this problem, we need to find a way to ensure the ring buffer
is stable during reading. ioctl(PERF_EVENT_IOC_PAUSE_OUTPUT) is
suggested because its overhead is lower than
ioctl(PERF_EVENT_IOC_ENABLE).
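
A sketch of the intended usage (assuming the PERF_EVENT_IOC_PAUSE_OUTPUT
ioctl introduced earlier in this series, where a non-zero argument pauses
output and zero resumes it):

    ioctl(fd, PERF_EVENT_IOC_PAUSE_OUTPUT, 1); /* freeze the ring buffer */
    /* ... read records backward, starting from 'head' ... */
    ioctl(fd, PERF_EVENT_IOC_PAUSE_OUTPUT, 0); /* resume recording */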

This patch utilizes the event's default overflow_handler introduced
previously. perf_event_output_backward() is created as the default
overflow handler for backward ring buffers. To avoid adding overhead to
the fast path, the original perf_event_output() becomes
__perf_event_output() and is marked '__always_inline'. In theory, no
extra overhead is introduced to the fast path.

Performance result:

Call 'close(-1)' 3,000,000 times and use gettimeofday() to measure the
duration. Use 'perf record -o /dev/null -e raw_syscalls:*' to capture
the system calls. Durations are in microseconds (us).
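
A rough sketch of the benchmark loop described above (assumed setup; the
exact harness is not shown here):

    struct timeval tv1, tv2;
    long us1, us2;
    int i;

    gettimeofday(&tv1, NULL);
    for (i = 0; i < 3000000; i++)
        close(-1);
    gettimeofday(&tv2, NULL);
    us1 = tv1.tv_sec * 1000000 + tv1.tv_usec;
    us2 = tv2.tv_sec * 1000000 + tv2.tv_usec;
    printf("%ld\n", us2 - us1);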

Testing environment:

 CPU    : Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz
 Kernel : v4.5.0
                   MEAN         STDVAR
BASE            800214.950    2853.083
PRE1           2253846.700    9997.014
PRE2           2257495.540    8516.293
POST           2250896.100    8933.921

Here 'BASE' is the performance without capturing. 'PRE1' is the result
on a pure v4.5.0 kernel. 'PRE2' is the result before this patch. 'POST'
is the result after this patch. See [4] for the detailed experimental
setup.

Considering the stdvar, this patch doesn't introduce measurable
overhead to the fast path.

[1] http://lkml.iu.edu/hypermail/linux/kernel/1304.1/04584.html
[2] http://lkml.iu.edu/hypermail/linux/kernel/1307.1/00535.html
[3] http://lkml.iu.edu/hypermail/linux/kernel/1512.0/01265.html
[4] http://lkml.kernel.org/g/56F89DCD.1040202@huawei.com

Signed-off-by: Wang Nan <wangnan0@huawei.com>

Cc: He Kuang <hekuang@huawei.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
---
 include/linux/perf_event.h      | 28 +++++++++++++++++++++---
 include/uapi/linux/perf_event.h |  3 ++-
 kernel/events/core.c            | 48 ++++++++++++++++++++++++++++++++++++-----
 kernel/events/ring_buffer.c     | 14 ++++++++++++
 4 files changed, 84 insertions(+), 9 deletions(-)

-- 
1.8.3.4

Comments

Wang Nan March 29, 2016, 2:01 a.m. UTC | #1
On 2016/3/28 14:41, Wang Nan wrote:

[SNIP]

>

> To prevent this problem, we need to find a way to ensure the ring buffer

> is stable during reading. ioctl(PERF_EVENT_IOC_PAUSE_OUTPUT) is

> suggested because its overhead is lower than

> ioctl(PERF_EVENT_IOC_ENABLE).

>


Add comment:

By carefully checking the record header against the 'head' pointer, a
reader can avoid pausing the ring buffer. For example:

     /* A union of all possible events */
     union perf_event event;

     p = head = perf_mmap__read_head();
     while (true) {
         /* copy header of next event */
         fetch(&event.header, p, sizeof(event.header));

         /* read 'head' pointer */
         head = perf_mmap__read_head();

         /* check overwritten: is the header good? */
         if (!verify(sizeof(event.header), p, head))
             break;

         /* copy the whole event */
         fetch(&event, p, event.header.size);

         /* read 'head' pointer again */
         head = perf_mmap__read_head();

         /* is the whole event good? */
         if (!verify(event.header.size, p, head))
             break;
         p += event.header.size;
     }

However, the overhead is high because:

  a) In-place decoding is unsafe; a copy-verify-decode sequence is required.
  b) Fetching the 'head' pointer requires additional synchronization.
Wang Nan March 29, 2016, 5:59 a.m. UTC | #2
On 2016/3/29 12:59, Alexei Starovoitov wrote:
> On Tue, Mar 29, 2016 at 10:01:24AM +0800, Wangnan (F) wrote:

>>

>> On 2016/3/28 14:41, Wang Nan wrote:

>>

>> [SNIP]

>>

>>> To prevent this problem, we need to find a way to ensure the ring buffer

>>> is stable during reading. ioctl(PERF_EVENT_IOC_PAUSE_OUTPUT) is

>>> suggested because its overhead is lower than

>>> ioctl(PERF_EVENT_IOC_ENABLE).

>>>

>> Add comment:

>>

>> By carefully verifying 'header' pointer, reader can avoid pausing the

>> ring-buffer. For example:

>>

>>      /* A union of all possible events */

>>      union perf_event event;

>>

>>      p = head = perf_mmap__read_head();

>>      while (true) {

>>          /* copy header of next event */

>>          fetch(&event.header, p, sizeof(event.header));

>>

>>          /* read 'head' pointer */

>>          head = perf_mmap__read_head();

>>

>>          /* check overwritten: is the header good? */

>>          if (!verify(sizeof(event.header), p, head))

>>              break;

>>

>>          /* copy the whole event */

>>          fetch(&event, p, event.header.size);

>>

>>          /* read 'head' pointer again */

>>          head = perf_mmap__read_head();

>>

>>          /* is the whole event good? */

>>          if (!verify(event.header.size, p, head))

>>              break;

>>          p += event.header.size;

>>      }

>>

>> However, the overhead is high because:

>>

>>   a) In-place decoding is unsafe. Copy-verifying-decode is required.

>>   b) Fetching 'head' pointer requires additional synchronization.

> Such trick may work, but pause is needed for more than stability

> of reading. When we collect the events into overwrite buffer

> we're waiting for some other trigger (like all cpu utilization

> spike or just one cpu running and all others are idle) and when

> it happens the buffer has valuable info from the past. At this

> point new events are no longer interesting and buffer should

> be paused, events read and unpaused until next trigger comes.


Agree. I just wanted to provide an alternative method.
I'm trying to point out that pausing is not mandatory
but highly recommended in the man page and commit
messages.

Thank you.
Wang Nan March 30, 2016, 2:28 a.m. UTC | #3
On 2016/3/29 22:04, Peter Zijlstra wrote:
> On Mon, Mar 28, 2016 at 06:41:32AM +0000, Wang Nan wrote:

>

> Could you maybe write a perf/tests thingy for this so that _some_

> userspace exists that exercises this new code?

>

>

>>   int perf_output_begin(struct perf_output_handle *handle,

>>   		      struct perf_event *event, unsigned int size)

>>   {

>> +	if (unlikely(is_write_backward(event)))

>> +		return __perf_output_begin(handle, event, size, true);

>>   	return __perf_output_begin(handle, event, size, false);

>>   }

> Would something like:

>

> int perf_output_begin(...)

> {

> 	if (unlikely(is_write_backward(event))

> 		return perf_output_begin_backward(...);

> 	return perf_output_begin_forward(...);

> }

>

> make sense; I'm not sure how much is still using this, but it seems

> somewhat excessive to inline two copies of that thing into a single

> function.


perf_output_begin() is still used in many places:

$ grep perf_output_begin ./kernel -r
./kernel/events/ring_buffer.c:     * See perf_output_begin().
./kernel/events/ring_buffer.c:int perf_output_begin(struct perf_output_handle *handle,
./kernel/events/ring_buffer.c:     * perf_output_begin() only checks rb->paused, therefore
./kernel/events/core.c:    if (perf_output_begin(&handle, event, header.size))
./kernel/events/core.c:    ret = perf_output_begin(&handle, event, read_event.header.size);
./kernel/events/core.c:    ret = perf_output_begin(&handle, event,
./kernel/events/core.c:    ret = perf_output_begin(&handle, event,
./kernel/events/core.c:    ret = perf_output_begin(&handle, event,
./kernel/events/core.c:    ret = perf_output_begin(&handle, event, rec.header.size);
./kernel/events/core.c:    ret = perf_output_begin(&handle, event,
./kernel/events/core.c:    ret = perf_output_begin(&handle, event, se->event_id.header.size);
./kernel/events/core.c:    ret = perf_output_begin(&handle, event,
./kernel/events/core.c:    ret = perf_output_begin(&handle, event, rec.header.size);

Events like PERF_RECORD_MMAP2 use this function, so we still need to
consider its overhead.

So I will use your first suggestion.

Thank you.
Wang Nan March 30, 2016, 2:38 a.m. UTC | #4
On 2016/3/30 10:28, Wangnan (F) wrote:
>

>

> On 2016/3/29 22:04, Peter Zijlstra wrote:

>> On Mon, Mar 28, 2016 at 06:41:32AM +0000, Wang Nan wrote:

>>

>> Could you maybe write a perf/tests thingy for this so that _some_

>> userspace exists that exercises this new code?

>>

>>

>>>   int perf_output_begin(struct perf_output_handle *handle,

>>>                 struct perf_event *event, unsigned int size)

>>>   {

>>> +    if (unlikely(is_write_backward(event)))

>>> +        return __perf_output_begin(handle, event, size, true);

>>>       return __perf_output_begin(handle, event, size, false);

>>>   }

>> Would something like:

>>

>> int perf_output_begin(...)

>> {

>>     if (unlikely(is_write_backward(event))

>>         return perf_output_begin_backward(...);

>>     return perf_output_begin_forward(...);

>> }

>>

>> make sense; I'm not sure how much is still using this, but it seems

>> somewhat excessive to inline two copies of that thing into a single

>> function.

>

> perf_output_begin is useful:
>
> $ grep perf_output_begin ./kernel -r
> ./kernel/events/ring_buffer.c:     * See perf_output_begin().
> ./kernel/events/ring_buffer.c:int perf_output_begin(struct perf_output_handle *handle,
> ./kernel/events/ring_buffer.c:     * perf_output_begin() only checks rb->paused, therefore
> ./kernel/events/core.c:    if (perf_output_begin(&handle, event, header.size))
> ./kernel/events/core.c:    ret = perf_output_begin(&handle, event, read_event.header.size);
> ./kernel/events/core.c:    ret = perf_output_begin(&handle, event,
> ./kernel/events/core.c:    ret = perf_output_begin(&handle, event,
> ./kernel/events/core.c:    ret = perf_output_begin(&handle, event,
> ./kernel/events/core.c:    ret = perf_output_begin(&handle, event, rec.header.size);
> ./kernel/events/core.c:    ret = perf_output_begin(&handle, event,
> ./kernel/events/core.c:    ret = perf_output_begin(&handle, event, se->event_id.header.size);
> ./kernel/events/core.c:    ret = perf_output_begin(&handle, event,
> ./kernel/events/core.c:    ret = perf_output_begin(&handle, event, rec.header.size);
>
> Events like PERF_RECORD_MMAP2 uses this function, so we still need to
> consider its overhead.

>

> So I will use your first suggestion.

>


Sorry. Your second suggestion also seems good:

My implementation makes a big perf_output_begin(), but introduces only 
one load and one branch.

Your first suggestion introduces one load, one branch and one function call.

Your second suggestion introduces one load, and at least one (and at 
most three) branches.

I need some benchmarking results.

Thank you.
Wang Nan April 5, 2016, 2:05 p.m. UTC | #5
On 2016/3/30 10:38, Wangnan (F) wrote:
>

>

> On 2016/3/30 10:28, Wangnan (F) wrote:

>>

>>

>> On 2016/3/29 22:04, Peter Zijlstra wrote:

>>> On Mon, Mar 28, 2016 at 06:41:32AM +0000, Wang Nan wrote:

>>>

>>> Could you maybe write a perf/tests thingy for this so that _some_

>>> userspace exists that exercises this new code?

>>>

>>>

>>>>   int perf_output_begin(struct perf_output_handle *handle,

>>>>                 struct perf_event *event, unsigned int size)

>>>>   {

>>>> +    if (unlikely(is_write_backward(event)))

>>>> +        return __perf_output_begin(handle, event, size, true);

>>>>       return __perf_output_begin(handle, event, size, false);

>>>>   }

>>> Would something like:

>>>

>>> int perf_output_begin(...)

>>> {

>>>     if (unlikely(is_write_backward(event))

>>>         return perf_output_begin_backward(...);

>>>     return perf_output_begin_forward(...);

>>> }

>>>

>>> make sense; I'm not sure how much is still using this, but it seems

>>> somewhat excessive to inline two copies of that thing into a single

>>> function.

>>

>>


[SNIP]

>

> Sorry. Your second suggestion seems also good:

>

> My implementation makes a big perf_output_begin(), but introduces only 

> one load and one branch.

>

> Your first suggestion introduces one load, one branch and one function 

> call.

>

> Your second suggestion introduces one load, and at least one (and at 

> most three) branches.

>

> I need some benchmarking result.

>

> Thank you.


No obvious performance divergence among the 3 implementations.

Here are some numbers:

I tested the cost of generating PERF_RECORD_COMM events using prctl()
with the following code:

         ...
         gettimeofday(&tv1, NULL);
         for (i = 0; i < 1000 * 1000 * 3; i++) {
                 char proc_name[10];

                 snprintf(proc_name, sizeof(proc_name), "p:%d\n", i);
                 prctl(PR_SET_NAME, proc_name);
         }
         gettimeofday(&tv2, NULL);
         us1 = tv1.tv_sec * 1000000 + tv1.tv_usec;
         us2 = tv2.tv_sec * 1000000 + tv2.tv_usec;
         printf("%ld\n", us2 - us1);
         ...

Run this benchmark 100 times in each experiment. Bind the benchmark to core 2
and perf to core 1 so that each always runs on a fixed, separate CPU.
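
One way to do this pinning (assumed here for illustration; the exact
commands are not shown in this thread) is taskset(1):

  # taskset -c 1 perf record ...     # perf on core 1
  # taskset -c 2 ./comm-bench        # hypothetical name for the benchmark above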

Result:

BASE    : execute without perf
4.5     : pure v4.5
TIP     : with only patch 1-3/4 in this patch set applied
BIGFUNC : the implementation in my original patch
FUNCCALL: the implementation in Peter's first suggestion:
    int perf_output_begin(...)
    {
        if (unlikely(is_write_backward(event)))
            return perf_output_begin_backward(...);
        return perf_output_begin_forward(...);
    }
BRANCH : the implementation in Peter's second suggestion:
     int perf_output_begin(...)
     {
         return __perf_output_begin(..., unlikely(event->attr.write_backward));
     }


'perf' is executed using:
  # perf record -o /dev/null --no-buildid-cache -e syscalls:sys_enter_read ...


Results:

              MEAN       STDVAR
BASE    : 1122968.85   33492.52
4.5     : 2714200.70   26231.69
TIP     : 2646260.46   32610.56
BIGFUNC : 2661308.46   52707.47
FUNCCALL: 2636061.10   52607.80
BRANCH  : 2651335.74   34910.04


Considering the stdvar, the performance results are nearly identical.

I'd like to choose 'BRANCH' because its code looks better.

Thank you.
Wang Nan April 7, 2016, 9:45 a.m. UTC | #6
On 2016/3/29 22:04, Peter Zijlstra wrote:
> On Mon, Mar 28, 2016 at 06:41:32AM +0000, Wang Nan wrote:

>

> Could you maybe write a perf/tests thingy for this so that _some_

> userspace exists that exercises this new code?

>

>


Yes. Please see:

http://lkml.kernel.org/r/1460022180-61262-1-git-send-email-wangnan0@huawei.com

Thank you.

Patch

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 4065ca2..0cc36ad 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -834,14 +834,24 @@  extern int perf_event_overflow(struct perf_event *event,
 				 struct perf_sample_data *data,
 				 struct pt_regs *regs);
 
+extern void perf_event_output_forward(struct perf_event *event,
+				     struct perf_sample_data *data,
+				     struct pt_regs *regs);
+extern void perf_event_output_backward(struct perf_event *event,
+				       struct perf_sample_data *data,
+				       struct pt_regs *regs);
 extern void perf_event_output(struct perf_event *event,
-				struct perf_sample_data *data,
-				struct pt_regs *regs);
+			      struct perf_sample_data *data,
+			      struct pt_regs *regs);
 
 static inline bool
 is_default_overflow_handler(struct perf_event *event)
 {
-	return (event->overflow_handler == perf_event_output);
+	if (likely(event->overflow_handler == perf_event_output_forward))
+		return true;
+	if (unlikely(event->overflow_handler == perf_event_output_backward))
+		return true;
+	return false;
 }
 
 extern void
@@ -1042,8 +1052,20 @@  static inline bool has_aux(struct perf_event *event)
 	return event->pmu->setup_aux;
 }
 
+static inline bool is_write_backward(struct perf_event *event)
+{
+	return !!event->attr.write_backward;
+}
+
 extern int perf_output_begin(struct perf_output_handle *handle,
 			     struct perf_event *event, unsigned int size);
+extern int perf_output_begin_forward(struct perf_output_handle *handle,
+				    struct perf_event *event,
+				    unsigned int size);
+extern int perf_output_begin_backward(struct perf_output_handle *handle,
+				      struct perf_event *event,
+				      unsigned int size);
+
 extern void perf_output_end(struct perf_output_handle *handle);
 extern unsigned int perf_output_copy(struct perf_output_handle *handle,
 			     const void *buf, unsigned int len);
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index a3c1903..43fc8d2 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -340,7 +340,8 @@  struct perf_event_attr {
 				comm_exec      :  1, /* flag comm events that are due to an exec */
 				use_clockid    :  1, /* use @clockid for time fields */
 				context_switch :  1, /* context switch data */
-				__reserved_1   : 37;
+				write_backward :  1, /* Write ring buffer from end to beginning */
+				__reserved_1   : 36;
 
 	union {
 		__u32		wakeup_events;	  /* wakeup every n events */
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 3bd4b2b..41a2614 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5641,9 +5641,13 @@  void perf_prepare_sample(struct perf_event_header *header,
 	}
 }
 
-void perf_event_output(struct perf_event *event,
-			struct perf_sample_data *data,
-			struct pt_regs *regs)
+static void __always_inline
+__perf_event_output(struct perf_event *event,
+		    struct perf_sample_data *data,
+		    struct pt_regs *regs,
+		    int (*output_begin)(struct perf_output_handle *,
+					struct perf_event *,
+					unsigned int))
 {
 	struct perf_output_handle handle;
 	struct perf_event_header header;
@@ -5653,7 +5657,7 @@  void perf_event_output(struct perf_event *event,
 
 	perf_prepare_sample(&header, data, event, regs);
 
-	if (perf_output_begin(&handle, event, header.size))
+	if (output_begin(&handle, event, header.size))
 		goto exit;
 
 	perf_output_sample(&handle, &header, data, event);
@@ -5664,6 +5668,30 @@  exit:
 	rcu_read_unlock();
 }
 
+void
+perf_event_output_forward(struct perf_event *event,
+			 struct perf_sample_data *data,
+			 struct pt_regs *regs)
+{
+	__perf_event_output(event, data, regs, perf_output_begin_forward);
+}
+
+void
+perf_event_output_backward(struct perf_event *event,
+			   struct perf_sample_data *data,
+			   struct pt_regs *regs)
+{
+	__perf_event_output(event, data, regs, perf_output_begin_backward);
+}
+
+void
+perf_event_output(struct perf_event *event,
+		  struct perf_sample_data *data,
+		  struct pt_regs *regs)
+{
+	__perf_event_output(event, data, regs, perf_output_begin);
+}
+
 /*
  * read event_id
  */
@@ -8017,8 +8045,11 @@  perf_event_alloc(struct perf_event_attr *attr, int cpu,
 	if (overflow_handler) {
 		event->overflow_handler	= overflow_handler;
 		event->overflow_handler_context = context;
+	} else if (is_write_backward(event)){
+		event->overflow_handler = perf_event_output_backward;
+		event->overflow_handler_context = NULL;
 	} else {
-		event->overflow_handler = perf_event_output;
+		event->overflow_handler = perf_event_output_forward;
 		event->overflow_handler_context = NULL;
 	}
 
@@ -8253,6 +8284,13 @@  perf_event_set_output(struct perf_event *event, struct perf_event *output_event)
 		goto out;
 
 	/*
+	 * Either writing ring buffer from beginning or from end.
+	 * Mixing is not allowed.
+	 */
+	if (is_write_backward(output_event) != is_write_backward(event))
+		goto out;
+
+	/*
 	 * If both events generate aux data, they must be on the same PMU
 	 */
 	if (has_aux(event) && has_aux(output_event) &&
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index b2c7c15..8e6c4b5 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -230,9 +230,23 @@  out:
 	return -ENOSPC;
 }
 
+int perf_output_begin_forward(struct perf_output_handle *handle,
+			     struct perf_event *event, unsigned int size)
+{
+	return __perf_output_begin(handle, event, size, false);
+}
+
+int perf_output_begin_backward(struct perf_output_handle *handle,
+			       struct perf_event *event, unsigned int size)
+{
+	return __perf_output_begin(handle, event, size, true);
+}
+
 int perf_output_begin(struct perf_output_handle *handle,
 		      struct perf_event *event, unsigned int size)
 {
+	if (unlikely(is_write_backward(event)))
+		return __perf_output_begin(handle, event, size, true);
 	return __perf_output_begin(handle, event, size, false);
 }