[2/3] kcov: prefault the kcov_area

Message ID 20180504135535.53744-3-mark.rutland@arm.com
State New
Series: kcov: fix unexpected faults

Commit Message

Mark Rutland May 4, 2018, 1:55 p.m. UTC
On many architectures the vmalloc area is lazily faulted in upon first
access. This is problematic for KCOV, as __sanitizer_cov_trace_pc
accesses the (vmalloc'd) kcov_area, and fault handling code may be
instrumented. If an access to kcov_area faults, this will result in
mutual recursion through the fault handling code and
__sanitizer_cov_trace_pc(), eventually leading to stack corruption
and/or overflow.

We can avoid this by faulting in the kcov_area before
__sanitizer_cov_trace_pc() is permitted to access it. Once it has been
faulted in, it will remain present in the process page tables, and will
not fault again.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
---
 kernel/kcov.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

-- 
2.11.0

Comments

Andrey Ryabinin May 4, 2018, 2:36 p.m. UTC | #1
On 05/04/2018 04:55 PM, Mark Rutland wrote:
>  
> +static void kcov_fault_in_area(struct kcov *kcov)
> +{
> +	unsigned long stride = PAGE_SIZE / sizeof(unsigned long);
> +	unsigned long *area = kcov->area;
> +	unsigned long offset;
> +
> +	for (offset = 0; offset < kcov->size; offset += stride) {
> +		READ_ONCE(area[offset]);
> +	}

Usually we don't use {} for single-statement blocks.

> +}
> +
Mark Rutland May 4, 2018, 2:38 p.m. UTC | #2
On Fri, May 04, 2018 at 05:36:49PM +0300, Andrey Ryabinin wrote:
> On 05/04/2018 04:55 PM, Mark Rutland wrote:
> 
> >  
> > +static void kcov_fault_in_area(struct kcov *kcov)
> > +{
> > +	unsigned long stride = PAGE_SIZE / sizeof(unsigned long);
> > +	unsigned long *area = kcov->area;
> > +	unsigned long offset;
> > +
> > +	for (offset = 0; offset < kcov->size; offset += stride) {
> > +		READ_ONCE(area[offset]);
> > +	}
> 
> Usually we don't use {} for single-statement blocks.
> 
> > +}
> > +

I'll remove the braces.

Can I consider that an ack, otherwise? :)

Thanks,
Mark.
Andrey Ryabinin May 4, 2018, 2:42 p.m. UTC | #3
On 05/04/2018 05:38 PM, Mark Rutland wrote:
> On Fri, May 04, 2018 at 05:36:49PM +0300, Andrey Ryabinin wrote:

>>
>> On 05/04/2018 04:55 PM, Mark Rutland wrote:
>>
>>>  
>>> +static void kcov_fault_in_area(struct kcov *kcov)
>>> +{
>>> +	unsigned long stride = PAGE_SIZE / sizeof(unsigned long);
>>> +	unsigned long *area = kcov->area;
>>> +	unsigned long offset;
>>> +
>>> +	for (offset = 0; offset < kcov->size; offset += stride) {
>>> +		READ_ONCE(area[offset]);
>>> +	}
>>
>> Usually we don't use {} for single-statement blocks.
>>
>>> +}
>>> +
> 
> I'll remove the braces.
> 
> Can I consider that an ack, otherwise? :)

Yes, ack for all 3 patches.
Andrew Morton May 8, 2018, 10:51 p.m. UTC | #4
On Fri,  4 May 2018 14:55:34 +0100 Mark Rutland <mark.rutland@arm.com> wrote:

> On many architectures the vmalloc area is lazily faulted in upon first
> access. This is problematic for KCOV, as __sanitizer_cov_trace_pc
> accesses the (vmalloc'd) kcov_area, and fault handling code may be
> instrumented. If an access to kcov_area faults, this will result in
> mutual recursion through the fault handling code and
> __sanitizer_cov_trace_pc(), eventually leading to stack corruption
> and/or overflow.
> 
> We can avoid this by faulting in the kcov_area before
> __sanitizer_cov_trace_pc() is permitted to access it. Once it has been
> faulted in, it will remain present in the process page tables, and will
> not fault again.
> 
> ...
>
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -324,6 +324,17 @@ static int kcov_close(struct inode *inode, struct file *filep)
>  	return 0;
>  }
>  
> +static void kcov_fault_in_area(struct kcov *kcov)

It would be nice to have a comment here explaining why the function
exists.

umm, this?

--- a/kernel/kcov.c~kcov-prefault-the-kcov_area-fix-fix
+++ a/kernel/kcov.c
@@ -324,6 +324,10 @@ static int kcov_close(struct inode *inod
 	return 0;
 }
 
+/*
+ * fault in a lazily-faulted vmalloc area, to avoid recursion issues if the
+ * vmalloc fault handler itself is instrumented.
+ */
 static void kcov_fault_in_area(struct kcov *kcov)
 {
 	unsigned long stride = PAGE_SIZE / sizeof(unsigned long);

> +{
> +	unsigned long stride = PAGE_SIZE / sizeof(unsigned long);
> +	unsigned long *area = kcov->area;
> +	unsigned long offset;
> +
> +	for (offset = 0; offset < kcov->size; offset += stride) {
> +		READ_ONCE(area[offset]);
> +	}
> +}
Mark Rutland May 9, 2018, 9:41 a.m. UTC | #5
On Tue, May 08, 2018 at 03:51:58PM -0700, Andrew Morton wrote:
> On Fri,  4 May 2018 14:55:34 +0100 Mark Rutland <mark.rutland@arm.com> wrote:
> 
> > On many architectures the vmalloc area is lazily faulted in upon first
> > access. This is problematic for KCOV, as __sanitizer_cov_trace_pc
> > accesses the (vmalloc'd) kcov_area, and fault handling code may be
> > instrumented. If an access to kcov_area faults, this will result in
> > mutual recursion through the fault handling code and
> > __sanitizer_cov_trace_pc(), eventually leading to stack corruption
> > and/or overflow.
> > 
> > We can avoid this by faulting in the kcov_area before
> > __sanitizer_cov_trace_pc() is permitted to access it. Once it has been
> > faulted in, it will remain present in the process page tables, and will
> > not fault again.
> > 
> > ...
> >
> > --- a/kernel/kcov.c
> > +++ b/kernel/kcov.c
> > @@ -324,6 +324,17 @@ static int kcov_close(struct inode *inode, struct file *filep)
> >  	return 0;
> >  }
> >  
> > +static void kcov_fault_in_area(struct kcov *kcov)
> 
> It would be nice to have a comment here explaining why the function
> exists.
> 
> umm, this?
> 
> --- a/kernel/kcov.c~kcov-prefault-the-kcov_area-fix-fix
> +++ a/kernel/kcov.c
> @@ -324,6 +324,10 @@ static int kcov_close(struct inode *inod
>  	return 0;
>  }
>  
> +/*
> + * fault in a lazily-faulted vmalloc area, to avoid recursion issues if the
> + * vmalloc fault handler itself is instrumented.
> + */

Sounds good to me. Perhaps we want a little more detail, e.g.

/*
 * Fault in a lazily-faulted vmalloc area before it can be used by
 * __sanitizer_cov_trace_pc(), to avoid recursion issues if any code on
 * the vmalloc fault handling path is instrumented.
 */

>  static void kcov_fault_in_area(struct kcov *kcov)


I also think it might make sense to rename this to kcov_prefault_area(),
so that this doesn't sound like a fault handler, but that's not a big
deal.

Thanks for handling this!

Mark.
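Putting the thread's feedback together (Andrew's explanatory comment with Mark's extra detail, and Andrey's brace nit), the helper would read roughly as follows. This is a sketch assembled from the discussion above, not necessarily the exact code that landed upstream:

```c
/*
 * Fault in a lazily-faulted vmalloc area before it can be used by
 * __sanitizer_cov_trace_pc(), to avoid recursion issues if any code on
 * the vmalloc fault handling path is instrumented.
 */
static void kcov_fault_in_area(struct kcov *kcov)
{
	unsigned long stride = PAGE_SIZE / sizeof(unsigned long);
	unsigned long *area = kcov->area;
	unsigned long offset;

	for (offset = 0; offset < kcov->size; offset += stride)
		READ_ONCE(area[offset]);
}
```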

Patch

diff --git a/kernel/kcov.c b/kernel/kcov.c
index 5be9a60a959f..3b82f8e258da 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -324,6 +324,17 @@ static int kcov_close(struct inode *inode, struct file *filep)
 	return 0;
 }
 
+static void kcov_fault_in_area(struct kcov *kcov)
+{
+	unsigned long stride = PAGE_SIZE / sizeof(unsigned long);
+	unsigned long *area = kcov->area;
+	unsigned long offset;
+
+	for (offset = 0; offset < kcov->size; offset += stride) {
+		READ_ONCE(area[offset]);
+	}
+}
+
 static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
 			     unsigned long arg)
 {
@@ -372,6 +383,7 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
 #endif
 		else
 			return -EINVAL;
+		kcov_fault_in_area(kcov);
 		/* Cache in task struct for performance. */
 		t->kcov_size = kcov->size;
 		t->kcov_area = kcov->area;