
[v5,2/3] net: add kcov handle to skb extensions

Message ID 20201029173620.2121359-3-aleksandrnogikh@gmail.com
State New
Series [v5,1/3] kernel: make kcov_common_handle consider the current context

Commit Message

Aleksandr Nogikh Oct. 29, 2020, 5:36 p.m. UTC
From: Aleksandr Nogikh <nogikh@google.com>

Remote KCOV coverage collection enables coverage-guided fuzzing of the
code that is not reachable during normal system call execution. It is
especially helpful for fuzzing networking subsystems, where it is
common to perform packet handling in separate work queues even for the
packets that originated directly from the user space.

Enable coverage-guided frame injection by adding kcov remote handle to
skb extensions. Default initialization in __alloc_skb and
__build_skb_around ensures that no socket buffer that was generated
during a system call will be missed.

Code that is of interest and that performs packet processing should be
annotated with kcov_remote_start()/kcov_remote_stop().
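
For illustration only (not part of this patch - the work item and the
example_priv struct below are made up), an annotated processing site
could look roughly like this, combining the skb_get_kcov_handle()
helper added here with kcov_remote_start_common():

static void example_rx_work(struct work_struct *work)
{
	struct example_priv *priv = container_of(work, struct example_priv,
						 rx_work);
	struct sk_buff *skb;

	while ((skb = skb_dequeue(&priv->rx_queue)) != NULL) {
		/* resume coverage collection for the task that
		 * originally injected this skb
		 */
		kcov_remote_start_common(skb_get_kcov_handle(skb));
		/* ... actual frame processing ... */
		kcov_remote_stop();
		consume_skb(skb);
	}
}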

An alternative approach is to determine kcov_handle solely on the
basis of the device/interface that received the specific socket
buffer. However, in this case it would be impossible to distinguish
between packets that originated during normal background network
processes or were intentionally injected from the user space.

Signed-off-by: Aleksandr Nogikh <nogikh@google.com>
Acked-by: Willem de Bruijn <willemb@google.com>
---
v3 -> v4:
* CONFIG_SKB_EXTENSIONS is now automatically selected by CONFIG_KCOV.
* Elaborated on a minor optimization in skb_set_kcov_handle().
v2 -> v3:
* Reimplemented this change. Now kcov handle is added to skb
extensions instead of sk_buff.
v1 -> v2:
* Updated the commit message.
---
 include/linux/skbuff.h | 36 ++++++++++++++++++++++++++++++++++++
 lib/Kconfig.debug      |  1 +
 net/core/skbuff.c      | 11 +++++++++++
 3 files changed, 48 insertions(+)

Comments

Ido Schimmel Nov. 21, 2020, 4:09 p.m. UTC | #1
+ Florian

On Thu, Oct 29, 2020 at 05:36:19PM +0000, Aleksandr Nogikh wrote:
> From: Aleksandr Nogikh <nogikh@google.com>
> 
> Remote KCOV coverage collection enables coverage-guided fuzzing of the
> code that is not reachable during normal system call execution. It is
> especially helpful for fuzzing networking subsystems, where it is
> common to perform packet handling in separate work queues even for the
> packets that originated directly from the user space.
> 
> Enable coverage-guided frame injection by adding kcov remote handle to
> skb extensions. Default initialization in __alloc_skb and
> __build_skb_around ensures that no socket buffer that was generated
> during a system call will be missed.
> 
> Code that is of interest and that performs packet processing should be
> annotated with kcov_remote_start()/kcov_remote_stop().
> 
> An alternative approach is to determine kcov_handle solely on the
> basis of the device/interface that received the specific socket
> buffer. However, in this case it would be impossible to distinguish
> between packets that originated during normal background network
> processes or were intentionally injected from the user space.
> 
> Signed-off-by: Aleksandr Nogikh <nogikh@google.com>
> Acked-by: Willem de Bruijn <willemb@google.com>

[...]

> @@ -249,6 +249,9 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
>  
>  		fclones->skb2.fclone = SKB_FCLONE_CLONE;
>  	}
> +
> +	skb_set_kcov_handle(skb, kcov_common_handle());

Hi,

This causes skb extensions to be allocated for the allocated skb, but
there are instances that blindly overwrite 'skb->extensions' by invoking
skb_copy_header() after __alloc_skb(). For example, skb_copy(),
__pskb_copy_fclone() and skb_copy_expand(). This results in the skb
extensions being leaked [1].

One possible solution is to try to patch all these instances with
skb_ext_put() before skb_copy_header().
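
For illustration (untested), in skb_copy() and the other callers that
would amount to something like the following, where 'n' is the newly
allocated skb:

	/* release the extensions __alloc_skb() may have attached, since
	 * skb_copy_header() blindly overwrites n->extensions
	 */
	skb_ext_put(n);
	skb_copy_header(n, skb);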

Another possible solution is to convert skb_copy_header() to use
skb_ext_copy() instead of __skb_ext_copy(). It will first drop the
reference on the skb extensions of the new skb, but it assumes that
'skb->active_extensions' is valid. This is not the case in the
skb_clone() path so we should probably zero this field in __skb_clone().
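
Again just a rough, untested sketch of that second option:

	/* in skb_copy_header(): drop the new skb's existing extensions
	 * before taking a reference on the old skb's
	 */
	skb_ext_copy(new, old);		/* instead of __skb_ext_copy(new, old) */

	/* in __skb_clone(), before the header copy, so that skb_ext_copy()
	 * does not put a garbage extensions pointer
	 */
	n->active_extensions = 0;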

Other suggestions?

Thanks

[1]
BUG: memory leak
unreferenced object 0xffff888027f9a490 (size 16):
  comm "syz-executor.0", pid 1155, jiffies 4295996826 (age 66.927s)
  hex dump (first 16 bytes):
    01 00 00 00 01 02 6b 6b 01 00 00 00 00 00 00 00  ......kk........
  backtrace:
    [<0000000005a5f2c4>] kmemleak_alloc_recursive include/linux/kmemleak.h:43 [inline]
    [<0000000005a5f2c4>] slab_post_alloc_hook mm/slab.h:528 [inline]
    [<0000000005a5f2c4>] slab_alloc_node mm/slub.c:2891 [inline]
    [<0000000005a5f2c4>] slab_alloc mm/slub.c:2899 [inline]
    [<0000000005a5f2c4>] kmem_cache_alloc+0x173/0x800 mm/slub.c:2904
    [<00000000c5e43ea9>] __skb_ext_alloc+0x22/0x90 net/core/skbuff.c:6173
    [<000000000de35e81>] skb_ext_add+0x230/0x4a0 net/core/skbuff.c:6268
    [<000000003b7efba4>] skb_set_kcov_handle include/linux/skbuff.h:4622 [inline]
    [<000000003b7efba4>] skb_set_kcov_handle include/linux/skbuff.h:4612 [inline]
    [<000000003b7efba4>] __alloc_skb+0x47f/0x6a0 net/core/skbuff.c:253
    [<000000007f789b23>] skb_copy+0x151/0x310 net/core/skbuff.c:1512
    [<000000001ce26864>] mlxsw_emad_transmit+0x4e/0x620 drivers/net/ethernet/mellanox/mlxsw/core.c:585
    [<000000005c732123>] mlxsw_emad_reg_access drivers/net/ethernet/mellanox/mlxsw/core.c:829 [inline]
    [<000000005c732123>] mlxsw_core_reg_access_emad+0xda8/0x1770 drivers/net/ethernet/mellanox/mlxsw/core.c:2408
    [<00000000c07840b3>] mlxsw_core_reg_access+0x101/0x7f0 drivers/net/ethernet/mellanox/mlxsw/core.c:2583
    [<000000007c47f30f>] mlxsw_reg_write+0x30/0x40 drivers/net/ethernet/mellanox/mlxsw/core.c:2603
    [<00000000675e3fc7>] mlxsw_sp_port_admin_status_set+0x8a7/0x980 drivers/net/ethernet/mellanox/mlxsw/spectrum.c:300
    [<00000000fefe35a4>] mlxsw_sp_port_stop+0x63/0x70 drivers/net/ethernet/mellanox/mlxsw/spectrum.c:537
    [<00000000c41390e8>] __dev_close_many+0x1c7/0x300 net/core/dev.c:1607
    [<00000000628c5987>] __dev_close net/core/dev.c:1619 [inline]
    [<00000000628c5987>] __dev_change_flags+0x2b9/0x710 net/core/dev.c:8421
    [<000000008cc810c6>] dev_change_flags+0x97/0x170 net/core/dev.c:8494
    [<0000000053274a78>] do_setlink+0xa5b/0x3b80 net/core/rtnetlink.c:2706
    [<00000000e4085785>] rtnl_group_changelink net/core/rtnetlink.c:3225 [inline]
    [<00000000e4085785>] __rtnl_newlink+0xe06/0x17d0 net/core/rtnetlink.c:3379
Florian Westphal Nov. 21, 2020, 4:52 p.m. UTC | #2
Ido Schimmel <idosch@idosch.org> wrote:
> On Thu, Oct 29, 2020 at 05:36:19PM +0000, Aleksandr Nogikh wrote:
> > From: Aleksandr Nogikh <nogikh@google.com>
> > 
> > Remote KCOV coverage collection enables coverage-guided fuzzing of the
> > code that is not reachable during normal system call execution. It is
> > especially helpful for fuzzing networking subsystems, where it is
> > common to perform packet handling in separate work queues even for the
> > packets that originated directly from the user space.
> > 
> > Enable coverage-guided frame injection by adding kcov remote handle to
> > skb extensions. Default initialization in __alloc_skb and
> > __build_skb_around ensures that no socket buffer that was generated
> > during a system call will be missed.
> > 
> > Code that is of interest and that performs packet processing should be
> > annotated with kcov_remote_start()/kcov_remote_stop().
> > 
> > An alternative approach is to determine kcov_handle solely on the
> > basis of the device/interface that received the specific socket
> > buffer. However, in this case it would be impossible to distinguish
> > between packets that originated during normal background network
> > processes or were intentionally injected from the user space.
> > 
> > Signed-off-by: Aleksandr Nogikh <nogikh@google.com>
> > Acked-by: Willem de Bruijn <willemb@google.com>
> 
> [...]
> 
> > @@ -249,6 +249,9 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
> >  
> >  		fclones->skb2.fclone = SKB_FCLONE_CLONE;
> >  	}
> > +
> > +	skb_set_kcov_handle(skb, kcov_common_handle());
> 
> Hi,
> 
> This causes skb extensions to be allocated for the allocated skb, but
> there are instances that blindly overwrite 'skb->extensions' by invoking
> skb_copy_header() after __alloc_skb(). For example, skb_copy(),
> __pskb_copy_fclone() and skb_copy_expand(). This results in the skb
> extensions being leaked [1].

[..]
> Other suggestions?

Aleksandr, why was this made into an skb extension in the first place?

AFAIU this feature is usually always disabled at build time.
For debug builds (test farm / debug kernel etc.) it's always needed.

If that's the case, this u64 should be an sk_buff member, not an
extension.
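
For reference, a plain-field variant (roughly what v2 of this series
did) could look something like the sketch below; since the field only
exists with CONFIG_KCOV, production builds would not grow sk_buff:

	struct sk_buff {
		...
	#ifdef CONFIG_KCOV
		u64			kcov_handle;
	#endif
	};

	static inline void skb_set_kcov_handle(struct sk_buff *skb,
					       const u64 kcov_handle)
	{
	#ifdef CONFIG_KCOV
		skb->kcov_handle = kcov_handle;
	#endif
	}

	static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
	{
	#ifdef CONFIG_KCOV
		return skb->kcov_handle;
	#else
		return 0;
	#endif
	}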
Johannes Berg Nov. 21, 2020, 5:39 p.m. UTC | #3
On Sat, 2020-11-21 at 17:52 +0100, Florian Westphal wrote:
> 
> Aleksandr, why was this made into an skb extension in the first place?
> 
> AFAIU this feature is usually always disabled at build time.
> For debug builds (test farm / debug kernel etc.) it's always needed.
> 
> If that's the case, this u64 should be an sk_buff member, not an
> extension.

Because it was requested :-)

https://lore.kernel.org/netdev/20201009161558.57792e1a@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com/

johannes
Jakub Kicinski Nov. 21, 2020, 6:06 p.m. UTC | #4
On Sat, 21 Nov 2020 17:52:27 +0100 Florian Westphal wrote:
> Ido Schimmel <idosch@idosch.org> wrote:
> > Other suggestions?  
> 
> Aleksandr, why was this made into an skb extension in the first place?
> 
> AFAIU this feature is usually always disabled at build time.
> For debug builds (test farm / debug kernel etc.) it's always needed.
> 
> If that's the case, this u64 should be an sk_buff member, not an
> extension.

Yeah, in hindsight I should have looked at how it's used. Not a great
fit for extensions. We can go back, but...

In general I'm not very happy at how this is going. First of all just
setting the handle in a couple of allocs seems to not be enough, skbs
get cloned, reused etc. There were also build problems caused by this
patch and Aleksandr & co where nowhere to be found. Now we find out
this causes leaks, how was that not caught by the syzbot it's supposed
to serve?!

So I'm leaning towards reverting the whole thing. You can attach
kretprobes and record the information you need in BPF maps.
Johannes Berg Nov. 21, 2020, 6:12 p.m. UTC | #5
On Sat, 2020-11-21 at 10:06 -0800, Jakub Kicinski wrote:
> On Sat, 21 Nov 2020 17:52:27 +0100 Florian Westphal wrote:
> > Ido Schimmel <idosch@idosch.org> wrote:
> > > Other suggestions?  
> > 
> > Aleksandr, why was this made into an skb extension in the first place?
> > 
> > AFAIU this feature is usually always disabled at build time.
> > For debug builds (test farm / debug kernel etc.) it's always needed.
> > 
> > If that's the case, this u64 should be an sk_buff member, not an
> > extension.
> 
> Yeah, in hindsight I should have looked at how it's used. Not a great
> fit for extensions. We can go back, but...
> 
> In general I'm not very happy at how this is going. First of all just
> setting the handle in a couple of allocs seems to not be enough, skbs
> get cloned, reused etc. There were also build problems caused by this
> patch and Aleksandr & co were nowhere to be found. Now we find out
> this causes leaks, how was that not caught by the syzbot it's supposed
> to serve?!

Heh.

> So I'm leaning towards reverting the whole thing. You can attach
> kretprobes and record the information you need in BPF maps.

I'm not going to object to reverting it (and perhaps redoing it better
later), but I will point out that kretprobe isn't going to work, you
eventually need kcov_remote_start() to be called in strategic points
before processing the skb after it bounced through the system.

IOW, it's not really about serving userland, it's about enabling (and
later disabling) coverage collection for the bits of code it cares
about, mostly because collecting it for _everything_ is going to be too
slow and will mess up the data since for coverage guided fuzzing you
really need the reported coverage data to be only about the injected
fuzz data...

johannes
Jakub Kicinski Nov. 21, 2020, 6:35 p.m. UTC | #6
On Sat, 21 Nov 2020 19:12:21 +0100 Johannes Berg wrote:
> > So I'm leaning towards reverting the whole thing. You can attach
> > kretprobes and record the information you need in BPF maps.  
> 
> I'm not going to object to reverting it (and perhaps redoing it better
> later), but I will point out that kretprobe isn't going to work, you
> eventually need kcov_remote_start() to be called in strategic points
> before processing the skb after it bounced through the system.
> 
> IOW, it's not really about serving userland, it's about enabling (and
> later disabling) coverage collection for the bits of code it cares
> about, mostly because collecting it for _everything_ is going to be too
> slow and will mess up the data since for coverage guided fuzzing you
> really need the reported coverage data to be only about the injected
> fuzz data...

All you need is make kcov_remote_start_common() be BPF-able, like 
the LSM hooks are now, right? And then BPF can return whatever handle 
it pleases.

Or if you don't like BPF or want to KCOV BPF itself in the future you
can roll your own mechanism. The point is - this should be relatively
easily doable out of line...
Johannes Berg Nov. 21, 2020, 7:30 p.m. UTC | #7
On Sat, 2020-11-21 at 10:35 -0800, Jakub Kicinski wrote:
> On Sat, 21 Nov 2020 19:12:21 +0100 Johannes Berg wrote:
> > > So I'm leaning towards reverting the whole thing. You can attach
> > > kretprobes and record the information you need in BPF maps.  
> > 
> > I'm not going to object to reverting it (and perhaps redoing it better
> > later), but I will point out that kretprobe isn't going to work, you
> > eventually need kcov_remote_start() to be called in strategic points
> > before processing the skb after it bounced through the system.
> > 
> > IOW, it's not really about serving userland, it's about enabling (and
> > later disabling) coverage collection for the bits of code it cares
> > about, mostly because collecting it for _everything_ is going to be too
> > slow and will mess up the data since for coverage guided fuzzing you
> > really need the reported coverage data to be only about the injected
> > fuzz data...
> 
> All you need is make kcov_remote_start_common() be BPF-able, like 
> the LSM hooks are now, right? And then BPF can return whatever handle 
> it pleases.

Not sure I understand. Are you saying something should call
"kcov_remote_start_common()" with, say, the SKB, and leave it to a mass
of bpf hooks to figure out where the SKB got cloned or copied or
whatnot, track that in a map, and then ... no, wait, I don't really see
what you mean, sorry.

IIUC, fundamentally, you have this:

 - at the beginning, a task is tagged with "please collect coverage
   data for this handle"
 - this task creates an SKB, etc, and all of the code that this task
   executes is captured and the coverage data is reported
 - However, the SKB traverses lots of things, gets copied, cloned, or
   whatnot, and eventually leaves the annotated task, say for further
   processing in softirq context or elsewhere.

Now since the whole point is to see what chaos this SKB created from
beginning (allocation) to end (free), since it was filled with fuzzed
data, you now have to figure out where to pick back up when the SKB is
processed further.

This is what the infrastructure was meant to solve. But note that the
SKB might be further cloned etc, so in order to track it you'd have to
(out-of-band) figure out all the possible places where it could
be reallocated, any time the skb pointer could change.

Then, when you know you've got interesting code on your hands, like in
mac80211 that was annotated in patch 3 here, you basically say

  "oohhh, this SKB was annotated before, let's continue capturing
   coverage data here"

(and turn it off again later with the corresponding kcov_remote_stop()).


So the only way I could _possibly_ see how to do this would be to

 * capture all possible places where the skb pointer can change
 * still call something like skb_get_kcov_handle() but let it call out
   to a BPF program to query a map or something to figure out if this
   SKB has a handle attached to it

> Or if you don't like BPF or want to KCOV BPF itself in the future you
> can roll your own mechanism. The point is - this should be relatively
> easily doable out of line...

Seems pretty complicated to me though ...

johannes
Jakub Kicinski Nov. 21, 2020, 8:55 p.m. UTC | #8
On Sat, 21 Nov 2020 20:30:44 +0100 Johannes Berg wrote:
> On Sat, 2020-11-21 at 10:35 -0800, Jakub Kicinski wrote:
> > On Sat, 21 Nov 2020 19:12:21 +0100 Johannes Berg wrote:  
> > > > So I'm leaning towards reverting the whole thing. You can attach
> > > > kretprobes and record the information you need in BPF maps.    
> > > 
> > > I'm not going to object to reverting it (and perhaps redoing it better
> > > later), but I will point out that kretprobe isn't going to work, you
> > > eventually need kcov_remote_start() to be called in strategic points
> > > before processing the skb after it bounced through the system.
> > > 
> > > IOW, it's not really about serving userland, it's about enabling (and
> > > later disabling) coverage collection for the bits of code it cares
> > > about, mostly because collecting it for _everything_ is going to be too
> > > slow and will mess up the data since for coverage guided fuzzing you
> > > really need the reported coverage data to be only about the injected
> > > fuzz data...  
> > 
> > All you need is make kcov_remote_start_common() be BPF-able, like 
> > the LSM hooks are now, right? And then BPF can return whatever handle 
> > it pleases.  
> 
> Not sure I understand. Are you saying something should call
> "kcov_remote_start_common()" with, say, the SKB, and leave it to a mass
> of bpf hooks to figure out where the SKB got cloned or copied or
> whatnot, track that in a map, and then ... no, wait, I don't really see
> what you mean, sorry.
> 
> IIUC, fundamentally, you have this:
> 
>  - at the beginning, a task is tagged with "please collect coverage
>    data for this handle"

Write the tag into task local storage, or map indexed by PID.

>  - this task creates an SKB, etc, and all of the code that this task
>    executes is captured and the coverage data is reported

kprobe the right places to record the skb -> handle mapping.
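
Purely illustrative (none of this exists in the tree; the map and
program names are made up), a kretprobe BPF program along those lines
could record the mapping roughly like so - how kcov_remote_start()
would later consult such a map is the open question discussed below:

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	/* kcov handle per fuzzing process, filled in from user space */
	struct {
		__uint(type, BPF_MAP_TYPE_HASH);
		__uint(max_entries, 1024);
		__type(key, __u32);	/* tgid */
		__type(value, __u64);	/* kcov handle */
	} pid_handles SEC(".maps");

	/* skb pointer -> kcov handle */
	struct {
		__uint(type, BPF_MAP_TYPE_HASH);
		__uint(max_entries, 8192);
		__type(key, __u64);	/* skb pointer */
		__type(value, __u64);	/* kcov handle */
	} skb_handles SEC(".maps");

	SEC("kretprobe/__alloc_skb")
	int BPF_KRETPROBE(record_skb_handle, struct sk_buff *skb)
	{
		__u32 tgid = bpf_get_current_pid_tgid() >> 32;
		__u64 key = (__u64)skb;
		__u64 *handle;

		handle = bpf_map_lookup_elem(&pid_handles, &tgid);
		if (skb && handle)
			bpf_map_update_elem(&skb_handles, &key, handle,
					    BPF_ANY);
		return 0;
	}

	char LICENSE[] SEC("license") = "GPL";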

>  - However, the SKB traverses lots of things, gets copied, cloned, or
>    whatnot, and eventually leaves the annotated task, say for further
>    processing in softirq context or elsewhere.

Which is fine.

> Now since the whole point is to see what chaos this SKB created from
> beginning (allocation) to end (free), since it was filled with fuzzed
> data, you now have to figure out where to pick back up when the SKB is
> processed further.
> 
> This is what the infrastructure was meant to solve. But note that the
> SKB might be further cloned etc, so in order to track it you'd have to
> (out-of-band) figure out all the possible places where it could
> be reallocated, any time the skb pointer could change.

Ack, you have to figure out all the places anyway, the question is
whether you put probes there or calls in the source code.

It shifts the maintenance burden, but BPF also brings flexibility.

> Then, when you know you've got interesting code on your hands, like in
> mac80211 that was annotated in patch 3 here, you basically say
> 
>   "oohhh, this SKB was annotated before, let's continue capturing
>    coverage data here"
> 
> (and turn it off again later with the corresponding kcov_remote_stop()).

Yup, the point is you can feed a raw skb pointer (and all other
possible context you may want) to a BPF prog in kcov_remote_start() 
and let BPF/BTF give you the handle it recorded in its maps.

> So the only way I could _possibly_ see how to do this would be to
> 
>  * capture all possible places where the skb pointer can change
>  * still call something like skb_get_kcov_handle() but let it call out
>    to a BPF program to query a map or something to figure out if this
>    SKB has a handle attached to it
> 
> > Or if you don't like BPF or want to KCOV BPF itself in the future you
> > can roll your own mechanism. The point is - this should be relatively
> > easily doable out of line...  
> 
> Seems pretty complicated to me though ...

It is more complicated. We can go back to an skb field if this work is
expected to yield results for mac80211. Would you mind sending a patch?
Johannes Berg Nov. 21, 2020, 8:58 p.m. UTC | #9
On Sat, 2020-11-21 at 12:55 -0800, Jakub Kicinski wrote:
> [snip]
> Ack, you have to figure out all the places anyway, the question is
> whether you put probes there or calls in the source code.
> 
> It shifts the maintenance burden, but BPF also brings flexibility.

Yeah, true. Though I'd also argue visibility - this stuff is pretty
simple now; if it turns into lots of lines of BPF code to track it that
is maintained "elsewhere", we won't see the bugs in it :-)

And it's kinda a thing that we as kernel developers _should_ be the ones
looking at since it's testing our code.

> Yup, the point is you can feed a raw skb pointer (and all other
> possible context you may want) to a BPF prog in kcov_remote_start() 
> and let BPF/BTF give you the handle it recorded in its maps.

Yeah, it's possible. Personally, I don't think it's worth the
complexity.

> It is more complicated. We can go back to an skb field if this work is
> expected to yield results for mac80211. Would you mind sending a patch?

I can do that, but I'm not going to be able to do it now/tonight (GMT+1
here), so probably only Monday/Tuesday or so, sorry.

johannes
Jakub Kicinski Nov. 21, 2020, 9:02 p.m. UTC | #10
On Sat, 21 Nov 2020 21:58:37 +0100 Johannes Berg wrote:
> On Sat, 2020-11-21 at 12:55 -0800, Jakub Kicinski wrote:
> > It is more complicated. We can go back to an skb field if this work is
> > expected to yield results for mac80211. Would you mind sending a patch?  
> 
> I can do that, but I'm not going to be able to do it now/tonight (GMT+1
> here), so probably only Monday/Tuesday or so, sorry.

Oh yea, no worries, took someone a month to notice this is broken, 
as long as it's fixed before the merge window it's fine ;)
Marco Elver Nov. 25, 2020, 4:30 p.m. UTC | #11
On Sat, 21 Nov 2020 at 22:02, Jakub Kicinski <kuba@kernel.org> wrote:
> On Sat, 21 Nov 2020 21:58:37 +0100 Johannes Berg wrote:
> > On Sat, 2020-11-21 at 12:55 -0800, Jakub Kicinski wrote:
> > > It is more complicated. We can go back to an skb field if this work is
> > > expected to yield results for mac80211. Would you mind sending a patch?
> >
> > I can do that, but I'm not going to be able to do it now/tonight (GMT+1
> > here), so probably only Monday/Tuesday or so, sorry.
>
> Oh yea, no worries, took someone a month to notice this is broken,
> as long as it's fixed before the merge window it's fine ;)

I took the liberty of taking patch 2/3 from v2 which was still storing
kcov_handle in sk_buff, and resending with the updates to patch 3/3:
  https://lkml.kernel.org/r/20201125162455.1690502-1-elver@google.com

Thanks,
-- Marco
Jakub Kicinski Dec. 1, 2020, 1:52 a.m. UTC | #12
On Sat, 21 Nov 2020 18:09:41 +0200 Ido Schimmel wrote:
> + Florian
> 
> On Thu, Oct 29, 2020 at 05:36:19PM +0000, Aleksandr Nogikh wrote:
> > From: Aleksandr Nogikh <nogikh@google.com>
> > 
> > Remote KCOV coverage collection enables coverage-guided fuzzing of the
> > code that is not reachable during normal system call execution. It is
> > especially helpful for fuzzing networking subsystems, where it is
> > common to perform packet handling in separate work queues even for the
> > packets that originated directly from the user space.
> > 
> > Enable coverage-guided frame injection by adding kcov remote handle to
> > skb extensions. Default initialization in __alloc_skb and
> > __build_skb_around ensures that no socket buffer that was generated
> > during a system call will be missed.
> > 
> > Code that is of interest and that performs packet processing should be
> > annotated with kcov_remote_start()/kcov_remote_stop().
> > 
> > An alternative approach is to determine kcov_handle solely on the
> > basis of the device/interface that received the specific socket
> > buffer. However, in this case it would be impossible to distinguish
> > between packets that originated during normal background network
> > processes or were intentionally injected from the user space.
> > 
> > Signed-off-by: Aleksandr Nogikh <nogikh@google.com>
> > Acked-by: Willem de Bruijn <willemb@google.com>  
> 
> [...]
> 
> > @@ -249,6 +249,9 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
> >  
> >  		fclones->skb2.fclone = SKB_FCLONE_CLONE;
> >  	}
> > +
> > +	skb_set_kcov_handle(skb, kcov_common_handle());  
> 
> Hi,
> 
> This causes skb extensions to be allocated for the allocated skb, but
> there are instances that blindly overwrite 'skb->extensions' by invoking
> skb_copy_header() after __alloc_skb(). For example, skb_copy(),
> __pskb_copy_fclone() and skb_copy_expand(). This results in the skb
> extensions being leaked [1].
> 
> One possible solution is to try to patch all these instances with
> skb_ext_put() before skb_copy_header().
> 
> Another possible solution is to convert skb_copy_header() to use
> skb_ext_copy() instead of __skb_ext_copy(). It will first drop the
> reference on the skb extensions of the new skb, but it assumes that
> 'skb->active_extensions' is valid. This is not the case in the
> skb_clone() path so we should probably zero this field in __skb_clone().
> 
> Other suggestions?

Looking at the patch from Marco to move back to a field now I'm
wondering how you run into this, Ido :D

AFAIU the extension is only added if the process has a KCOV handle.

Are you using KCOV?

> [1]
> BUG: memory leak
> unreferenced object 0xffff888027f9a490 (size 16):
>   comm "syz-executor.0", pid 1155, jiffies 4295996826 (age 66.927s)
>   hex dump (first 16 bytes):
>     01 00 00 00 01 02 6b 6b 01 00 00 00 00 00 00 00  ......kk........
>   backtrace:
>     [<0000000005a5f2c4>] kmemleak_alloc_recursive include/linux/kmemleak.h:43 [inline]
>     [<0000000005a5f2c4>] slab_post_alloc_hook mm/slab.h:528 [inline]
>     [<0000000005a5f2c4>] slab_alloc_node mm/slub.c:2891 [inline]
>     [<0000000005a5f2c4>] slab_alloc mm/slub.c:2899 [inline]
>     [<0000000005a5f2c4>] kmem_cache_alloc+0x173/0x800 mm/slub.c:2904
>     [<00000000c5e43ea9>] __skb_ext_alloc+0x22/0x90 net/core/skbuff.c:6173
>     [<000000000de35e81>] skb_ext_add+0x230/0x4a0 net/core/skbuff.c:6268
>     [<000000003b7efba4>] skb_set_kcov_handle include/linux/skbuff.h:4622 [inline]
>     [<000000003b7efba4>] skb_set_kcov_handle include/linux/skbuff.h:4612 [inline]
>     [<000000003b7efba4>] __alloc_skb+0x47f/0x6a0 net/core/skbuff.c:253
>     [<000000007f789b23>] skb_copy+0x151/0x310 net/core/skbuff.c:1512
>     [<000000001ce26864>] mlxsw_emad_transmit+0x4e/0x620 drivers/net/ethernet/mellanox/mlxsw/core.c:585
>     [<000000005c732123>] mlxsw_emad_reg_access drivers/net/ethernet/mellanox/mlxsw/core.c:829 [inline]
>     [<000000005c732123>] mlxsw_core_reg_access_emad+0xda8/0x1770 drivers/net/ethernet/mellanox/mlxsw/core.c:2408
>     [<00000000c07840b3>] mlxsw_core_reg_access+0x101/0x7f0 drivers/net/ethernet/mellanox/mlxsw/core.c:2583
>     [<000000007c47f30f>] mlxsw_reg_write+0x30/0x40 drivers/net/ethernet/mellanox/mlxsw/core.c:2603
>     [<00000000675e3fc7>] mlxsw_sp_port_admin_status_set+0x8a7/0x980 drivers/net/ethernet/mellanox/mlxsw/spectrum.c:300
>     [<00000000fefe35a4>] mlxsw_sp_port_stop+0x63/0x70 drivers/net/ethernet/mellanox/mlxsw/spectrum.c:537
>     [<00000000c41390e8>] __dev_close_many+0x1c7/0x300 net/core/dev.c:1607
>     [<00000000628c5987>] __dev_close net/core/dev.c:1619 [inline]
>     [<00000000628c5987>] __dev_change_flags+0x2b9/0x710 net/core/dev.c:8421
>     [<000000008cc810c6>] dev_change_flags+0x97/0x170 net/core/dev.c:8494
>     [<0000000053274a78>] do_setlink+0xa5b/0x3b80 net/core/rtnetlink.c:2706
>     [<00000000e4085785>] rtnl_group_changelink net/core/rtnetlink.c:3225 [inline]
>     [<00000000e4085785>] __rtnl_newlink+0xe06/0x17d0 net/core/rtnetlink.c:3379
Ido Schimmel Dec. 1, 2020, 7:35 a.m. UTC | #13
On Mon, Nov 30, 2020 at 05:52:48PM -0800, Jakub Kicinski wrote:
> On Sat, 21 Nov 2020 18:09:41 +0200 Ido Schimmel wrote:
> > + Florian
> > 
> > On Thu, Oct 29, 2020 at 05:36:19PM +0000, Aleksandr Nogikh wrote:
> > > From: Aleksandr Nogikh <nogikh@google.com>
> > > 
> > > Remote KCOV coverage collection enables coverage-guided fuzzing of the
> > > code that is not reachable during normal system call execution. It is
> > > especially helpful for fuzzing networking subsystems, where it is
> > > common to perform packet handling in separate work queues even for the
> > > packets that originated directly from the user space.
> > > 
> > > Enable coverage-guided frame injection by adding kcov remote handle to
> > > skb extensions. Default initialization in __alloc_skb and
> > > __build_skb_around ensures that no socket buffer that was generated
> > > during a system call will be missed.
> > > 
> > > Code that is of interest and that performs packet processing should be
> > > annotated with kcov_remote_start()/kcov_remote_stop().
> > > 
> > > An alternative approach is to determine kcov_handle solely on the
> > > basis of the device/interface that received the specific socket
> > > buffer. However, in this case it would be impossible to distinguish
> > > between packets that originated during normal background network
> > > processes or were intentionally injected from the user space.
> > > 
> > > Signed-off-by: Aleksandr Nogikh <nogikh@google.com>
> > > Acked-by: Willem de Bruijn <willemb@google.com>  
> > 
> > [...]
> > 
> > > @@ -249,6 +249,9 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
> > >  
> > >  		fclones->skb2.fclone = SKB_FCLONE_CLONE;
> > >  	}
> > > +
> > > +	skb_set_kcov_handle(skb, kcov_common_handle());  
> > 
> > Hi,
> > 
> > This causes skb extensions to be allocated for the allocated skb, but
> > there are instances that blindly overwrite 'skb->extensions' by invoking
> > skb_copy_header() after __alloc_skb(). For example, skb_copy(),
> > __pskb_copy_fclone() and skb_copy_expand(). This results in the skb
> > extensions being leaked [1].
> > 
> > One possible solution is to try to patch all these instances with
> > skb_ext_put() before skb_copy_header().
> > 
> > Another possible solution is to convert skb_copy_header() to use
> > skb_ext_copy() instead of __skb_ext_copy(). It will first drop the
> > reference on the skb extensions of the new skb, but it assumes that
> > 'skb->active_extensions' is valid. This is not the case in the
> > skb_clone() path so we should probably zero this field in __skb_clone().
> > 
> > Other suggestions?
> 
> Looking at the patch from Marco to move back to a field now I'm
> wondering how you run into this, Ido :D
> 
> AFAIU the extension is only added if the process has a KCOV handle.
> 
> Are you using KCOV?

Hi Jakub,

Yes. We have an internal syzkaller instance where this is enabled. See
"syz-executor.0" in the trace below.

> 
> > [1]
> > BUG: memory leak
> > unreferenced object 0xffff888027f9a490 (size 16):
> >   comm "syz-executor.0", pid 1155, jiffies 4295996826 (age 66.927s)
> >   hex dump (first 16 bytes):
> >     01 00 00 00 01 02 6b 6b 01 00 00 00 00 00 00 00  ......kk........
> >   backtrace:
> >     [<0000000005a5f2c4>] kmemleak_alloc_recursive include/linux/kmemleak.h:43 [inline]
> >     [<0000000005a5f2c4>] slab_post_alloc_hook mm/slab.h:528 [inline]
> >     [<0000000005a5f2c4>] slab_alloc_node mm/slub.c:2891 [inline]
> >     [<0000000005a5f2c4>] slab_alloc mm/slub.c:2899 [inline]
> >     [<0000000005a5f2c4>] kmem_cache_alloc+0x173/0x800 mm/slub.c:2904
> >     [<00000000c5e43ea9>] __skb_ext_alloc+0x22/0x90 net/core/skbuff.c:6173
> >     [<000000000de35e81>] skb_ext_add+0x230/0x4a0 net/core/skbuff.c:6268
> >     [<000000003b7efba4>] skb_set_kcov_handle include/linux/skbuff.h:4622 [inline]
> >     [<000000003b7efba4>] skb_set_kcov_handle include/linux/skbuff.h:4612 [inline]
> >     [<000000003b7efba4>] __alloc_skb+0x47f/0x6a0 net/core/skbuff.c:253
> >     [<000000007f789b23>] skb_copy+0x151/0x310 net/core/skbuff.c:1512
> >     [<000000001ce26864>] mlxsw_emad_transmit+0x4e/0x620 drivers/net/ethernet/mellanox/mlxsw/core.c:585
> >     [<000000005c732123>] mlxsw_emad_reg_access drivers/net/ethernet/mellanox/mlxsw/core.c:829 [inline]
> >     [<000000005c732123>] mlxsw_core_reg_access_emad+0xda8/0x1770 drivers/net/ethernet/mellanox/mlxsw/core.c:2408
> >     [<00000000c07840b3>] mlxsw_core_reg_access+0x101/0x7f0 drivers/net/ethernet/mellanox/mlxsw/core.c:2583
> >     [<000000007c47f30f>] mlxsw_reg_write+0x30/0x40 drivers/net/ethernet/mellanox/mlxsw/core.c:2603
> >     [<00000000675e3fc7>] mlxsw_sp_port_admin_status_set+0x8a7/0x980 drivers/net/ethernet/mellanox/mlxsw/spectrum.c:300
> >     [<00000000fefe35a4>] mlxsw_sp_port_stop+0x63/0x70 drivers/net/ethernet/mellanox/mlxsw/spectrum.c:537
> >     [<00000000c41390e8>] __dev_close_many+0x1c7/0x300 net/core/dev.c:1607
> >     [<00000000628c5987>] __dev_close net/core/dev.c:1619 [inline]
> >     [<00000000628c5987>] __dev_change_flags+0x2b9/0x710 net/core/dev.c:8421
> >     [<000000008cc810c6>] dev_change_flags+0x97/0x170 net/core/dev.c:8494
> >     [<0000000053274a78>] do_setlink+0xa5b/0x3b80 net/core/rtnetlink.c:2706
> >     [<00000000e4085785>] rtnl_group_changelink net/core/rtnetlink.c:3225 [inline]
> >     [<00000000e4085785>] __rtnl_newlink+0xe06/0x17d0 net/core/rtnetlink.c:3379
>
Jakub Kicinski Dec. 1, 2020, 4:43 p.m. UTC | #14
On Tue, 1 Dec 2020 09:35:29 +0200 Ido Schimmel wrote:
> > Looking at the patch from Marco to move back to a field now I'm
> > wondering how you run into this, Ido :D
> > 
> > AFAIU the extension is only added if the process has a KCOV handle.
> > 
> > Are you using KCOV?
> 
> Hi Jakub,
> 
> Yes. We have an internal syzkaller instance where this is enabled. See
> "syz-executor.0" in the trace below.

I see, thanks! The world makes sense again :)

Patch

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index a828cf99c521..d1cc1597d566 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -4150,6 +4150,9 @@  enum skb_ext_id {
 #endif
 #if IS_ENABLED(CONFIG_MPTCP)
 	SKB_EXT_MPTCP,
+#endif
+#if IS_ENABLED(CONFIG_KCOV)
+	SKB_EXT_KCOV_HANDLE,
 #endif
 	SKB_EXT_NUM, /* must be last */
 };
@@ -4605,5 +4608,38 @@  static inline void skb_reset_redirect(struct sk_buff *skb)
 #endif
 }
 
+#ifdef CONFIG_KCOV
+
+static inline void skb_set_kcov_handle(struct sk_buff *skb, const u64 kcov_handle)
+{
+	/* Do not allocate skb extensions only to set kcov_handle to zero
+	 * (as it is zero by default). However, if the extensions are
+	 * already allocated, update kcov_handle anyway since
+	 * skb_set_kcov_handle can be called to zero a previously set
+	 * value.
+	 */
+	if (skb_has_extensions(skb) || kcov_handle) {
+		u64 *kcov_handle_ptr = skb_ext_add(skb, SKB_EXT_KCOV_HANDLE);
+
+		if (kcov_handle_ptr)
+			*kcov_handle_ptr = kcov_handle;
+	}
+}
+
+static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
+{
+	u64 *kcov_handle = skb_ext_find(skb, SKB_EXT_KCOV_HANDLE);
+
+	return kcov_handle ? *kcov_handle : 0;
+}
+
+#else
+
+static inline void skb_set_kcov_handle(struct sk_buff *skb, const u64 kcov_handle) { }
+
+static inline u64 skb_get_kcov_handle(struct sk_buff *skb) { return 0; }
+
+#endif /* CONFIG_KCOV */
+
 #endif	/* __KERNEL__ */
 #endif	/* _LINUX_SKBUFF_H */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 537cf3c2937d..9df33cf81d2b 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1873,6 +1873,7 @@  config KCOV
 	depends on CC_HAS_SANCOV_TRACE_PC || GCC_PLUGINS
 	select DEBUG_FS
 	select GCC_PLUGIN_SANCOV if !CC_HAS_SANCOV_TRACE_PC
+	select SKB_EXTENSIONS
 	help
 	  KCOV exposes kernel code coverage information in a form suitable
 	  for coverage-guided fuzzing (randomized testing).
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 1ba8f0163744..c5e6c0b83a92 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -249,6 +249,9 @@  struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 
 		fclones->skb2.fclone = SKB_FCLONE_CLONE;
 	}
+
+	skb_set_kcov_handle(skb, kcov_common_handle());
+
 out:
 	return skb;
 nodata:
@@ -282,6 +285,8 @@  static struct sk_buff *__build_skb_around(struct sk_buff *skb,
 	memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
 	atomic_set(&shinfo->dataref, 1);
 
+	skb_set_kcov_handle(skb, kcov_common_handle());
+
 	return skb;
 }
 
@@ -4203,6 +4208,9 @@  static const u8 skb_ext_type_len[] = {
 #if IS_ENABLED(CONFIG_MPTCP)
 	[SKB_EXT_MPTCP] = SKB_EXT_CHUNKSIZEOF(struct mptcp_ext),
 #endif
+#if IS_ENABLED(CONFIG_KCOV)
+	[SKB_EXT_KCOV_HANDLE] = SKB_EXT_CHUNKSIZEOF(u64),
+#endif
 };
 
 static __always_inline unsigned int skb_ext_total_length(void)
@@ -4219,6 +4227,9 @@  static __always_inline unsigned int skb_ext_total_length(void)
 #endif
 #if IS_ENABLED(CONFIG_MPTCP)
 		skb_ext_type_len[SKB_EXT_MPTCP] +
+#endif
+#if IS_ENABLED(CONFIG_KCOV)
+		skb_ext_type_len[SKB_EXT_KCOV_HANDLE] +
 #endif
 		0;
 }