
[1/1] net: cdc_ncm: Allow for dwNtbOutMaxSize to be unset or zero

Message ID 20211202143437.1411410-1-lee.jones@linaro.org
State New

Commit Message

Lee Jones Dec. 2, 2021, 2:34 p.m. UTC
Currently, due to the sequential use of the min_t() and clamp_t()
macros in cdc_ncm_check_tx_max(), if dwNtbOutMaxSize is not set, the
logic sets tx_max to 0.  This is then used to allocate the data area
of the SKB requested later in cdc_ncm_fill_tx_frame().

This does not cause an issue presently because when memory is
allocated during initialisation phase of SKB creation, more memory
(512b) is allocated than is required for the SKB headers alone (320b),
leaving some space (512b - 320b = 192b) for CDC data (172b).

However, if more elements (for example 3 x u64 = [24b]) were added to
one of the SKB header structs, say 'struct skb_shared_info',
increasing its original size (320b [320b aligned]) to something larger
(344b [384b aligned]), then suddenly the CDC data (172b) no longer
fits in the spare SKB data area (512b - 384b = 128b).

Consequently, the SKB bounds check fails and the kernel panics:

  skbuff: skb_over_panic: text:ffffffff830a5b5f len:184 put:172   \
     head:ffff888119227c00 data:ffff888119227c00 tail:0xb8 end:0x80 dev:<NULL>

  ------------[ cut here ]------------
  kernel BUG at net/core/skbuff.c:110!
  RIP: 0010:skb_panic+0x14f/0x160 net/core/skbuff.c:106
  <snip>
  Call Trace:
   <IRQ>
   skb_over_panic+0x2c/0x30 net/core/skbuff.c:115
   skb_put+0x205/0x210 net/core/skbuff.c:1877
   skb_put_zero include/linux/skbuff.h:2270 [inline]
   cdc_ncm_ndp16 drivers/net/usb/cdc_ncm.c:1116 [inline]
   cdc_ncm_fill_tx_frame+0x127f/0x3d50 drivers/net/usb/cdc_ncm.c:1293
   cdc_ncm_tx_fixup+0x98/0xf0 drivers/net/usb/cdc_ncm.c:1514

By overriding the max value with the default CDC_NCM_NTB_MAX_SIZE_TX
when not offered through the system provided params, we ensure enough
data space is allocated to handle the CDC data, meaning no crash will
occur.

Cc: stable@vger.kernel.org
Cc: Oliver Neukum <oliver@neukum.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: linux-usb@vger.kernel.org
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Fixes: 289507d3364f9 ("net: cdc_ncm: use sysfs for rx/tx aggregation tuning")
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 drivers/net/usb/cdc_ncm.c | 2 ++
 1 file changed, 2 insertions(+)

Comments

Jakub Kicinski Dec. 3, 2021, 1:51 a.m. UTC | #1
On Thu,  2 Dec 2021 14:34:37 +0000 Lee Jones wrote:
> Currently, due to the sequential use of min_t() and clamp_t() macros,
> in cdc_ncm_check_tx_max(), if dwNtbOutMaxSize is not set, the logic
> sets tx_max to 0.  This is then used to allocate the data area of the
> SKB requested later in cdc_ncm_fill_tx_frame().
> 
> This does not cause an issue presently because when memory is
> allocated during initialisation phase of SKB creation, more memory
> (512b) is allocated than is required for the SKB headers alone (320b),
> leaving some space (512b - 320b = 192b) for CDC data (172b).
> 
> However, if more elements (for example 3 x u64 = [24b]) were added to
> one of the SKB header structs, say 'struct skb_shared_info',
> increasing its original size (320b [320b aligned]) to something larger
> (344b [384b aligned]), then suddenly the CDC data (172b) no longer
> fits in the spare SKB data area (512b - 384b = 128b).
> 
> Consequently the SKB bounds checking semantics fails and panics:
> 
>   skbuff: skb_over_panic: text:ffffffff830a5b5f len:184 put:172   \
>      head:ffff888119227c00 data:ffff888119227c00 tail:0xb8 end:0x80 dev:<NULL>
> 
>   ------------[ cut here ]------------
>   kernel BUG at net/core/skbuff.c:110!
>   RIP: 0010:skb_panic+0x14f/0x160 net/core/skbuff.c:106
>   <snip>
>   Call Trace:
>    <IRQ>
>    skb_over_panic+0x2c/0x30 net/core/skbuff.c:115
>    skb_put+0x205/0x210 net/core/skbuff.c:1877
>    skb_put_zero include/linux/skbuff.h:2270 [inline]
>    cdc_ncm_ndp16 drivers/net/usb/cdc_ncm.c:1116 [inline]
>    cdc_ncm_fill_tx_frame+0x127f/0x3d50 drivers/net/usb/cdc_ncm.c:1293
>    cdc_ncm_tx_fixup+0x98/0xf0 drivers/net/usb/cdc_ncm.c:1514
> 
> By overriding the max value with the default CDC_NCM_NTB_MAX_SIZE_TX
> when not offered through the system provided params, we ensure enough
> data space is allocated to handle the CDC data, meaning no crash will
> occur.
> 
> Cc: stable@vger.kernel.org
> Cc: Oliver Neukum <oliver@neukum.org>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Jakub Kicinski <kuba@kernel.org>
> Cc: linux-usb@vger.kernel.org
> Cc: netdev@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Fixes: 289507d3364f9 ("net: cdc_ncm: use sysfs for rx/tx aggregation tuning")
> Signed-off-by: Lee Jones <lee.jones@linaro.org>

CC: bjorn@mork.no

Please make sure you CC the authors of all blamed commits as they are
likely to have the most context.

> diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
> index 24753a4da7e60..e303b522efb50 100644
> --- a/drivers/net/usb/cdc_ncm.c
> +++ b/drivers/net/usb/cdc_ncm.c
> @@ -181,6 +181,8 @@ static u32 cdc_ncm_check_tx_max(struct usbnet *dev, u32 new_tx)
>  		min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32);
>  
>  	max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
> +	if (max == 0)
> +		max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */
>  
>  	/* some devices set dwNtbOutMaxSize too low for the above default */
>  	min = min(min, max);
Lee Jones Dec. 3, 2021, 11:25 a.m. UTC | #2
On Fri, 03 Dec 2021, Bjørn Mork wrote:

> Hello Lee!
> 
> Jakub Kicinski <kuba@kernel.org> writes:
> 
> > On Thu,  2 Dec 2021 14:34:37 +0000 Lee Jones wrote:
> >> Currently, due to the sequential use of min_t() and clamp_t() macros,
> >> in cdc_ncm_check_tx_max(), if dwNtbOutMaxSize is not set, the logic
> >> sets tx_max to 0.  This is then used to allocate the data area of the
> >> SKB requested later in cdc_ncm_fill_tx_frame().
> >> 
> >> This does not cause an issue presently because when memory is
> >> allocated during initialisation phase of SKB creation, more memory
> >> (512b) is allocated than is required for the SKB headers alone (320b),
> >> leaving some space (512b - 320b = 192b) for CDC data (172b).
> >> 
> >> However, if more elements (for example 3 x u64 = [24b]) were added to
> >> one of the SKB header structs, say 'struct skb_shared_info',
> >> increasing its original size (320b [320b aligned]) to something larger
> >> (344b [384b aligned]), then suddenly the CDC data (172b) no longer
> >> fits in the spare SKB data area (512b - 384b = 128b).
> >> 
> >> Consequently the SKB bounds checking semantics fails and panics:
> >> 
> >>   skbuff: skb_over_panic: text:ffffffff830a5b5f len:184 put:172   \
> >>      head:ffff888119227c00 data:ffff888119227c00 tail:0xb8 end:0x80 dev:<NULL>
> >> 
> >>   ------------[ cut here ]------------
> >>   kernel BUG at net/core/skbuff.c:110!
> >>   RIP: 0010:skb_panic+0x14f/0x160 net/core/skbuff.c:106
> >>   <snip>
> >>   Call Trace:
> >>    <IRQ>
> >>    skb_over_panic+0x2c/0x30 net/core/skbuff.c:115
> >>    skb_put+0x205/0x210 net/core/skbuff.c:1877
> >>    skb_put_zero include/linux/skbuff.h:2270 [inline]
> >>    cdc_ncm_ndp16 drivers/net/usb/cdc_ncm.c:1116 [inline]
> >>    cdc_ncm_fill_tx_frame+0x127f/0x3d50 drivers/net/usb/cdc_ncm.c:1293
> >>    cdc_ncm_tx_fixup+0x98/0xf0 drivers/net/usb/cdc_ncm.c:1514
> >> 
> >> By overriding the max value with the default CDC_NCM_NTB_MAX_SIZE_TX
> >> when not offered through the system provided params, we ensure enough
> >> data space is allocated to handle the CDC data, meaning no crash will
> >> occur.
> 
> Just out of curiouslity: Is this a real device, or was this the result
> of fuzzing around?

This is the result of "fuzzing around" on qemu. :)

https://syzkaller.appspot.com/bug?extid=2c9b6751e87ab8706cb3

> Not that it matters - it's obviously a bug to fix in any case.  Good catch!
> 
> (We probably have many more of the same, assuming the device presents
> semi-sane values in the NCM parameter struct)
> 
> >> diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
> >> index 24753a4da7e60..e303b522efb50 100644
> >> --- a/drivers/net/usb/cdc_ncm.c
> >> +++ b/drivers/net/usb/cdc_ncm.c
> >> @@ -181,6 +181,8 @@ static u32 cdc_ncm_check_tx_max(struct usbnet *dev, u32 new_tx)
> >>  		min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32);
> >>  
> >>  	max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
> >> +	if (max == 0)
> >> +		max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */
> >>  
> >>  	/* some devices set dwNtbOutMaxSize too low for the above default */
> >>  	min = min(min, max);
> 
> It's been a while since I looked at this, so excuse me if I read it
> wrongly.  But I think we need to catch more illegal/impossible values
> than just zero here?  Any buffer size which cannot hold a single
> datagram is pointless.
> 
> Trying to figure out what I possible meant to do with that
> 
>  	min = min(min, max);
> 
> I don't think it makes any sense?  Does it?  The "min" value we've
> carefully calculated allow one max sized datagram and headers. I don't
> think we should ever continue with a smaller buffer than that

I was more confused by the comment you added to that code:

   /* some devices set dwNtbOutMaxSize too low for the above default */
   min = min(min, max);

... which looks as though it should solve the issue of an inadequate
dwNtbOutMaxSize, but it almost does the opposite.  I initially
changed this segment to use the max() macro instead, but the
subsequent clamp_t() macro simply chooses the 'max' (0) value over
the now sane 'min' one.

Which is why I chose 
> Or are there cases where this is valid?

I'm not an expert on the SKB code, but in my simple view of the world,
if you wish to use a buffer for any amount of data, you should
allocate space for it.

> So that really should haven been catching this bug with a
> 
>   max = max(min, max)

I tried this.  It didn't work either.

See the subsequent clamp_t() call a few lines down.

> or maybe more readable
> 
>   if (max < min)
>      max = min
> 
> What do you think?

So the data that is added to the SKB is ctx->max_ndp_size, which is
allocated in cdc_ncm_init().  The code that does it looks like:

   if (ctx->is_ndp16)
        ctx->max_ndp_size = sizeof(struct usb_cdc_ncm_ndp16) +
                            (ctx->tx_max_datagrams + 1) *
                            sizeof(struct usb_cdc_ncm_dpe16);
   else
        ctx->max_ndp_size = sizeof(struct usb_cdc_ncm_ndp32) +
                            (ctx->tx_max_datagrams + 1) *
                            sizeof(struct usb_cdc_ncm_dpe32);

So this should be the size of the allocation too, right?

Why would the platform ever need to over-ride this?  The platform
can't make the data area smaller since there won't be enough room.  It
could perhaps make it bigger, but the min_t() and clamp_t() macros
will end up choosing the above allocation anyway.

This leaves me feeling a little perplexed.

If there isn't a good reason for over-riding then I could simplify
cdc_ncm_check_tx_max() greatly.

What do *you* think? :)
Bjørn Mork Dec. 3, 2021, 12:57 p.m. UTC | #3
Lee Jones <lee.jones@linaro.org> writes:
> On Fri, 03 Dec 2021, Bjørn Mork wrote:
>
>> Just out of curiouslity: Is this a real device, or was this the result
>> of fuzzing around?
>
> This is the result of "fuzzing around" on qemu. :)
>
> https://syzkaller.appspot.com/bug?extid=2c9b6751e87ab8706cb3

OK.  Makes sense.  I'd be surprised if such a device worked on that
other OS.

>> Not that it matters - it's obviously a bug to fix in any case.  Good catch!
>> 
>> (We probably have many more of the same, assuming the device presents
>> semi-sane values in the NCM parameter struct)
>> 
>> >> diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
>> >> index 24753a4da7e60..e303b522efb50 100644
>> >> --- a/drivers/net/usb/cdc_ncm.c
>> >> +++ b/drivers/net/usb/cdc_ncm.c
>> >> @@ -181,6 +181,8 @@ static u32 cdc_ncm_check_tx_max(struct usbnet *dev, u32 new_tx)
>> >>  		min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32);
>> >>  
>> >>  	max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
>> >> +	if (max == 0)
>> >> +		max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */
>> >>  
>> >>  	/* some devices set dwNtbOutMaxSize too low for the above default */
>> >>  	min = min(min, max);
>> 
>> It's been a while since I looked at this, so excuse me if I read it
>> wrongly.  But I think we need to catch more illegal/impossible values
>> than just zero here?  Any buffer size which cannot hold a single
>> datagram is pointless.
>> 
>> Trying to figure out what I possible meant to do with that
>> 
>>  	min = min(min, max);
>> 
>> I don't think it makes any sense?  Does it?  The "min" value we've
>> carefully calculated allow one max sized datagram and headers. I don't
>> think we should ever continue with a smaller buffer than that
>
> I was more confused with the comment you added to that code:
>
>    /* some devices set dwNtbOutMaxSize too low for the above default */
>    min = min(min, max);
>
> ... which looks as though it should solve the issue of an inadequate
> dwNtbOutMaxSize, but it almost does the opposite.

That's what I read too.  I must admit that I cannot remember writing any
of this stuff.  But I trust git...

> I initially
> changed this segment to use the max() macro instead, but the
> subsequent clamp_t() macro simply chooses 'max' (0) value over the now
> sane 'min' one.

Yes, but what if we adjust max here instead of min?

> Which is why I chose 
>> Or are there cases where this is valid?
>
> I'm not an expert on the SKB code, but in my simple view of the world,
> if you wish to use a buffer for any amount of data, you should
> allocate space for it.
>
>> So that really should haven been catching this bug with a
>> 
>>   max = max(min, max)
>
> I tried this.  It didn't work either.
>
> See the subsequent clamp_t() call a few lines down.

This I don't understand.  If we have for example

 new_tx = 0
 max = 0
 min = 1514(=datagram) + 8(=ndp) + 2(=1+1) * 4(=dpe) + 12(=nth) = 1542

then

 max = max(min, max) = 1542
 val = clamp_t(u32, new_tx, min, max) = 1542

so we return 1542 and everything is fine.

>> or maybe more readable
>> 
>>   if (max < min)
>>      max = min
>> 
>> What do you think?
>
> So the data that is added to the SKB is ctx->max_ndp_size, which is
> allocated in cdc_ncm_init().  The code that does it looks like:
>
>    if (ctx->is_ndp16)
>         ctx->max_ndp_size = sizeof(struct usb_cdc_ncm_ndp16) +
>                             (ctx->tx_max_datagrams + 1) *
>                             sizeof(struct usb_cdc_ncm_dpe16);
>     else
>         ctx->max_ndp_size = sizeof(struct usb_cdc_ncm_ndp32) +
>                             (ctx->tx_max_datagrams + 1) *
>                             sizeof(struct usb_cdc_ncm_dpe32);
>
> So this should be the size of the allocation too, right?

This driver doesn't add data to the skb.  It allocates a new buffer and
copies one or more skbs into it.  I'm sure that could be improved too..

Without a complete rewrite we need to allocate new skbs large enough to hold

NTH          - frame header
NDP x 1      - index table, with minimum two entries (1 datagram + terminator)
datagram x 1 - ethernet frame

This gives the minimum "tx_max" value.

The device is supposed to tell us the maximum "tx_max" value in
dwNtbOutMaxSize.  In theory.  In practice we cannot trust the device, as
you point out.  We know aleady deal with too large values (which are
commonly seen in real products), but we also need to deal with too low
values.

I believe the "too low" is defined by the calculated minimum value, and
the comment indicates that this is what I tried to express but failed.


> Why would the platform ever need to over-ride this?  The platform
> can't make the data area smaller since there won't be enough room.  It
> could perhaps make it bigger, but the min_t() and clamp_t() macros
> will end up choosing the above allocation anyway.
>
> This leaves me feeling a little perplexed.
>
> If there isn't a good reason for over-riding then I could simplify
> cdc_ncm_check_tx_max() greatly.
>
> What do *you* think? :)

I also have the feeling that this could and should be simplified. This
discussion shows that refactoring is required.  git blame makes this all
too embarrassing ;-)



Bjørn
Lee Jones Dec. 3, 2021, 1:39 p.m. UTC | #4
On Fri, 03 Dec 2021, Bjørn Mork wrote:
> >> It's been a while since I looked at this, so excuse me if I read it
> >> wrongly.  But I think we need to catch more illegal/impossible values
> >> than just zero here?  Any buffer size which cannot hold a single
> >> datagram is pointless.
> >> 
> >> Trying to figure out what I possible meant to do with that
> >> 
> >>  	min = min(min, max);
> >> 
> >> I don't think it makes any sense?  Does it?  The "min" value we've
> >> carefully calculated allow one max sized datagram and headers. I don't
> >> think we should ever continue with a smaller buffer than that
> >
> > I was more confused with the comment you added to that code:
> >
> >    /* some devices set dwNtbOutMaxSize too low for the above default */
> >    min = min(min, max);
> >
> > ... which looks as though it should solve the issue of an inadequate
> > dwNtbOutMaxSize, but it almost does the opposite.
> 
> That's what I read too.  I must admit that I cannot remember writing any
> of this stuff.  But I trust git...

In Git we trust!

> > I initially
> > changed this segment to use the max() macro instead, but the
> > subsequent clamp_t() macro simply chooses 'max' (0) value over the now
> > sane 'min' one.
> 
> Yes, but what if we adjust max here instead of min?

That's what my patch does.

> > Which is why I chose 
> >> Or are there cases where this is valid?
> >
> > I'm not an expert on the SKB code, but in my simple view of the world,
> > if you wish to use a buffer for any amount of data, you should
> > allocate space for it.
> >
> >> So that really should haven been catching this bug with a
> >> 
> >>   max = max(min, max)
> >
> > I tried this.  It didn't work either.
> >
> > See the subsequent clamp_t() call a few lines down.
> 
> This I don't understand.  If we have for example
> 
>  new_tx = 0
>  max = 0
>  min = 1514(=datagram) + 8(=ndp) + 2(=1+1) * 4(=dpe) + 12(=nth) = 1542
> 
> then
> 
>  max = max(min, max) = 1542
>  val = clamp_t(u32, new_tx, min, max) = 1542
> 
> so we return 1542 and everything is fine.

I don't believe so.

#define clamp_t(type, val, lo, hi) \
              min_t(type, max_t(type, val, lo), hi)

So:
              min_t(u32, max_t(u32, 0, 1542), 0)

... which reduces to:
              min_t(u32, 1542, 0) = 0

So we return 0 and everything is not fine. :)

Perhaps we should use max_t() here instead of clamp?

> >> or maybe more readable
> >> 
> >>   if (max < min)
> >>      max = min
> >> 
> >> What do you think?
> >
> > So the data that is added to the SKB is ctx->max_ndp_size, which is
> > allocated in cdc_ncm_init().  The code that does it looks like:
> >
> >    if (ctx->is_ndp16)
> >         ctx->max_ndp_size = sizeof(struct usb_cdc_ncm_ndp16) +
> >                             (ctx->tx_max_datagrams + 1) *
> >                             sizeof(struct usb_cdc_ncm_dpe16);
> >     else
> >         ctx->max_ndp_size = sizeof(struct usb_cdc_ncm_ndp32) +
> >                             (ctx->tx_max_datagrams + 1) *
> >                             sizeof(struct usb_cdc_ncm_dpe32);
> >
> > So this should be the size of the allocation too, right?
> 
> This driver doesn't add data to the skb.  It allocates a new buffer and
> copies one or more skbs into it.  I'm sure that could be improved too..

"one or more skbs" == data :)

Either way, it's asking for more bits to be copied in than there is
space for.  It's amazing that this worked at all.  We only noticed it
when we increased the size of one of the SKB headers and some of the
accidentally allocated memory was eaten up.

> Without a complete rewrite we need to allocate new skbs large enough to hold
> 
> NTH          - frame header
> NDP x 1      - index table, with minimum two entries (1 datagram + terminator)
> datagram x 1 - ethernet frame
> 
> This gives the minimum "tx_max" value.
> 
> The device is supposed to tell us the maximum "tx_max" value in
> dwNtbOutMaxSize.  In theory.  In practice we cannot trust the device, as
> you point out.  We know aleady deal with too large values (which are
> commonly seen in real products), but we also need to deal with too low
> values.
> 
> I believe the "too low" is defined by the calculated minimum value, and
> the comment indicates that this what I tried to express but failed.

Right, that's how I read it too.

> > Why would the platform ever need to over-ride this?  The platform
> > can't make the data area smaller since there won't be enough room.  It
> > could perhaps make it bigger, but the min_t() and clamp_t() macros
> > will end up choosing the above allocation anyway.
> >
> > This leaves me feeling a little perplexed.
> >
> > If there isn't a good reason for over-riding then I could simplify
> > cdc_ncm_check_tx_max() greatly.
> >
> > What do *you* think? :)
> 
> I also have the feeling that this could and should be simplified. This
> discussion shows that refactoring is required.

I'm happy to help with the coding, if we agree on a solution.

> git blame makes this all too embarrassing ;-)

:D
Bjørn Mork Dec. 3, 2021, 2:36 p.m. UTC | #5
Lee Jones <lee.jones@linaro.org> writes:
> On Fri, 03 Dec 2021, Bjørn Mork wrote:

>> This I don't understand.  If we have for example
>> 
>>  new_tx = 0
>>  max = 0
>>  min = 1514(=datagram) + 8(=ndp) + 2(=1+1) * 4(=dpe) + 12(=nth) = 1542
>> 
>> then
>> 
>>  max = max(min, max) = 1542
>>  val = clamp_t(u32, new_tx, min, max) = 1542
>> 
>> so we return 1542 and everything is fine.
>
> I don't believe so.
>
> #define clamp_t(type, val, lo, hi) \
>               min_t(type, max_t(type, val, lo), hi)
>
> So:
>               min_t(u32, max_t(u32, 0, 1542), 0)


I don't think so.  If we have:

 new_tx = 0
 max = 0
 min = 1514(=datagram) + 8(=ndp) + 2(=1+1) * 4(=dpe) + 12(=nth) = 1542
 max = max(min, max) = 1542

Then we have

  min_t(u32, max_t(u32, 0, 1542), 1542)


If it wasn't clear - My proposal was to change this:

  - min = min(min, max);
  + max = max(min, max);

in the original code.


But looking further I don't think that's a good idea either.  I searched
through old email and found this commit:

commit a6fe67087d7cb916e41b4ad1b3a57c91150edb88
Author: Bjørn Mork <bjorn@mork.no>
Date:   Fri Nov 1 11:17:01 2013 +0100

    net: cdc_ncm: no not set tx_max higher than the device supports
    
    There are MBIM devices out there reporting
    
      dwNtbInMaxSize=2048 dwNtbOutMaxSize=2048
    
    and since the spec require a datagram max size of at least
    2048, this means that a full sized datagram will never fit.
    
    Still, sending larger NTBs than the device supports is not
    going to help.  We do not have any other options than either
     a) refusing to bindi, or
     b) respect the insanely low value.
    
    Alternative b will at least make these devices work, so go
    for it.
    
    Cc: Alexey Orishko <alexey.orishko@gmail.com>
    Signed-off-by: Bjørn Mork <bjorn@mork.no>
    Signed-off-by: David S. Miller <davem@davemloft.net>

diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
index 4531f38fc0e5..11c703337577 100644
--- a/drivers/net/usb/cdc_ncm.c
+++ b/drivers/net/usb/cdc_ncm.c
@@ -159,8 +159,7 @@ static u8 cdc_ncm_setup(struct usbnet *dev)
        }
 
        /* verify maximum size of transmitted NTB in bytes */
-       if ((ctx->tx_max < (CDC_NCM_MIN_HDR_SIZE + ctx->max_datagram_size)) ||
-           (ctx->tx_max > CDC_NCM_NTB_MAX_SIZE_TX)) {
+       if (ctx->tx_max > CDC_NCM_NTB_MAX_SIZE_TX) {
                dev_dbg(&dev->intf->dev, "Using default maximum transmit length=%d\n",
                        CDC_NCM_NTB_MAX_SIZE_TX);
                ctx->tx_max = CDC_NCM_NTB_MAX_SIZE_TX;





So there are real devices depending on a dwNtbOutMaxSize which is too
low.  Our calculated minimum for MBIM will not fit.

So let's go back your original test for zero.  It's better than
nothing.  I'll just ack that.


> Perhaps we should use max_t() here instead of clamp?

No.  That would allow userspace to set an unlimited buffer size.



Bjørn
Lee Jones Dec. 3, 2021, 2:46 p.m. UTC | #6
On Fri, 03 Dec 2021, Bjørn Mork wrote:

> Lee Jones <lee.jones@linaro.org> writes:
> > On Fri, 03 Dec 2021, Bjørn Mork wrote:
> 
> >> This I don't understand.  If we have for example
> >> 
> >>  new_tx = 0
> >>  max = 0
> >>  min = 1514(=datagram) + 8(=ndp) + 2(=1+1) * 4(=dpe) + 12(=nth) = 1542
> >> 
> >> then
> >> 
> >>  max = max(min, max) = 1542
> >>  val = clamp_t(u32, new_tx, min, max) = 1542
> >> 
> >> so we return 1542 and everything is fine.
> >
> > I don't believe so.
> >
> > #define clamp_t(type, val, lo, hi) \
> >               min_t(type, max_t(type, val, lo), hi)
> >
> > So:
> >               min_t(u32, max_t(u32, 0, 1542), 0)
> 
> 
> I don't think so.  If we have:
> 
>  new_tx = 0
>  max = 0
>  min = 1514(=datagram) + 8(=ndp) + 2(=1+1) * 4(=dpe) + 12(=nth) = 1542
>  max = max(min, max) = 1542
> 
> Then we have
> 
>   min_t(u32, max_t(u32, 0, 1542), 1542)
> 
> 
> If it wasn't clear - My proposal was to change this:
> 
>   - min = min(min, max);
>   + max = max(min, max);
> 
> in the original code.

Oh, I see.  Yes, I missed the reallocation of 'max'.

I thought we were using original values and just changing min() to max().

> But looking further I don't think that's a good idea either.  I searched
> through old email and found this commit:
> 
> commit a6fe67087d7cb916e41b4ad1b3a57c91150edb88
> Author: Bjørn Mork <bjorn@mork.no>
> Date:   Fri Nov 1 11:17:01 2013 +0100
> 
>     net: cdc_ncm: no not set tx_max higher than the device supports
>     
>     There are MBIM devices out there reporting
>     
>       dwNtbInMaxSize=2048 dwNtbOutMaxSize=2048
>     
>     and since the spec require a datagram max size of at least
>     2048, this means that a full sized datagram will never fit.
>     
>     Still, sending larger NTBs than the device supports is not
>     going to help.  We do not have any other options than either
>      a) refusing to bindi, or
>      b) respect the insanely low value.
>     
>     Alternative b will at least make these devices work, so go
>     for it.
>     
>     Cc: Alexey Orishko <alexey.orishko@gmail.com>
>     Signed-off-by: Bjørn Mork <bjorn@mork.no>
>     Signed-off-by: David S. Miller <davem@davemloft.net>
> 
> diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
> index 4531f38fc0e5..11c703337577 100644
> --- a/drivers/net/usb/cdc_ncm.c
> +++ b/drivers/net/usb/cdc_ncm.c
> @@ -159,8 +159,7 @@ static u8 cdc_ncm_setup(struct usbnet *dev)
>         }
>  
>         /* verify maximum size of transmitted NTB in bytes */
> -       if ((ctx->tx_max < (CDC_NCM_MIN_HDR_SIZE + ctx->max_datagram_size)) ||
> -           (ctx->tx_max > CDC_NCM_NTB_MAX_SIZE_TX)) {
> +       if (ctx->tx_max > CDC_NCM_NTB_MAX_SIZE_TX) {
>                 dev_dbg(&dev->intf->dev, "Using default maximum transmit length=%d\n",
>                         CDC_NCM_NTB_MAX_SIZE_TX);
>                 ctx->tx_max = CDC_NCM_NTB_MAX_SIZE_TX;
> 
> 
> 
> 
> 
> So there are real devices depending on a dwNtbOutMaxSize which is too
> low.  Our calculated minimum for MBIM will not fit.
> 
> So let's go back your original test for zero.  It's better than
> nothing.  I'll just ack that.

Sure, no problem.

Thanks for conversing with me.

> > Perhaps we should use max_t() here instead of clamp?
> 
> No.  That would allow userspace to set an unlimited buffer size.

Right, I see.
Bjørn Mork Dec. 3, 2021, 2:52 p.m. UTC | #7
Lee Jones <lee.jones@linaro.org> writes:

> diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
> index 24753a4da7e60..e303b522efb50 100644
> --- a/drivers/net/usb/cdc_ncm.c
> +++ b/drivers/net/usb/cdc_ncm.c
> @@ -181,6 +181,8 @@ static u32 cdc_ncm_check_tx_max(struct usbnet *dev, u32 new_tx)
>  		min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32);
>  
>  	max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
> +	if (max == 0)
> +		max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */
>  
>  	/* some devices set dwNtbOutMaxSize too low for the above default */
>  	min = min(min, max);

I believe this is the best possible fix, considering the regressions
anything stricter might cause.

We know of at least one MBIM device where dwNtbOutMaxSize is as low as
2048.

According to the MBIM spec, the minimum and default value for
wMaxSegmentSize is also 2048.  This implies that the calculated "min"
value is at least 2076, which is why we need that odd looking

  min = min(min, max);

So let's just fix this specific zero case without breaking the
non-conforming devices.


Reviewed-by: Bjørn Mork <bjorn@mork.no>
Jakub Kicinski Dec. 4, 2021, 12:57 a.m. UTC | #8
On Fri, 03 Dec 2021 15:52:48 +0100 Bjørn Mork wrote:
> Lee Jones <lee.jones@linaro.org> writes:
> 
> > diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
> > index 24753a4da7e60..e303b522efb50 100644
> > --- a/drivers/net/usb/cdc_ncm.c
> > +++ b/drivers/net/usb/cdc_ncm.c
> > @@ -181,6 +181,8 @@ static u32 cdc_ncm_check_tx_max(struct usbnet *dev, u32 new_tx)
> >  		min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32);
> >  
> >  	max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
> > +	if (max == 0)
> > +		max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */
> >  
> >  	/* some devices set dwNtbOutMaxSize too low for the above default */
> >  	min = min(min, max);  
> 
> I believe this is the best possible fix, considering the regressions
> anything stricter might cause.
> 
> We know of at least one MBIM device where dwNtbOutMaxSize is as low as
> 2048.
> 
> According to the MBIM spec, the minimum and default value for
> wMaxSegmentSize is also 2048.  This implies that the calculated "min"
> value is at least 2076, which is why we need that odd looking
> 
>   min = min(min, max);
> 
> So let's just fix this specific zero case without breaking the
> non-conforming devices.
> 
> 
> Reviewed-by: Bjørn Mork <bjorn@mork.no>

Applied to net, thanks!

Patch

diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
index 24753a4da7e60..e303b522efb50 100644
--- a/drivers/net/usb/cdc_ncm.c
+++ b/drivers/net/usb/cdc_ncm.c
@@ -181,6 +181,8 @@  static u32 cdc_ncm_check_tx_max(struct usbnet *dev, u32 new_tx)
 		min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32);
 
 	max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
+	if (max == 0)
+		max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */
 
 	/* some devices set dwNtbOutMaxSize too low for the above default */
 	min = min(min, max);