
[1/3] staging: ozwpan: Remove redundant null check before kfree in ozpd.c

Message ID 1353411610-3401-1-git-send-email-sachin.kamat@linaro.org
State Accepted

Commit Message

Sachin Kamat Nov. 20, 2012, 11:40 a.m. UTC
kfree on NULL pointer is a no-op.

Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
---
 drivers/staging/ozwpan/ozpd.c |    6 ++----
 1 files changed, 2 insertions(+), 4 deletions(-)

Comments

Rupesh Gujare Nov. 20, 2012, 12:09 p.m. UTC | #1
On 20/11/12 11:40, Sachin Kamat wrote:
> kfree on NULL pointer is a no-op.
>
> Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
> ---
>   drivers/staging/ozwpan/ozpd.c |    6 ++----
>   1 files changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/staging/ozwpan/ozpd.c b/drivers/staging/ozwpan/ozpd.c
> index 0b3648c..118a4db 100644
> --- a/drivers/staging/ozwpan/ozpd.c
> +++ b/drivers/staging/ozwpan/ozpd.c
> @@ -402,8 +402,7 @@ static void oz_tx_frame_free(struct oz_pd *pd, struct oz_tx_frame *f)
>   		f = 0;
>   	}
>   	spin_unlock_bh(&pd->tx_frame_lock);
> -	if (f)
> -		kfree(f);
> +	kfree(f);
>   }
>   /*------------------------------------------------------------------------------
>    * Context: softirq-serialized
> @@ -737,8 +736,7 @@ int oz_isoc_stream_create(struct oz_pd *pd, u8 ep_num)
>   		st = 0;
>   	}
>   	spin_unlock_bh(&pd->stream_lock);
> -	if (st)
> -		kfree(st);
> +	kfree(st);
>   	return 0;
>   }
>   /*------------------------------------------------------------------------------

Thanks Sachin. However, I don't think I can take this patch series for the
following reasons:

1. Removing the pointer check confuses the reader about the logic flow.
2. We can save a few CPU cycles by checking for a NULL pointer here,
rather than relying on kfree().
Dan Carpenter Nov. 20, 2012, 2:16 p.m. UTC | #2
On Tue, Nov 20, 2012 at 12:09:57PM +0000, Rupesh Gujare wrote:
> 2. We can save a few CPU cycles by checking for a NULL pointer here,
> rather than relying on kfree().

I'm not sure about that.

But more importantly does any of the memory pools code actually make
a difference in benchmarks?  What is the difference between running
with OZ_MAX_TX_POOL_SIZE as zero and it set to 6?  We have this
very complicated code with spinlocks and linked lists to save six
56 byte allocations...  It's silly.  :P  Just use kmalloc().

regards,
dan carpenter
Rupesh Gujare Nov. 20, 2012, 4:18 p.m. UTC | #3
On 20/11/12 14:16, Dan Carpenter wrote:
> But more importantly does any of the memory pools code actually make a 
> difference in benchmarks? What is the difference between running with 
> OZ_MAX_TX_POOL_SIZE as zero and it set to 6? 

Agreed. That is a good test to run.

> We have this very complicated code with spinlocks and linked lists to 
> save six 56 byte allocations... It's silly. :P Just use kmalloc(). 

We will probably need to kmalloc() and kfree() a few hundred times every
second while data transfer is taking place. By maintaining a pool of 6,
we avoid calling kmalloc() frequently.  Although I agree that some
benchmarking is required here to verify that we actually get some
advantage from the current implementation.
Greg Kroah-Hartman Nov. 20, 2012, 4:35 p.m. UTC | #4
On Tue, Nov 20, 2012 at 12:09:57PM +0000, Rupesh Gujare wrote:
> On 20/11/12 11:40, Sachin Kamat wrote:
> >kfree on NULL pointer is a no-op.
> >
> >Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
> >---
> >  drivers/staging/ozwpan/ozpd.c |    6 ++----
> >  1 files changed, 2 insertions(+), 4 deletions(-)
> >
> >diff --git a/drivers/staging/ozwpan/ozpd.c b/drivers/staging/ozwpan/ozpd.c
> >index 0b3648c..118a4db 100644
> >--- a/drivers/staging/ozwpan/ozpd.c
> >+++ b/drivers/staging/ozwpan/ozpd.c
> >@@ -402,8 +402,7 @@ static void oz_tx_frame_free(struct oz_pd *pd, struct oz_tx_frame *f)
> >  		f = 0;
> >  	}
> >  	spin_unlock_bh(&pd->tx_frame_lock);
> >-	if (f)
> >-		kfree(f);
> >+	kfree(f);
> >  }
> >  /*------------------------------------------------------------------------------
> >   * Context: softirq-serialized
> >@@ -737,8 +736,7 @@ int oz_isoc_stream_create(struct oz_pd *pd, u8 ep_num)
> >  		st = 0;
> >  	}
> >  	spin_unlock_bh(&pd->stream_lock);
> >-	if (st)
> >-		kfree(st);
> >+	kfree(st);
> >  	return 0;
> >  }
> >  /*------------------------------------------------------------------------------
> 
> Thanks Sachin. However, I don't think I can take this patch series
> for the following reasons:
> 
> 1. Removing the pointer check confuses the reader about the logic flow.

No it doesn't, this is a normal kernel pattern.

> 2. We can save a few CPU cycles by checking for a NULL pointer here,
> rather than relying on kfree().

Nope, you shouldn't have to worry about this.  If this is a fast-path,
then you shouldn't be using kfree() in the first place :)

thanks,

greg k-h
Dan Carpenter Nov. 21, 2012, 1:09 p.m. UTC | #5
On Tue, Nov 20, 2012 at 04:18:06PM +0000, Rupesh Gujare wrote:
> On 20/11/12 14:16, Dan Carpenter wrote:
> >But more importantly does any of the memory pools code actually
> >make a difference in benchmarks? What is the difference between
> >running with OZ_MAX_TX_POOL_SIZE as zero and it set to 6?
> 
> Agreed. That is a good test to run.
> 
> >We have this very complicated code with spinlocks and linked lists
> >to save six 56 byte allocations... It's silly. :P Just use
> >kmalloc().
> 
> We will probably need to kmalloc() and kfree() a few hundred times
> every second while data transfer is taking place. By maintaining a
> pool of 6, we avoid calling kmalloc() frequently.  Although I agree
> that some benchmarking is required here to verify that we actually
> get some advantage from the current implementation.
> 

The kernel already provides a way to handle this.  Look at
kmem_cache_create().

regards,
dan carpenter
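The kmem_cache approach Dan points to could look roughly like the sketch
below. This is a hypothetical illustration, not code from the patch: the
function names (oz_tx_frame_cache_init() and friends) are made up, and only
struct oz_tx_frame and the GFP_ATOMIC context come from the driver. A
dedicated slab cache gives the same "recycle fixed-size objects" behavior as
the hand-rolled pool, without the private spinlock and linked list:

```c
/* Hypothetical sketch: a slab cache for struct oz_tx_frame allocations. */
#include <linux/slab.h>

static struct kmem_cache *oz_tx_frame_cache;

static int oz_tx_frame_cache_init(void)
{
	/* One fixed-size cache shared by all tx-frame allocations. */
	oz_tx_frame_cache = kmem_cache_create("oz_tx_frame",
					      sizeof(struct oz_tx_frame),
					      0, 0, NULL);
	return oz_tx_frame_cache ? 0 : -ENOMEM;
}

static struct oz_tx_frame *oz_tx_frame_alloc_cached(void)
{
	/* GFP_ATOMIC because the driver allocates in softirq context. */
	return kmem_cache_alloc(oz_tx_frame_cache, GFP_ATOMIC);
}

static void oz_tx_frame_free_cached(struct oz_tx_frame *f)
{
	/* Unlike kfree(), kmem_cache_free() is not guaranteed NULL-safe,
	 * so a check is warranted here. */
	if (f)
		kmem_cache_free(oz_tx_frame_cache, f);
}
```

The slab allocator keeps freed objects in per-CPU caches, so repeated
alloc/free cycles at a few hundred per second would mostly hit already-warm
memory, which is exactly what the private pool was trying to achieve.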
Rupesh Gujare Nov. 21, 2012, 2:41 p.m. UTC | #6
On 21/11/12 13:09, Dan Carpenter wrote:
> On Tue, Nov 20, 2012 at 04:18:06PM +0000, Rupesh Gujare wrote:
>> On 20/11/12 14:16, Dan Carpenter wrote:
>>> But more importantly does any of the memory pools code actually
>>> make a difference in benchmarks? What is the difference between
>>> running with OZ_MAX_TX_POOL_SIZE as zero and it set to 6?
>> Agreed. That is a good test to run.
>>
>>> We have this very complicated code with spinlocks and linked lists
>>> to save six 56 byte allocations... It's silly. :P Just use
>>> kmalloc().
>> We will probably need to kmalloc() and kfree() a few hundred times
>> every second while data transfer is taking place. By maintaining a
>> pool of 6, we avoid calling kmalloc() frequently.  Although I agree
>> that some benchmarking is required here to verify that we actually
>> get some advantage from the current implementation.
>>
> The kernel already provides a way to handle this.  Look at
> kmem_cache_create().
>

Thanks Dan,

I will have a look at it.

Patch

diff --git a/drivers/staging/ozwpan/ozpd.c b/drivers/staging/ozwpan/ozpd.c
index 0b3648c..118a4db 100644
--- a/drivers/staging/ozwpan/ozpd.c
+++ b/drivers/staging/ozwpan/ozpd.c
@@ -402,8 +402,7 @@  static void oz_tx_frame_free(struct oz_pd *pd, struct oz_tx_frame *f)
 		f = 0;
 	}
 	spin_unlock_bh(&pd->tx_frame_lock);
-	if (f)
-		kfree(f);
+	kfree(f);
 }
 /*------------------------------------------------------------------------------
  * Context: softirq-serialized
@@ -737,8 +736,7 @@  int oz_isoc_stream_create(struct oz_pd *pd, u8 ep_num)
 		st = 0;
 	}
 	spin_unlock_bh(&pd->stream_lock);
-	if (st)
-		kfree(st);
+	kfree(st);
 	return 0;
 }
 /*------------------------------------------------------------------------------