usb: dwc3: gadget: Fix BUG in RT config

Message ID 1442822916-26238-1-git-send-email-rogerq@ti.com
State Accepted
Commit a66c275b3d5d8467d770dabd30927f5d5e857294
Headers show

Commit Message

Roger Quadros Sept. 21, 2015, 8:08 a.m. UTC
Using spin_lock() in hard irq handler is pointless
and causes a BUG() in RT (real-time) configuration
so get rid of it.

Signed-off-by: Roger Quadros <rogerq@ti.com>
---
 drivers/usb/dwc3/gadget.c | 4 ----
 1 file changed, 4 deletions(-)

Comments

Alan Stern Sept. 21, 2015, 2:31 p.m. UTC | #1
On Mon, 21 Sep 2015, Roger Quadros wrote:

> Using spin_lock() in hard irq handler is pointless
> and causes a BUG() in RT (real-time) configuration
> so get rid of it.

Wait a minute.  Who says spin_lock is pointless in an IRQ handler?
And who says it causes a BUG in RT configurations?

And if those claims are true, why do we have spin_lock in IRQ handlers 
for various host controller drivers?

Alan Stern

Felipe Balbi Sept. 21, 2015, 2:40 p.m. UTC | #2
On Mon, Sep 21, 2015 at 10:31:15AM -0400, Alan Stern wrote:
> On Mon, 21 Sep 2015, Roger Quadros wrote:
> 
> > Using spin_lock() in hard irq handler is pointless
> > and causes a BUG() in RT (real-time) configuration
> > so get rid of it.
> 
> Wait a minute.  Who says spin_lock is pointless in an IRQ handler?

in the top half IRQs are already disabled, how can this race ?

> And who says it causes a BUG in RT configurations?

spin_locks are reimplemented as RT-aware mutexes when the RT patch is applied.

> And if those claims are true, why do we have spin_lock in IRQ handlers 
> for various host controller drivers?

because they might be wrong too ? :-p Or I'm missing something, but if IRQs
are disabled in top half, how can this race ? Unless you're running in a
thread without IRQF_ONESHOT.
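
For reference, the IRQF_ONESHOT case mentioned above would be set up
roughly like this minimal sketch (my_dev and my_thread_fn are made-up
names, not dwc3 code):

#include <linux/interrupt.h>

struct my_dev;

/* threaded handler, defined elsewhere; runs with the irq line still masked */
static irqreturn_t my_thread_fn(int irq, void *data);

static int my_dev_request_irq(struct my_dev *dev, int irq)
{
	/*
	 * No custom top half: the core's default primary handler just
	 * returns IRQ_WAKE_THREAD, and IRQF_ONESHOT keeps the line masked
	 * until my_thread_fn() returns, so the thread cannot race with a
	 * re-fired hard irq on the same line.
	 */
	return request_threaded_irq(irq, NULL, my_thread_fn,
				    IRQF_ONESHOT, "my_dev", dev);
}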
Alan Stern Sept. 21, 2015, 2:50 p.m. UTC | #3
On Mon, 21 Sep 2015, Felipe Balbi wrote:

> On Mon, Sep 21, 2015 at 10:31:15AM -0400, Alan Stern wrote:
> > On Mon, 21 Sep 2015, Roger Quadros wrote:
> > 
> > > Using spin_lock() in hard irq handler is pointless
> > > and causes a BUG() in RT (real-time) configuration
> > > so get rid of it.
> > 
> > Wait a minute.  Who says spin_lock is pointless in an IRQ handler?
> 
> in the top half IRQs are already disabled, how can this race ?

It can race with code running on a different CPU.

> > And who says it causes a BUG in RT configurations?
> 
> spin_locks are reimplemented as RT-aware mutexes when the RT patch is applied.
> 
> > And if those claims are true, why do we have spin_lock in IRQ handlers 
> > for various host controller drivers?
> 
> because they might be wrong too ? :-p Or I'm missing something, but if IRQs
> are disabled in top half, how can this race ? Unless you're running in a
> thread without IRQF_ONESHOT.

I just looked more closely.  The difference is that dwc3 uses a 
threaded interrupt handler whereas those other drivers don't.

Alan Stern

Felipe Balbi Sept. 21, 2015, 2:54 p.m. UTC | #4
On Mon, Sep 21, 2015 at 10:50:10AM -0400, Alan Stern wrote:
> On Mon, 21 Sep 2015, Felipe Balbi wrote:
> 
> > On Mon, Sep 21, 2015 at 10:31:15AM -0400, Alan Stern wrote:
> > > On Mon, 21 Sep 2015, Roger Quadros wrote:
> > > 
> > > > Using spin_lock() in hard irq handler is pointless
> > > > and causes a BUG() in RT (real-time) configuration
> > > > so get rid of it.
> > > 
> > > Wait a minute.  Who says spin_lock is pointless in an IRQ handler?
> > 
> > in the top half IRQs are already disabled, how can this race ?
> 
> It can race with code running on a different CPU.

fair point.

> > > And who says it causes a BUG in RT configurations?
> > 
> > spin_locks are reimplemented as RT-aware mutexes when the RT patch is applied.
> > 
> > > And if those claims are true, why do we have spin_lock in IRQ handlers 
> > > for various host controller drivers?
> > 
> > because they might be wrong too ? :-p Or I'm missing something, but if IRQs
> > are disabled in top half, how can this race ? Unless you're running in a
> > thread without IRQF_ONESHOT.
> 
> I just looked more closely.  The difference is that dwc3 uses a 
> threaded interrupt handler whereas those other drivers don't.

yeah, and we make sure to mask dwc3's interrupts before returning from
the top half.
Alan Stern Sept. 21, 2015, 3:37 p.m. UTC | #5
On Mon, 21 Sep 2015, Felipe Balbi wrote:

> On Mon, Sep 21, 2015 at 10:50:10AM -0400, Alan Stern wrote:
> > On Mon, 21 Sep 2015, Felipe Balbi wrote:
> > 
> > > On Mon, Sep 21, 2015 at 10:31:15AM -0400, Alan Stern wrote:
> > > > On Mon, 21 Sep 2015, Roger Quadros wrote:
> > > > 
> > > > > Using spin_lock() in hard irq handler is pointless
> > > > > and causes a BUG() in RT (real-time) configuration
> > > > > so get rid of it.
> > > > 
> > > > Wait a minute.  Who says spin_lock is pointless in an IRQ handler?
> > > 
> > > in the top half IRQs are already disabled, how can this race ?
> > 
> > It can race with code running on a different CPU.
> 
> fair point.

In fact, isn't this the main purpose of spinlocks?  There's no point
using a spinlock to protect against races occurring on a single CPU,
because (in non-RT situations) the kernel can't schedule or preempt
while a spinlock is held, so no race can occur.  The whole idea of
spinlocks is to protect against cross-CPU races.

Now, maybe the spinlock usage that Roger is removing really is
unnecessary in this top-half handler.  I don't know; I haven't looked
at the code.  But in general, spinlocks are highly necessary in IRQ
handlers.
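
As a minimal sketch of that cross-CPU case (hypothetical driver code,
not dwc3): the hard irq handler and a process-context path share some
state, the handler takes the lock with spin_lock() and the
process-context side uses spin_lock_irqsave() so the handler can't
deadlock it on the same CPU:

#include <linux/interrupt.h>
#include <linux/spinlock.h>

struct my_dev {
	spinlock_t	lock;
	unsigned int	pending;	/* shared between irq and ioctl paths */
};

static irqreturn_t my_irq_handler(int irq, void *data)
{
	struct my_dev *dev = data;

	/* irqs are off on this CPU, but another CPU may hold dev->lock */
	spin_lock(&dev->lock);
	dev->pending++;
	spin_unlock(&dev->lock);

	return IRQ_HANDLED;
}

static void my_process_context_path(struct my_dev *dev)
{
	unsigned long flags;

	spin_lock_irqsave(&dev->lock, flags);
	dev->pending = 0;
	spin_unlock_irqrestore(&dev->lock, flags);
}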

Alan Stern

Felipe Balbi Sept. 21, 2015, 3:50 p.m. UTC | #6
On Mon, Sep 21, 2015 at 11:37:47AM -0400, Alan Stern wrote:
> On Mon, 21 Sep 2015, Felipe Balbi wrote:
> 
> > On Mon, Sep 21, 2015 at 10:50:10AM -0400, Alan Stern wrote:
> > > On Mon, 21 Sep 2015, Felipe Balbi wrote:
> > > 
> > > > On Mon, Sep 21, 2015 at 10:31:15AM -0400, Alan Stern wrote:
> > > > > On Mon, 21 Sep 2015, Roger Quadros wrote:
> > > > > 
> > > > > > Using spin_lock() in hard irq handler is pointless
> > > > > > and causes a BUG() in RT (real-time) configuration
> > > > > > so get rid of it.
> > > > > 
> > > > > Wait a minute.  Who says spin_lock is pointless in an IRQ handler?
> > > > 
> > > > in the top half IRQs are already disabled, how can this race ?
> > > 
> > > It can race with code running on a different CPU.
> > 
> > fair point.
> 
> In fact, isn't this the main purpose of spinlocks?  There's no point
> using a spinlock to protect against races occurring on a single CPU,
> because (in non-RT situations) the kernel can't schedule or preempt
> while a spinlock is held, so no race can occur.  The whole idea of
> spinlocks is to protect against cross-CPU races.
> 
> Now, maybe the spinlock usage that Roger is removing really is
> unnecessary in this top-half handler.  I don't know; I haven't looked
> at the code.  But in general, spinlocks are highly necessary in IRQ
> handlers.

this does raise some questions of what to do then ? We can't use
spin_locks when in RT because they're reimplemented as mutexes
and can sleep. Converting every single driver to raw_spin_locks
defeats the purpose of having everything and their dogs preemptable.

Difficult to say. Let's defer this patch until we can come up
with a good answer to this.
Alan Stern Sept. 21, 2015, 4:27 p.m. UTC | #7
On Mon, 21 Sep 2015, Felipe Balbi wrote:

> On Mon, Sep 21, 2015 at 11:37:47AM -0400, Alan Stern wrote:
> > On Mon, 21 Sep 2015, Felipe Balbi wrote:
> > 
> > > On Mon, Sep 21, 2015 at 10:50:10AM -0400, Alan Stern wrote:
> > > > On Mon, 21 Sep 2015, Felipe Balbi wrote:
> > > > 
> > > > > On Mon, Sep 21, 2015 at 10:31:15AM -0400, Alan Stern wrote:
> > > > > > On Mon, 21 Sep 2015, Roger Quadros wrote:
> > > > > > 
> > > > > > > Using spin_lock() in hard irq handler is pointless
> > > > > > > and causes a BUG() in RT (real-time) configuration
> > > > > > > so get rid of it.
> > > > > > 
> > > > > > Wait a minute.  Who says spin_lock is pointless in an IRQ handler?
> > > > > 
> > > > > in the top half IRQs are already disabled, how can this race ?
> > > > 
> > > > It can race with code running on a different CPU.
> > > 
> > > fair point.
> > 
> > In fact, isn't this the main purpose of spinlocks?  There's no point
> > using a spinlock to protect against races occurring on a single CPU,
> > because (in non-RT situations) the kernel can't schedule or preempt
> > while a spinlock is held, so no race can occur.  The whole idea of
> > spinlocks is to protect against cross-CPU races.
> > 
> > Now, maybe the spinlock usage that Roger is removing really is
> > unnecessary in this top-half handler.  I don't know; I haven't looked
> > at the code.  But in general, spinlocks are highly necessary in IRQ
> > handlers.
> 
> this does raise some questions of what to do then ? We can't use
> spin_locks when in RT because they're reimplemented as mutexes
> and can sleep. Converting every single driver to raw_spin_locks
> defeats the purpose of having everything and their dogs preemptable.
> 
> Difficult to say. Let's defer this patch until we can come up
> with a good answer to this.

Someone more familiar with the -RT patches should be able to answer
these questions.
  
Steve, any advice?
  
Alan Stern

David Cohen Sept. 21, 2015, 4:46 p.m. UTC | #8
On September 21, 2015 9:27:43 AM PDT, Alan Stern <stern@rowland.harvard.edu> wrote:
>On Mon, 21 Sep 2015, Felipe Balbi wrote:
>
>> On Mon, Sep 21, 2015 at 11:37:47AM -0400, Alan Stern wrote:
>> > On Mon, 21 Sep 2015, Felipe Balbi wrote:
>> > 
>> > > On Mon, Sep 21, 2015 at 10:50:10AM -0400, Alan Stern wrote:
>> > > > On Mon, 21 Sep 2015, Felipe Balbi wrote:
>> > > > 
>> > > > > On Mon, Sep 21, 2015 at 10:31:15AM -0400, Alan Stern wrote:
>> > > > > > On Mon, 21 Sep 2015, Roger Quadros wrote:
>> > > > > > 
>> > > > > > > Using spin_lock() in hard irq handler is pointless
>> > > > > > > and causes a BUG() in RT (real-time) configuration
>> > > > > > > so get rid of it.
>> > > > > > 
>> > > > > > Wait a minute.  Who says spin_lock is pointless in an IRQ handler?
>> > > > > 
>> > > > > in the top half IRQs are already disabled, how can this race ?
>> > > > 
>> > > > It can race with code running on a different CPU.
>> > > 
>> > > fair point.
>> > 
>> > In fact, isn't this the main purpose of spinlocks?  There's no point
>> > using a spinlock to protect against races occurring on a single CPU,
>> > because (in non-RT situations) the kernel can't schedule or preempt
>> > while a spinlock is held, so no race can occur.  The whole idea of
>> > spinlocks is to protect against cross-CPU races.
>> > 
>> > Now, maybe the spinlock usage that Roger is removing really is
>> > unnecessary in this top-half handler.  I don't know; I haven't looked
>> > at the code.  But in general, spinlocks are highly necessary in IRQ
>> > handlers.
>> 
>> this does raise some questions of what to do then ? We can't use
>> spin_locks when in RT because they're reimplemented as mutexes
>> and can sleep. Converting every single driver to raw_spin_locks
>> defeats the purpose of having everything and their dogs preemptable.
>> 
>> Difficult to say. Let's defer this patch until we can come up
>> with a good answer to this.
>
>Someone more familiar with the -RT patches should be able to answer
>these questions.

Spin lock (without irqsave) can be a bug if the irq handler takes the
same lock and irqs aren't already disabled.
E.g. some context (atomic or not) takes the spin lock, then the irq handler
runs on the *same* cpu and tries to take it again. That's a deadlock.
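
A rough sketch of that scenario (made-up code, not the dwc3 driver):

#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(shared_lock);

static irqreturn_t buggy_irq_handler(int irq, void *data)
{
	spin_lock(&shared_lock);	/* spins forever against our own CPU */
	/* ... */
	spin_unlock(&shared_lock);
	return IRQ_HANDLED;
}

static void buggy_process_context(void)
{
	spin_lock(&shared_lock);	/* should be spin_lock_irqsave() */
	/* <-- irq fires here on this CPU: deadlock */
	spin_unlock(&shared_lock);
}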

I believe that's what Roger misinterpreted with his patch description.

Br, David Cohen

>  
>Steve, any advice?
>  
>Alan Stern
>

Felipe Balbi Sept. 21, 2015, 5:04 p.m. UTC | #9
Hi,

On Mon, Sep 21, 2015 at 09:46:30AM -0700, David Cohen wrote:
> On September 21, 2015 9:27:43 AM PDT, Alan Stern <stern@rowland.harvard.edu> wrote:
> >On Mon, 21 Sep 2015, Felipe Balbi wrote:
> >
> >> On Mon, Sep 21, 2015 at 11:37:47AM -0400, Alan Stern wrote:
> >> > On Mon, 21 Sep 2015, Felipe Balbi wrote:
> >> > 
> >> > > On Mon, Sep 21, 2015 at 10:50:10AM -0400, Alan Stern wrote:
> >> > > > On Mon, 21 Sep 2015, Felipe Balbi wrote:
> >> > > > 
> >> > > > > On Mon, Sep 21, 2015 at 10:31:15AM -0400, Alan Stern wrote:
> >> > > > > > On Mon, 21 Sep 2015, Roger Quadros wrote:
> >> > > > > > 
> >> > > > > > > Using spin_lock() in hard irq handler is pointless
> >> > > > > > > and causes a BUG() in RT (real-time) configuration
> >> > > > > > > so get rid of it.
> >> > > > > > 
> >> > > > > > Wait a minute.  Who says spin_lock is pointless in an IRQ handler?
> >> > > > > 
> >> > > > > in the top half IRQs are already disabled, how can this race ?
> >> > > > 
> >> > > > It can race with code running on a different CPU.
> >> > > 
> >> > > fair point.
> >> > 
> >> > In fact, isn't this the main purpose of spinlocks?  There's no point
> >> > using a spinlock to protect against races occurring on a single CPU,
> >> > because (in non-RT situations) the kernel can't schedule or preempt
> >> > while a spinlock is held, so no race can occur.  The whole idea of
> >> > spinlocks is to protect against cross-CPU races.
> >> > 
> >> > Now, maybe the spinlock usage that Roger is removing really is
> >> > unnecessary in this top-half handler.  I don't know; I haven't looked
> >> > at the code.  But in general, spinlocks are highly necessary in IRQ
> >> > handlers.
> >> 
> >> this does raise some questions of what to do then ? We can't use
> >> spin_locks when in RT because they're reimplemented as mutexes
> >> and can sleep. Converting every single driver to raw_spin_locks
> >> defeats the purpose of having everything and their dogs preemptable.
> >> 
> >> Difficult to say. Let's defer this patch until we can come up
> >> with a good answer to this.
> >
> >Someone more familiar with the -RT patches should be able to answer
> >these questions.
> 
> Spin lock (without irqsave) may have a bug if the irq handler spin locks the
> same one, and the irqs aren't already disabled. 
> E.g. any context (atomic or not) spin locks, then the irq handler takes place on
> *same* cpu and tries to spin lock again. It will be a deadlock.
> 
> I believe that's what Roger misinterpreted with his patch description.

no, no. you're missing the point here. The problem is that when RT
is applied, spinlocks get reimplemented as RT-aware mutexes which
works pretty well as long as you don't install your own top and bottom
halves. If you do, RT patch can't force your handler to run as a thread
and you're left with, essentially, a mutex for synchronization in
hardirq context.

So the question is really for Steve, what should we do here ? Use a
raw_spinlock ? I'd like to avoid that if possible. If we have another
option, I'm all ears.
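
For comparison, the raw_spinlock route would look roughly like the
hypothetical sketch below. Raw spinlocks stay real spinning locks even
with the RT patch applied, so they are legal in a hard irq handler, but
every critical section becomes non-preemptible again:

#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(hw_lock);

static irqreturn_t hardirq_top_half(int irq, void *data)
{
	raw_spin_lock(&hw_lock);	/* never turns into a sleeping lock on RT */
	/* ... short, bounded hardware access only ... */
	raw_spin_unlock(&hw_lock);

	return IRQ_WAKE_THREAD;
}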
Steven Rostedt Sept. 21, 2015, 7:01 p.m. UTC | #10
On Mon, 21 Sep 2015 12:04:16 -0500
Felipe Balbi <balbi@ti.com> wrote:


> no, no. you're missing the point here. The problem is that when RT
> is applied, spinlocks get reimplemented as RT-aware mutexes which
> works pretty well as long as you don't install your own top and bottom
> halves. If you do, RT patch can't force your handler to run as a thread
> and you're left with, essentially, a mutex for synchronization in
> hardirq context.
> 
> So the question is really for Steve, what should we do here ? Use a
> raw_spinlock ? I'd like to avoid that if possible. If we have another
> option, I'm all ears.
> 

I may be missing something here. Yes, in RT spinlocks become rtmutexes
and can sleep. But in RT all interrupts become threads. With the slight
exception that if you declare your interrupt handler as a thread, then
you do have a top half handler. This top half should not be grabbing
any spin locks, and should simply disable any more interrupts from
happening on the device until the thread handler can run.
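
In code, that pattern would look roughly like the sketch below
(hypothetical device and register names, not the actual dwc3 registers):

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/spinlock.h>

#define MY_IRQ_ENABLE	0x08		/* made-up register offset */
#define MY_IRQ_ALL	0xffffffff

struct my_dev {
	void __iomem	*regs;
	spinlock_t	lock;
};

static irqreturn_t my_hardirq(int irq, void *data)
{
	struct my_dev *dev = data;

	/* no locks here: just stop the device from raising more interrupts */
	writel(0, dev->regs + MY_IRQ_ENABLE);

	return IRQ_WAKE_THREAD;
}

static irqreturn_t my_thread_fn(int irq, void *data)
{
	struct my_dev *dev = data;

	spin_lock_irq(&dev->lock);	/* a sleeping lock is fine in a thread */
	/* ... handle the events, then let the device interrupt again ... */
	writel(MY_IRQ_ALL, dev->regs + MY_IRQ_ENABLE);
	spin_unlock_irq(&dev->lock);

	return IRQ_HANDLED;
}

static int my_request_irq(struct my_dev *dev, int irq)
{
	/* the custom top half masks the source itself, so no IRQF_ONESHOT */
	return request_threaded_irq(irq, my_hardirq, my_thread_fn,
				    0, "my_dev", dev);
}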

-- Steve

Felipe Balbi Sept. 21, 2015, 7:09 p.m. UTC | #11
On Mon, Sep 21, 2015 at 03:01:42PM -0400, Steven Rostedt wrote:
> On Mon, 21 Sep 2015 12:04:16 -0500
> Felipe Balbi <balbi@ti.com> wrote:
> 
> 
> > no, no. you're missing the point here. The problem is that when RT
> > is applied, spinlocks get reimplemented as RT-aware mutexes which
> > works pretty well as long as you don't install your own top and bottom
> > halves. If you do, RT patch can't force your handler to run as a thread
> > and you're left with, essentially, a mutex for synchronization in
> > hardirq context.
> > 
> > So the question is really for Steve, what should we do here ? Use a
> > raw_spinlock ? I'd like to avoid that if possible. If we have another
> > option, I'm all ears.
> > 
> 
> I may be missing something here. Yes, in RT spinlocks become rtmutexes
> and can sleep. But in RT all interrupts become threads. With the slight
> exception that if you declare your interrupt handler as a thread, then
> you do have a top half handler. This top half should not be grabbing
> any spin locks, and should simply disable any more interrupts from
> happening on the device until the thread handler can run.

and that's basically what we're doing. $SUBJECT is removing the spin lock for
that reason. Register accesses should be atomic anyway, right?
Alan Stern Sept. 21, 2015, 7:48 p.m. UTC | #12
On Mon, 21 Sep 2015, Felipe Balbi wrote:

> On Mon, Sep 21, 2015 at 03:01:42PM -0400, Steven Rostedt wrote:
> > On Mon, 21 Sep 2015 12:04:16 -0500
> > Felipe Balbi <balbi@ti.com> wrote:
> > 
> > 
> > > no, no. you're missing the point here. The problem is that when RT
> > > is applied, spinlocks get reimplemented as RT-aware mutexes which
> > > works pretty well as long as you don't install your own top and bottom
> > > halves. If you do, RT patch can't force your handler to run as a thread
> > > and you're left with, essentially, a mutex for synchronization in
> > > hardirq context.
> > > 
> > > So the question is really for Steve, what should we do here ? Use a
> > > raw_spinlock ? I'd like to avoid that if possible. If we have another
> > > option, I'm all ears.
> > > 
> > 
> > I may be missing something here. Yes, in RT spinlocks become rtmutexes
> > and can sleep. But in RT all interrupts become threads. With the slight
> > exception that if you declare your interrupt handler as a thread, then
> > you do have a top half handler. This top half should not be grabbing
> > any spin locks, and should simply disable any more interrupts from
> > happening on the device until the thread handler can run.
> 
> and that's basically what we're doing. $SUBJECT is removing the spin lock for
> that reason. Register accesses should be atomic anyway, right?

I guess the patch is the right thing to do.  But the description sure
could be improved.  Roger wrote: "Using spin_lock() in hard irq handler
is pointless and causes a BUG() in RT (real-time) configuration",
which is ungrammatical as well as misleading.

It should say something more along the lines of:

	Top-half handlers for threaded interrupts are not supposed to 
	acquire any locks.  None are needed because the top half is 
	guaranteed to be mutually exclusive with the bottom-half 
	thread handler.  Furthermore, spin_lock() in a top-half handler
	causes a BUG() in RT-enabled kernels.

	Remove the unnecessary spin_lock() in dwc3_interrupt().

Alan Stern

Steven Rostedt Sept. 21, 2015, 8 p.m. UTC | #13
On Mon, 21 Sep 2015 15:48:53 -0400 (EDT)
Alan Stern <stern@rowland.harvard.edu> wrote:

> It should say something more along the lines of:
> 
> 	Top-half handlers for threaded interrupts are not supposed to 
> 	acquire any locks.  None are needed because the top half is 
> 	guaranteed to be mutually exclusive with the bottom-half 
> 	thread handler.  Furthermore, spin_lock() in a top-half handler
> 	causes a BUG() in RT-enabled kernels.
> 
> 	Remove the unnecessary spin_lock() in dwc3_interrupt().

Ack!

-- Steve

> 
> Alan Stern

Felipe Balbi Sept. 21, 2015, 9:02 p.m. UTC | #14
On Mon, Sep 21, 2015 at 03:48:53PM -0400, Alan Stern wrote:
> On Mon, 21 Sep 2015, Felipe Balbi wrote:
> 
> > On Mon, Sep 21, 2015 at 03:01:42PM -0400, Steven Rostedt wrote:
> > > On Mon, 21 Sep 2015 12:04:16 -0500
> > > Felipe Balbi <balbi@ti.com> wrote:
> > > 
> > > 
> > > > no, no. you're missing the point here. The problem is that when RT
> > > > is applied, spinlocks get reimplemented as RT-aware mutexes which
> > > > works pretty well as long as you don't install your own top and bottom
> > > > halves. If you do, RT patch can't force your handler to run as a thread
> > > > and you're left with, essentially, a mutex for synchronization in
> > > > hardirq context.
> > > > 
> > > > So the question is really for Steve, what should we do here ? Use a
> > > > raw_spinlock ? I'd like to avoid that if possible. If we have another
> > > > option, I'm all ears.
> > > > 
> > > 
> > > I may be missing something here. Yes, in RT spinlocks become rtmutexes
> > > and can sleep. But in RT all interrupts become threads. With the slight
> > > exception that if you declare your interrupt handler as a thread, then
> > > you do have a top half handler. This top half should not be grabbing
> > > any spin locks, and should simply disable any more interrupts from
> > > happening on the device until the thread handler can run.
> > 
> > and that's basically what we're doing. $SUBJECT is removing the spin lock for
> > that reason. Register accesses should be atomic anyway, right?
> 
> I guess the patch is the right thing to do.  But the description sure
> could be improved.  Roger wrote: "Using spin_lock() in hard irq handler
> is pointless and causes a BUG() in RT (real-time) configuration",
> which is ungrammatical as well as misleading.
> 
> It should say something more along the lines of:
> 
> 	Top-half handlers for threaded interrupts are not supposed to 
> 	acquire any locks.  None are needed because the top half is 
> 	guaranteed to be mutually exclusive with the bottom-half 
> 	thread handler.  Furthermore, spin_lock() in a top-half handler
> 	causes a BUG() in RT-enabled kernels.
> 
> 	Remove the unnecessary spin_lock() in dwc3_interrupt().

I kept Roger's commit log and just added one extra paragraph:

Author: Roger Quadros <rogerq@ti.com>
Date:   Mon Sep 21 11:08:36 2015 +0300

    usb: dwc3: gadget: Fix BUG in RT config
    
    Using spin_lock() in hard irq handler is pointless
    and causes a BUG() in RT (real-time) configuration
    so get rid of it.
    
    The reason it's pointless is because the driver is
    basically accessing register which is, anyways,
    atomic.
    
    Signed-off-by: Roger Quadros <rogerq@ti.com>
    Signed-off-by: Felipe Balbi <balbi@ti.com>

Probably should have added the detail that top and bottom
halves are guaranteed to be mutually exclusive, sorry.

Patch

diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 0c25704..1e8bdf8 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -2665,8 +2665,6 @@  static irqreturn_t dwc3_interrupt(int irq, void *_dwc)
 	int				i;
 	irqreturn_t			ret = IRQ_NONE;
 
-	spin_lock(&dwc->lock);
-
 	for (i = 0; i < dwc->num_event_buffers; i++) {
 		irqreturn_t status;
 
@@ -2675,8 +2673,6 @@  static irqreturn_t dwc3_interrupt(int irq, void *_dwc)
 			ret = status;
 	}
 
-	spin_unlock(&dwc->lock);
-
 	return ret;
 }