From patchwork Thu May 18 20:21:41 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Arnd Bergmann <arnd@arndb.de>
X-Patchwork-Id: 100131
From: Arnd Bergmann <arnd@arndb.de>
To: Srinivas Pandruvada, Jiri Kosina
Cc: linux-input@vger.kernel.org, linux-kernel@vger.kernel.org, arnd@arndb.de
Subject: [PATCH v2 2/5] HID: intel_ish-hid: clarify locking in client code
Date: Thu, 18 May 2017 22:21:41 +0200
Message-Id: <20170518202144.3482304-3-arnd@arndb.de>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20170518202144.3482304-1-arnd@arndb.de>
References: <20170518202144.3482304-1-arnd@arndb.de>

I was trying to understand this code while working on a warning fix,
and the locking made no sense: spin_lock_irqsave() is pointless when
run inside of an interrupt handler or nested inside of another
spin_lock_irq() or spin_lock_irqsave().

Here it turned out that the comment above the function is wrong, as
both recv_ishtp_cl_msg_dma() and recv_ishtp_cl_msg() can in fact be
called from a work queue rather than an ISR, so we do have to use the
irqsave() version once.

This fixes the comments accordingly, removes the misleading 'dev_flags'
variable and modifies the inner spinlock to not use 'irqsave'.
No functional change is intended; this is just for readability, and it
slightly simplifies the object code.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 drivers/hid/intel-ish-hid/ishtp/client.c | 43 +++++++++++++-------------------
 1 file changed, 17 insertions(+), 26 deletions(-)

-- 
2.9.0
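For context, the locking rule being applied here can be illustrated with a
minimal, hypothetical sketch. The 'demo_dev' structure and its two locks
below are made-up stand-ins for ishtp_device's read_list_spinlock and the
per-client free_list_spinlock, not code from this patch: the outer lock
needs the irqsave variant because the caller may run from a work queue with
interrupts enabled, while the nested inner lock can be a plain
spin_lock()/spin_unlock() because interrupts are already disabled there.

/*
 * Illustrative sketch only -- 'demo_dev', 'demo_recv' and their locks are
 * hypothetical stand-ins, not code from this patch.
 */
#include <linux/spinlock.h>

struct demo_dev {
	spinlock_t read_list_lock;	/* outer lock, taken first */
	spinlock_t free_list_lock;	/* inner lock, only taken nested */
};

static void demo_recv(struct demo_dev *dev)
{
	unsigned long flags;

	/*
	 * Callers may be an ISR *or* a work queue, so interrupts are not
	 * necessarily disabled yet: use the irqsave variant here.
	 */
	spin_lock_irqsave(&dev->read_list_lock, flags);

	/*
	 * Interrupts are now disabled by the outer irqsave, so the nested
	 * lock does not need its own irqsave/irqrestore pair.
	 */
	spin_lock(&dev->free_list_lock);
	/* ... manipulate the free list ... */
	spin_unlock(&dev->free_list_lock);

	spin_unlock_irqrestore(&dev->read_list_lock, flags);
}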
diff --git a/drivers/hid/intel-ish-hid/ishtp/client.c b/drivers/hid/intel-ish-hid/ishtp/client.c
index 78d393e616a4..f54689ee67e1 100644
--- a/drivers/hid/intel-ish-hid/ishtp/client.c
+++ b/drivers/hid/intel-ish-hid/ishtp/client.c
@@ -803,7 +803,7 @@ void ishtp_cl_send_msg(struct ishtp_device *dev, struct ishtp_cl *cl)
  * @ishtp_hdr: Pointer to message header
  *
  * Receive and dispatch ISHTP client messages. This function executes in ISR
- * context
+ * or work queue context
  */
 void recv_ishtp_cl_msg(struct ishtp_device *dev,
 		       struct ishtp_msg_hdr *ishtp_hdr)
@@ -813,7 +813,6 @@ void recv_ishtp_cl_msg(struct ishtp_device *dev,
 	struct ishtp_cl_rb *new_rb;
 	unsigned char *buffer = NULL;
 	struct ishtp_cl_rb *complete_rb = NULL;
-	unsigned long dev_flags;
 	unsigned long flags;
 	int rb_count;
 
@@ -828,7 +827,7 @@ void recv_ishtp_cl_msg(struct ishtp_device *dev,
 		goto eoi;
 	}
 
-	spin_lock_irqsave(&dev->read_list_spinlock, dev_flags);
+	spin_lock_irqsave(&dev->read_list_spinlock, flags);
 	rb_count = -1;
 	list_for_each_entry(rb, &dev->read_list.list, list) {
 		++rb_count;
@@ -840,8 +839,7 @@ void recv_ishtp_cl_msg(struct ishtp_device *dev,
 
 		/* If no Rx buffer is allocated, disband the rb */
 		if (rb->buffer.size == 0 || rb->buffer.data == NULL) {
-			spin_unlock_irqrestore(&dev->read_list_spinlock,
-				dev_flags);
+			spin_unlock_irqrestore(&dev->read_list_spinlock, flags);
 			dev_err(&cl->device->dev,
 				"Rx buffer is not allocated.\n");
 			list_del(&rb->list);
@@ -857,8 +855,7 @@ void recv_ishtp_cl_msg(struct ishtp_device *dev,
 		 * back FC, so communication will be stuck anyway)
 		 */
 		if (rb->buffer.size < ishtp_hdr->length + rb->buf_idx) {
-			spin_unlock_irqrestore(&dev->read_list_spinlock,
-				dev_flags);
+			spin_unlock_irqrestore(&dev->read_list_spinlock, flags);
 			dev_err(&cl->device->dev,
 				"message overflow. size %d len %d idx %ld\n",
 				rb->buffer.size, ishtp_hdr->length,
@@ -884,14 +881,13 @@ void recv_ishtp_cl_msg(struct ishtp_device *dev,
 			 * the whole msg arrived, send a new FC, and add a new
 			 * rb buffer for the next coming msg
 			 */
-			spin_lock_irqsave(&cl->free_list_spinlock, flags);
+			spin_lock(&cl->free_list_spinlock);
 
 			if (!list_empty(&cl->free_rb_list.list)) {
 				new_rb = list_entry(cl->free_rb_list.list.next,
 					struct ishtp_cl_rb, list);
 				list_del_init(&new_rb->list);
-				spin_unlock_irqrestore(&cl->free_list_spinlock,
-					flags);
+				spin_unlock(&cl->free_list_spinlock);
 				new_rb->cl = cl;
 				new_rb->buf_idx = 0;
 				INIT_LIST_HEAD(&new_rb->list);
@@ -900,8 +896,7 @@ void recv_ishtp_cl_msg(struct ishtp_device *dev,
 
 				ishtp_hbm_cl_flow_control_req(dev, cl);
 			} else {
-				spin_unlock_irqrestore(&cl->free_list_spinlock,
-					flags);
+				spin_unlock(&cl->free_list_spinlock);
 			}
 		}
 		/* One more fragment in message (even if this was last) */
@@ -914,7 +909,7 @@ void recv_ishtp_cl_msg(struct ishtp_device *dev,
 		break;
 	}
 
-	spin_unlock_irqrestore(&dev->read_list_spinlock, dev_flags);
+	spin_unlock_irqrestore(&dev->read_list_spinlock, flags);
 	/* If it's nobody's message, just read and discard it */
 	if (!buffer) {
 		uint8_t rd_msg_buf[ISHTP_RD_MSG_BUF_SIZE];
@@ -941,7 +936,7 @@ void recv_ishtp_cl_msg(struct ishtp_device *dev,
  * @hbm: hbm buffer
  *
  * Receive and dispatch ISHTP client messages using DMA. This function executes
- * in ISR context
+ * in ISR or work queue context
  */
 void recv_ishtp_cl_msg_dma(struct ishtp_device *dev, void *msg,
 			   struct dma_xfer_hbm *hbm)
@@ -951,10 +946,10 @@ void recv_ishtp_cl_msg_dma(struct ishtp_device *dev, void *msg,
 	struct ishtp_cl_rb *new_rb;
 	unsigned char *buffer = NULL;
 	struct ishtp_cl_rb *complete_rb = NULL;
-	unsigned long dev_flags;
 	unsigned long flags;
 
-	spin_lock_irqsave(&dev->read_list_spinlock, dev_flags);
+	spin_lock_irqsave(&dev->read_list_spinlock, flags);
+
 	list_for_each_entry(rb, &dev->read_list.list, list) {
 		cl = rb->cl;
 		if (!cl || !(cl->host_client_id == hbm->host_client_id &&
@@ -966,8 +961,7 @@ void recv_ishtp_cl_msg_dma(struct ishtp_device *dev, void *msg,
 		 * If no Rx buffer is allocated, disband the rb
 		 */
 		if (rb->buffer.size == 0 || rb->buffer.data == NULL) {
-			spin_unlock_irqrestore(&dev->read_list_spinlock,
-				dev_flags);
+			spin_unlock_irqrestore(&dev->read_list_spinlock, flags);
 			dev_err(&cl->device->dev,
 				"response buffer is not allocated.\n");
 			list_del(&rb->list);
@@ -983,8 +977,7 @@ void recv_ishtp_cl_msg_dma(struct ishtp_device *dev, void *msg,
 		 * back FC, so communication will be stuck anyway)
 		 */
 		if (rb->buffer.size < hbm->msg_length) {
-			spin_unlock_irqrestore(&dev->read_list_spinlock,
-				dev_flags);
+			spin_unlock_irqrestore(&dev->read_list_spinlock, flags);
 			dev_err(&cl->device->dev,
 				"message overflow. size %d len %d idx %ld\n",
 				rb->buffer.size, hbm->msg_length, rb->buf_idx);
@@ -1008,14 +1001,13 @@ void recv_ishtp_cl_msg_dma(struct ishtp_device *dev, void *msg,
 		 * the whole msg arrived, send a new FC, and add a new
 		 * rb buffer for the next coming msg
 		 */
-		spin_lock_irqsave(&cl->free_list_spinlock, flags);
+		spin_lock(&cl->free_list_spinlock);
 
 		if (!list_empty(&cl->free_rb_list.list)) {
 			new_rb = list_entry(cl->free_rb_list.list.next,
 				struct ishtp_cl_rb, list);
 			list_del_init(&new_rb->list);
-			spin_unlock_irqrestore(&cl->free_list_spinlock,
-				flags);
+			spin_unlock(&cl->free_list_spinlock);
 			new_rb->cl = cl;
 			new_rb->buf_idx = 0;
 			INIT_LIST_HEAD(&new_rb->list);
@@ -1024,8 +1016,7 @@ void recv_ishtp_cl_msg_dma(struct ishtp_device *dev, void *msg,
 
 			ishtp_hbm_cl_flow_control_req(dev, cl);
 		} else {
-			spin_unlock_irqrestore(&cl->free_list_spinlock,
-				flags);
+			spin_unlock(&cl->free_list_spinlock);
 		}
 
 		/* One more fragment in message (this is always last) */
@@ -1038,7 +1029,7 @@ void recv_ishtp_cl_msg_dma(struct ishtp_device *dev, void *msg,
 		break;
 	}
 
-	spin_unlock_irqrestore(&dev->read_list_spinlock, dev_flags);
+	spin_unlock_irqrestore(&dev->read_list_spinlock, flags);
 	/* If it's nobody's message, just read and discard it */
 	if (!buffer) {
 		dev_err(dev->devc, "Dropped Rx (DMA) msg - no request\n");