From patchwork Wed Apr 29 12:25:38 2015
X-Patchwork-Submitter: "Jon Medhurst (Tixy)" <tixy@linaro.org>
X-Patchwork-Id: 47710
Message-ID: <1430310338.27241.45.camel@linaro.org>
Subject: Re: [PATCH 1/4] mailbox: add support for System Control and Power Interface(SCPI) protocol
From: "Jon Medhurst (Tixy)" <tixy@linaro.org>
To: Sudeep Holla <sudeep.holla@arm.com>
Cc: linux-kernel@vger.kernel.org, Liviu Dudau, Lorenzo Pieralisi,
 Rob Herring, Mark Rutland, Jassi Brar, devicetree@vger.kernel.org
Date: Wed, 29 Apr 2015 13:25:38 +0100
In-Reply-To: <1430307828.27241.32.camel@linaro.org>
References: <1430134846-24320-1-git-send-email-sudeep.holla@arm.com>
 <1430134846-24320-2-git-send-email-sudeep.holla@arm.com>
 <1430229283.3321.40.camel@linaro.org> <5540B840.1030900@arm.com>
 <1430307828.27241.32.camel@linaro.org>

On Wed, 2015-04-29 at 12:43 +0100, Jon Medhurst (Tixy) wrote:
> On Wed, 2015-04-29 at 11:53 +0100, Sudeep Holla wrote:
> > On 28/04/15 14:54, Jon Medhurst (Tixy) wrote:
> > > On Mon, 2015-04-27 at 12:40 +0100, Sudeep Holla wrote:
> [...]
> > >> +	int ret;
> > >> +	u8 token, chan;
> > >> +	struct scpi_xfer *msg;
> > >> +	struct scpi_chan *scpi_chan;
> > >> +
> > >> +	chan = atomic_inc_return(&scpi_info->next_chan) % scpi_info->num_chans;
> > >> +	scpi_chan = scpi_info->channels + chan;
> > >> +
> > >> +	msg = get_scpi_xfer(scpi_chan);
> > >> +	if (!msg)
> > >> +		return -ENOMEM;
> > >> +
> > >> +	token = atomic_inc_return(&scpi_chan->token) & CMD_TOKEN_ID_MASK;
> > >
> > > So, this 8-bit token is what's used to 'uniquely' identify a pending
> > > command. But as it's just an incrementing value, if one command gets
> > > delayed for long enough that 256 more are issued then we will have a
> > > non-unique value and scpi_process_cmd() can go wrong.
> >
> > IMO, by the time 256 messages are queued up and serviced we would
> > have timed out on the initial command. Moreover, the core mailbox has
> > set the queue length to 20 (MBOX_TX_QUEUE_LEN), which would need to
> > be removed to have even a remote chance of hitting the corner case.
>
> The corner case can be hit even if the queue length is only 2, because
> other processes/cpus can use the other message we don't own here, and
> they can send then receive a message using that, 256 times. The corner
> case doesn't require 256 simultaneous outstanding requests.
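To make that wrap-around concrete, here's a minimal stand-alone sketch
(plain user-space C, not driver code; it assumes CMD_TOKEN_ID_MASK is
0xff, i.e. an 8-bit token, and the variable names are made up for
illustration). One request stays pending while other callers complete
256 send/receive cycles, and the 256th allocation hands out the pending
request's token again:

/*
 * Demonstrates the token collision: an 8-bit token taken from a
 * free-running counter repeats after 256 allocations, however few
 * requests are in flight at any one time.
 */
#include <stdio.h>

#define CMD_TOKEN_ID_MASK 0xff	/* assumed 8-bit token field */

int main(void)
{
	unsigned int counter = 0;
	unsigned char pending, reused = 0;
	int i;

	pending = ++counter & CMD_TOKEN_ID_MASK;	/* stalled request's token */
	for (i = 0; i < 256; i++)			/* other callers come and go */
		reused = ++counter & CMD_TOKEN_ID_MASK;

	/* prints: pending=1 reused=1 collision=yes */
	printf("pending=%u reused=%u collision=%s\n",
	       pending, reused, pending == reused ? "yes" : "no");
	return 0;
}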
> That is the reason I suggested that, rather than using an
> incrementing value for the 'unique' token, each message should
> instead contain the value of the token to use with it.

Of course, I failed to mention that this solution to the problem makes
things worse for the situation where we time out messages, because the
same token will get reused more quickly in that case. So, in practice,
if we have timeouts and an unchangeable protocol limit of 256 tokens,
then using those tokens in order, one per message sent, is probably the
best we can do.

Perhaps that's the answer: generate and add the token to the message
just before transmission via the MHU, at a point where we know no other
request can overtake us. In scpi_tx_prepare()?

Perhaps it might also be good to only use up a token if we are
expecting a response, and to use zero for other messages. Something
like this totally untested patch...

diff --git a/drivers/mailbox/scpi_protocol.c b/drivers/mailbox/scpi_protocol.c
index c74575b..5818d9b 100644
--- a/drivers/mailbox/scpi_protocol.c
+++ b/drivers/mailbox/scpi_protocol.c
@@ -286,14 +286,23 @@ static void scpi_tx_prepare(struct mbox_client *c, void *msg)
 	unsigned long flags;
 	struct scpi_xfer *t = msg;
 	struct scpi_chan *ch = container_of(c, struct scpi_chan, cl);
 	struct scpi_shared_mem *mem = (struct scpi_shared_mem *)ch->tx_payload;
 
-	mem->command = cpu_to_le32(t->cmd);
 	if (t->tx_buf)
 		memcpy_toio(mem->payload, t->tx_buf, t->tx_len);
 	if (t->rx_buf) {
+		int token;
 		spin_lock_irqsave(&ch->rx_lock, flags);
+		/*
+		 * Presumably we can do this token setting outside the
+		 * spinlock and still be safe from concurrency?
+		 */
+		do
+			token = (++ch->token) & CMD_TOKEN_ID_MASK;
+		while (!token);
+		t->cmd |= token << CMD_TOKEN_ID_SHIFT;
 		list_add_tail(&t->node, &ch->rx_pending);
 		spin_unlock_irqrestore(&ch->rx_lock, flags);
 	}
+	mem->command = cpu_to_le32(t->cmd);
 }
 
 static struct scpi_xfer *get_scpi_xfer(struct scpi_chan *ch)
@@ -322,7 +331,7 @@ static int
 scpi_send_message(u8 cmd, void *tx_buf, unsigned int len, void *rx_buf)
 {
 	int ret;
-	u8 token, chan;
+	u8 chan;
 	struct scpi_xfer *msg;
 	struct scpi_chan *scpi_chan;
 
@@ -333,10 +342,8 @@ scpi_send_message(u8 cmd, void *tx_buf, unsigned int len, void *rx_buf)
 	if (!msg)
 		return -ENOMEM;
 
-	token = atomic_inc_return(&scpi_chan->token) & CMD_TOKEN_ID_MASK;
-
 	msg->slot = BIT(SCPI_SLOT);
-	msg->cmd = PACK_SCPI_CMD(cmd, token, len);
+	msg->cmd = PACK_SCPI_CMD(cmd, 0, len);
 	msg->tx_buf = tx_buf;
 	msg->tx_len = len;
 	msg->rx_buf = rx_buf;
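For what it's worth, the allocation behaviour the patch is aiming for
can be sketched stand-alone like this (hypothetical names, plain
user-space C; in the patch itself the increment runs under ch->rx_lock,
so no later message can overtake it):

/*
 * Sketch of the proposed token allocation: bump a per-channel
 * counter, mask to 8 bits, and skip 0 so it stays reserved for
 * commands that expect no response.
 */
#include <stdio.h>

#define CMD_TOKEN_ID_MASK 0xff	/* assumed 8-bit token field */

static unsigned int chan_token;	/* per-channel counter (locking omitted) */

static unsigned char alloc_token(void)
{
	unsigned char token;

	do
		token = ++chan_token & CMD_TOKEN_ID_MASK;
	while (!token);		/* 0 means "no response expected" */
	return token;
}

int main(void)
{
	unsigned char a, b, c;

	chan_token = 254;	/* force a wrap */
	a = alloc_token();	/* 255 */
	b = alloc_token();	/* wraps past 0, yields 1 */
	c = alloc_token();	/* 2 */
	printf("%u %u %u\n", a, b, c);
	return 0;
}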