From patchwork Tue Jun 28 10:13:33 2016
X-Patchwork-Submitter: Julien Grall <julien.grall@arm.com>
X-Patchwork-Id: 70986
Subject: Re: [Xen-devel] [PATCH V3 09/10] xen/arm: io: Use binary search for
 mmio handler lookup
From: Julien Grall <julien.grall@arm.com>
To: Shanker Donthineni <shankerd@codeaurora.org>, xen-devel
 <xen-devel@lists.xensource.com>, Stefano Stabellini
Cc: Philip Elcan, Steve Capper, Vikram Sethi, Wei Chen
Date: Tue, 28 Jun 2016 11:13:33 +0100
Message-ID: <57724DCD.2050602@arm.com>
In-Reply-To: <1467059622-14786-9-git-send-email-shankerd@codeaurora.org>

Hi Shanker,

On 27/06/16 21:33, Shanker Donthineni wrote:
> As the number of I/O handlers increases, the overhead associated with
> linear lookup also increases. The system might have a maximum of 144
> (assuming CONFIG_NR_CPUS=128) mmio handlers. In the worst-case
> scenario, it would require 144 iterations to find a matching handler.
> Now it is time for us to change from a linear search (complexity O(n))
> to a binary search (complexity O(log n)) to reduce the mmio handler
> lookup overhead.

However, you will add contention because the code is using a spinlock.
I am planning to send the following patch as a prerequisite of this
series to switch from a spinlock to a read-write lock:

commit b69e975ce25b2c94f7205b0b8329f351327fbcf7
Author: Julien Grall <julien.grall@arm.com>
Date:   Tue Jun 28 11:04:11 2016 +0100

    xen/arm: io: Protect the handlers with a read-write lock

    Currently, accessing the I/O handlers does not require taking a lock
    because new handlers are always added at the end of the array. In a
    follow-up patch, this array will be sorted to optimize the lookup.

    Given that the I/O handlers will not be modified most of the time,
    using a spinlock adds contention when multiple vCPUs are accessing
    the emulated MMIO regions. So use a read-write lock to protect the
    handlers.

    Finally, take the opportunity to re-indent domain_io_init correctly.
    Signed-off-by: Julien Grall <julien.grall@arm.com>

diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index 0156755..5a96836 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -70,23 +70,39 @@ static int handle_write(const struct mmio_handler *handler, struct vcpu *v,
                         handler->priv);
 }
 
-int handle_mmio(mmio_info_t *info)
+static const struct mmio_handler *find_mmio_handler(struct domain *d,
+                                                    paddr_t gpa)
 {
-    struct vcpu *v = current;
-    int i;
-    const struct mmio_handler *handler = NULL;
-    const struct vmmio *vmmio = &v->domain->arch.vmmio;
+    const struct mmio_handler *handler;
+    unsigned int i;
+    struct vmmio *vmmio = &d->arch.vmmio;
+
+    read_lock(&vmmio->lock);
 
     for ( i = 0; i < vmmio->num_entries; i++ )
     {
         handler = &vmmio->handlers[i];
 
-        if ( (info->gpa >= handler->addr) &&
-             (info->gpa < (handler->addr + handler->size)) )
+        if ( (gpa >= handler->addr) &&
+             (gpa < (handler->addr + handler->size)) )
             break;
     }
 
     if ( i == vmmio->num_entries )
+        handler = NULL;
+
+    read_unlock(&vmmio->lock);
+
+    return handler;
+}
+
+int handle_mmio(mmio_info_t *info)
+{
+    struct vcpu *v = current;
+    const struct mmio_handler *handler = NULL;
+
+    handler = find_mmio_handler(v->domain, info->gpa);
+    if ( !handler )
         return 0;
 
     if ( info->dabt.write )
@@ -104,7 +120,7 @@ void register_mmio_handler(struct domain *d,
 
     BUG_ON(vmmio->num_entries >= MAX_IO_HANDLER);
 
-    spin_lock(&vmmio->lock);
+    write_lock(&vmmio->lock);
 
     handler = &vmmio->handlers[vmmio->num_entries];
 
@@ -113,24 +129,17 @@
     handler->size = size;
     handler->priv = priv;
 
-    /*
-     * handle_mmio is not using the lock to avoid contention.
-     * Make sure the other processors see the new handler before
-     * updating the number of entries
-     */
-    dsb(ish);
-
     vmmio->num_entries++;
 
-    spin_unlock(&vmmio->lock);
+    write_unlock(&vmmio->lock);
 }
 
 int domain_io_init(struct domain *d)
 {
-   spin_lock_init(&d->arch.vmmio.lock);
-   d->arch.vmmio.num_entries = 0;
+    rwlock_init(&d->arch.vmmio.lock);
+    d->arch.vmmio.num_entries = 0;
 
-   return 0;
+    return 0;
 }
 
 /*
diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
index da1cc2e..32f10f2 100644
--- a/xen/include/asm-arm/mmio.h
+++ b/xen/include/asm-arm/mmio.h
@@ -20,6 +20,7 @@
 #define __ASM_ARM_MMIO_H__
 
 #include <xen/lib.h>
+#include <xen/rwlock.h>
 #include <asm/processor.h>
 #include <asm/regs.h>
 
@@ -51,7 +52,7 @@ struct mmio_handler {
 
 struct vmmio {
     int num_entries;
-    spinlock_t lock;
+    rwlock_t lock;
     struct mmio_handler handlers[MAX_IO_HANDLER];
 };
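
FWIW, once the read-write lock is in place, I would expect the lookup in
your patch to end up looking something like the sketch below. This is
untested and only illustrative (it is not the patch under review); it
assumes register_mmio_handler keeps the handlers array sorted by base
address and that the registered ranges do not overlap:

static const struct mmio_handler *find_mmio_handler(struct domain *d,
                                                    paddr_t gpa)
{
    struct vmmio *vmmio = &d->arch.vmmio;
    const struct mmio_handler *handler = NULL;
    unsigned int low = 0, high;

    read_lock(&vmmio->lock);

    high = vmmio->num_entries;

    while ( low < high )
    {
        unsigned int mid = low + (high - low) / 2;
        const struct mmio_handler *h = &vmmio->handlers[mid];

        if ( gpa < h->addr )
            high = mid;        /* gpa is below this range */
        else if ( gpa >= (h->addr + h->size) )
            low = mid + 1;     /* gpa is above this range */
        else
        {
            /* gpa falls within [addr, addr + size) */
            handler = h;
            break;
        }
    }

    read_unlock(&vmmio->lock);

    return handler;
}

That keeps the lookup O(log n), and readers only ever contend with the
(rare) writers in register_mmio_handler rather than with each other.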