From patchwork Fri Sep 9 14:00:41 2016
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 75896
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: edk2-devel@lists.01.org, liming.gao@intel.com, leif.lindholm@linaro.org,
 michael.d.kinney@intel.com
Cc: Ard Biesheuvel
Date: Fri, 9 Sep 2016 15:00:41 +0100
Message-Id: <1473429644-13480-2-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1473429644-13480-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1473429644-13480-1-git-send-email-ard.biesheuvel@linaro.org>
Subject: [edk2] [PATCH v5 1/4] MdePkg/BaseMemoryLib: widen aligned accesses to 32 or 64 bits
List-Id: EDK II Development
Sender: "edk2-devel" <edk2-devel-bounces@lists.01.org>

Since the default BaseMemoryLib should be callable from any context,
including ones where unaligned accesses are not allowed, it implements
InternalCopyMem() and InternalSetMem() using byte accesses only.

However, especially in a context where the MMU is off, such narrow
accesses may be disproportionately costly, so if the size and alignment
of the access allow it, use 32-bit or even 64-bit loads and stores (the
latter may be beneficial even on a 32-bit architecture like ARM, which
has load pair/store pair instructions).

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Liming Gao <liming.gao@intel.com>
---
 MdePkg/Library/BaseMemoryLib/BaseMemoryLib.inf |   2 +-
 MdePkg/Library/BaseMemoryLib/CopyMem.c         | 112 ++++++++++++++++++--
 MdePkg/Library/BaseMemoryLib/SetMem.c          |  40 ++++++-
 3 files changed, 140 insertions(+), 14 deletions(-)

-- 
2.7.4

diff --git a/MdePkg/Library/BaseMemoryLib/BaseMemoryLib.inf b/MdePkg/Library/BaseMemoryLib/BaseMemoryLib.inf
index 6d906e93faf3..358eeed4f449 100644
--- a/MdePkg/Library/BaseMemoryLib/BaseMemoryLib.inf
+++ b/MdePkg/Library/BaseMemoryLib/BaseMemoryLib.inf
@@ -26,7 +26,7 @@ [Defines]
 
 #
-# VALID_ARCHITECTURES = IA32 X64 IPF EBC
+# VALID_ARCHITECTURES = IA32 X64 IPF EBC ARM AARCH64
 #
 
 [Sources]
diff --git a/MdePkg/Library/BaseMemoryLib/CopyMem.c b/MdePkg/Library/BaseMemoryLib/CopyMem.c
index 37f03660df5f..6f4fd900df5d 100644
--- a/MdePkg/Library/BaseMemoryLib/CopyMem.c
+++ b/MdePkg/Library/BaseMemoryLib/CopyMem.c
@@ -4,6 +4,9 @@
   particular platform easily if an optimized version is desired.
 
   Copyright (c) 2006 - 2010, Intel Corporation. All rights reserved.
+ Copyright (c) 2012 - 2013, ARM Ltd. All rights reserved.
+ Copyright (c) 2016, Linaro Ltd. All rights reserved.
+
   This program and the accompanying materials
   are licensed and made available under the terms and conditions of the BSD License
   which accompanies this distribution.  The full text of the license may be found at
@@ -44,18 +47,107 @@ InternalMemCopyMem (
   //
   volatile UINT8        *Destination8;
   CONST UINT8           *Source8;
+  volatile UINT32       *Destination32;
+  CONST UINT32          *Source32;
+  volatile UINT64       *Destination64;
+  CONST UINT64          *Source64;
+  UINTN                 Alignment;
+
+  if ((((UINTN)DestinationBuffer & 0x7) == 0) && (((UINTN)SourceBuffer & 0x7) == 0) && (Length >= 8)) {
+    if (SourceBuffer > DestinationBuffer) {
+      Destination64 = (UINT64*)DestinationBuffer;
+      Source64 = (CONST UINT64*)SourceBuffer;
+      while (Length >= 8) {
+        *(Destination64++) = *(Source64++);
+        Length -= 8;
+      }
+
+      // Finish if there are still some bytes to copy
+      Destination8 = (UINT8*)Destination64;
+      Source8 = (CONST UINT8*)Source64;
+      while (Length-- != 0) {
+        *(Destination8++) = *(Source8++);
+      }
+    } else if (SourceBuffer < DestinationBuffer) {
+      Destination64 = (UINT64*)((UINTN)DestinationBuffer + Length);
+      Source64 = (CONST UINT64*)((UINTN)SourceBuffer + Length);
+
+      // Destination64 and Source64 were aligned on a 64-bit boundary
+      // but if length is not a multiple of 8 bytes then they won't be
+      // anymore.
+
+      Alignment = Length & 0x7;
+      if (Alignment != 0) {
+        Destination8 = (UINT8*)Destination64;
+        Source8 = (CONST UINT8*)Source64;
+
+        while (Alignment-- != 0) {
+          *(--Destination8) = *(--Source8);
+          --Length;
+        }
+        Destination64 = (UINT64*)Destination8;
+        Source64 = (CONST UINT64*)Source8;
+      }
+
+      while (Length > 0) {
+        *(--Destination64) = *(--Source64);
+        Length -= 8;
+      }
+    }
+  } else if ((((UINTN)DestinationBuffer & 0x3) == 0) && (((UINTN)SourceBuffer & 0x3) == 0) && (Length >= 4)) {
+    if (SourceBuffer > DestinationBuffer) {
+      Destination32 = (UINT32*)DestinationBuffer;
+      Source32 = (CONST UINT32*)SourceBuffer;
+      while (Length >= 4) {
+        *(Destination32++) = *(Source32++);
+        Length -= 4;
+      }
+
+      // Finish if there are still some bytes to copy
+      Destination8 = (UINT8*)Destination32;
+      Source8 = (CONST UINT8*)Source32;
+      while (Length-- != 0) {
+        *(Destination8++) = *(Source8++);
+      }
+    } else if (SourceBuffer < DestinationBuffer) {
+      Destination32 = (UINT32*)((UINTN)DestinationBuffer + Length);
+      Source32 = (CONST UINT32*)((UINTN)SourceBuffer + Length);
+
+      // Destination32 and Source32 were aligned on a 32-bit boundary
+      // but if length is not a multiple of 4 bytes then they won't be
+      // anymore.
+
+      Alignment = Length & 0x3;
+      if (Alignment != 0) {
+        Destination8 = (UINT8*)Destination32;
+        Source8 = (CONST UINT8*)Source32;
+
+        while (Alignment-- != 0) {
+          *(--Destination8) = *(--Source8);
+          --Length;
+        }
+        Destination32 = (UINT32*)Destination8;
+        Source32 = (CONST UINT32*)Source8;
+      }
 
-  if (SourceBuffer > DestinationBuffer) {
-    Destination8 = (UINT8*)DestinationBuffer;
-    Source8 = (CONST UINT8*)SourceBuffer;
-    while (Length-- != 0) {
-      *(Destination8++) = *(Source8++);
+      while (Length > 0) {
+        *(--Destination32) = *(--Source32);
+        Length -= 4;
+      }
     }
-  } else if (SourceBuffer < DestinationBuffer) {
-    Destination8 = (UINT8*)DestinationBuffer + Length;
-    Source8 = (CONST UINT8*)SourceBuffer + Length;
-    while (Length-- != 0) {
-      *(--Destination8) = *(--Source8);
+  } else {
+    if (SourceBuffer > DestinationBuffer) {
+      Destination8 = (UINT8*)DestinationBuffer;
+      Source8 = (CONST UINT8*)SourceBuffer;
+      while (Length-- != 0) {
+        *(Destination8++) = *(Source8++);
+      }
+    } else if (SourceBuffer < DestinationBuffer) {
+      Destination8 = (UINT8*)DestinationBuffer + Length;
+      Source8 = (CONST UINT8*)SourceBuffer + Length;
+      while (Length-- != 0) {
+        *(--Destination8) = *(--Source8);
+      }
     }
   }
   return DestinationBuffer;
diff --git a/MdePkg/Library/BaseMemoryLib/SetMem.c b/MdePkg/Library/BaseMemoryLib/SetMem.c
index 5e74085c56f0..b6fb811c388a 100644
--- a/MdePkg/Library/BaseMemoryLib/SetMem.c
+++ b/MdePkg/Library/BaseMemoryLib/SetMem.c
@@ -5,6 +5,9 @@
   is desired.
 
   Copyright (c) 2006 - 2010, Intel Corporation. All rights reserved.
+ Copyright (c) 2012 - 2013, ARM Ltd. All rights reserved.
+ Copyright (c) 2016, Linaro Ltd. All rights reserved.
+
   This program and the accompanying materials
   are licensed and made available under the terms and conditions of the BSD License
   which accompanies this distribution.  The full text of the license may be found at
@@ -43,11 +46,42 @@ InternalMemSetMem (
   //
   // volatile to prevent the optimizer from replacing this function with
   // the intrinsic memset()
   //
-  volatile UINT8   *Pointer;
+  volatile UINT8   *Pointer8;
+  volatile UINT32  *Pointer32;
+  volatile UINT64  *Pointer64;
+  UINT32           Value32;
+  UINT64           Value64;
+
+  if ((((UINTN)Buffer & 0x7) == 0) && (Length >= 8)) {
+    // Generate the 64bit value
+    Value32 = (Value << 24) | (Value << 16) | (Value << 8) | Value;
+    Value64 = LShiftU64 (Value32, 32) | Value32;
+
+    Pointer64 = (UINT64*)Buffer;
+    while (Length >= 8) {
+      *(Pointer64++) = Value64;
+      Length -= 8;
+    }
 
-  Pointer = (UINT8*)Buffer;
+    // Finish with bytes if needed
+    Pointer8 = (UINT8*)Pointer64;
+  } else if ((((UINTN)Buffer & 0x3) == 0) && (Length >= 4)) {
+    // Generate the 32bit value
+    Value32 = (Value << 24) | (Value << 16) | (Value << 8) | Value;
+
+    Pointer32 = (UINT32*)Buffer;
+    while (Length >= 4) {
+      *(Pointer32++) = Value32;
+      Length -= 4;
+    }
+
+    // Finish with bytes if needed
+    Pointer8 = (UINT8*)Pointer32;
+  } else {
+    Pointer8 = (UINT8*)Buffer;
+  }
   while (Length-- > 0) {
-    *(Pointer++) = Value;
+    *(Pointer8++) = Value;
   }
   return Buffer;
 }