From patchwork Thu Jul 4 07:35:25 2024
X-Patchwork-Submitter: Sughosh Ganu
X-Patchwork-Id: 809936
From: Sughosh Ganu
To: u-boot@lists.denx.de
Cc: Tom Rini, Ilias Apalodimas, Heinrich Schuchardt, Simon Glass, Marek Vasut, Mark Kettenis, Fabio Estevam, Michal Simek, Sughosh Ganu
Subject: [RFC PATCH v2 29/48] test: lmb: tweak the tests for the persistent lmb memory map
Date: Thu, 4 Jul 2024 13:05:25 +0530
Message-Id: <20240704073544.670249-30-sughosh.ganu@linaro.org>
In-Reply-To: <20240704073544.670249-1-sughosh.ganu@linaro.org>
References: <20240704073544.670249-1-sughosh.ganu@linaro.org>

The LMB memory maps are now persistent, with alist (allocated list)
structures used to keep track of the available and used memory. Make the
corresponding changes in the test functions so that the list information
can be accessed by the tests and checked against the expected values.
Also introduce functions to initialise and clean up the lists. These
functions will be invoked from every test so that each test starts the
memory map from a clean slate.

Signed-off-by: Sughosh Ganu
---
Changes since V1:
* Make changes to the lmb_init() function for working with alist-based
  lmb lists.
* Add an lmb_uninit() function to clear out the lists.
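A note for reviewers: every converted test now wraps its body in the same
open/assert/close pattern around its own pair of alists. The sketch below
shows that pattern in isolation; the test name lib_test_lmb_example is made
up for illustration, everything else follows the hunks in this patch:

static int lib_test_lmb_example(struct unit_test_state *uts)
{
	struct alist *mem_lst, *used_lst;
	struct lmb_region *mem;

	/* start from a clean, empty memory map */
	ut_asserteq(lmb_init(&mem_lst, &used_lst), 0);
	mem = mem_lst->data;

	/* drive the map through the usual lmb calls... */
	ut_asserteq(lmb_add(0x40000000, 0x20000000), 0);

	/* ...and check it through the alists instead of the old global */
	ut_asserteq(mem_lst->count, 1);
	ut_asserteq(mem[0].base, 0x40000000);
	ASSERT_LMB(mem_lst, used_lst, 0x40000000, 0x20000000, 0,
		   0, 0, 0, 0, 0, 0);

	/* drop the lists so the next test starts from scratch */
	lmb_uninit(mem_lst, used_lst);

	return 0;
}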

 include/lmb.h  |   7 ++
 lib/lmb.c      |  32 ++++++
 test/lib/lmb.c | 292 ++++++++++++++++++++++++++++++-------------------
 3 files changed, 219 insertions(+), 112 deletions(-)

diff --git a/include/lmb.h b/include/lmb.h
index cc4cf9f3c8..d9d5e3f3cb 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -13,6 +13,8 @@
  * Copyright (C) 2001 Peter Bergner, IBM Corp.
  */
 
+struct alist;
+
 /**
  * enum lmb_flags - definition of memory region attributes
  * @LMB_NONE: no special request
@@ -110,6 +112,11 @@ void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align);
  */
 int lmb_mem_regions_init(void);
 
+#if CONFIG_IS_ENABLED(UT_LMB)
+int lmb_init(struct alist **mem_lst, struct alist **used_lst);
+void lmb_uninit(struct alist *mem_lst, struct alist *used_lst);
+#endif /* UT_LMB */
+
 #endif /* __KERNEL__ */
 
 #endif /* _LINUX_LMB_H */
diff --git a/lib/lmb.c b/lib/lmb.c
index 7cb97e2d42..4480710d73 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -763,3 +763,35 @@ int initr_lmb(void)
 
 	return ret;
 }
+
+#if CONFIG_IS_ENABLED(UT_LMB)
+int lmb_init(struct alist **mem_lst, struct alist **used_lst)
+{
+	bool ret;
+
+	ret = alist_init(&lmb_free_mem, sizeof(struct lmb_region),
+			 (uint)LMB_ALIST_INITIAL_SIZE);
+	if (!ret) {
+		log_debug("Unable to initialise the list for LMB free memory\n");
+		return -1;
+	}
+
+	ret = alist_init(&lmb_used_mem, sizeof(struct lmb_region),
+			 (uint)LMB_ALIST_INITIAL_SIZE);
+	if (!ret) {
+		log_debug("Unable to initialise the list for LMB used memory\n");
+		return -1;
+	}
+
+	*mem_lst = &lmb_free_mem;
+	*used_lst = &lmb_used_mem;
+
+	return 0;
+}
+
+void lmb_uninit(struct alist *mem_lst, struct alist *used_lst)
+{
+	alist_uninit(mem_lst);
+	alist_uninit(used_lst);
+}
+#endif /* UT_LMB */
diff --git a/test/lib/lmb.c b/test/lib/lmb.c
index a3a7ad904c..50f32793bc 100644
--- a/test/lib/lmb.c
+++ b/test/lib/lmb.c
@@ -3,6 +3,7 @@
  * (C) Copyright 2018 Simon Goldschmidt
  */
 
+#include 
 #include 
 #include 
 #include 
@@ -12,52 +13,51 @@
 #include 
 #include 
 
-extern struct lmb lmb;
-
-static inline bool lmb_is_nomap(struct lmb_property *m)
+static inline bool lmb_is_nomap(struct lmb_region *m)
 {
 	return m->flags & LMB_NOMAP;
 }
 
-static int check_lmb(struct unit_test_state *uts, struct lmb *lmb,
-		     phys_addr_t ram_base, phys_size_t ram_size,
-		     unsigned long num_reserved,
+static int check_lmb(struct unit_test_state *uts, struct alist *mem_lst,
+		     struct alist *used_lst, phys_addr_t ram_base,
+		     phys_size_t ram_size, unsigned long num_reserved,
 		     phys_addr_t base1, phys_size_t size1,
 		     phys_addr_t base2, phys_size_t size2,
 		     phys_addr_t base3, phys_size_t size3)
 {
+	struct lmb_region *mem, *used;
+
+	mem = mem_lst->data;
+	used = used_lst->data;
+
 	if (ram_size) {
-		ut_asserteq(lmb->memory.cnt, 1);
-		ut_asserteq(lmb->memory.region[0].base, ram_base);
-		ut_asserteq(lmb->memory.region[0].size, ram_size);
+		ut_asserteq(mem_lst->count, 1);
+		ut_asserteq(mem[0].base, ram_base);
+		ut_asserteq(mem[0].size, ram_size);
 	}
 
-	ut_asserteq(lmb->reserved.cnt, num_reserved);
+	ut_asserteq(used_lst->count, num_reserved);
 	if (num_reserved > 0) {
-		ut_asserteq(lmb->reserved.region[0].base, base1);
-		ut_asserteq(lmb->reserved.region[0].size, size1);
+		ut_asserteq(used[0].base, base1);
+		ut_asserteq(used[0].size, size1);
 	}
 	if (num_reserved > 1) {
-		ut_asserteq(lmb->reserved.region[1].base, base2);
-		ut_asserteq(lmb->reserved.region[1].size, size2);
+		ut_asserteq(used[1].base, base2);
+		ut_asserteq(used[1].size, size2);
 	}
 	if (num_reserved > 2) {
-		ut_asserteq(lmb->reserved.region[2].base, base3);
-		ut_asserteq(lmb->reserved.region[2].size, size3);
+		ut_asserteq(used[2].base, base3);
+		ut_asserteq(used[2].size, size3);
 	}
 	return 0;
 }
 
-#define ASSERT_LMB(lmb, ram_base, ram_size, num_reserved, base1, size1, \
+#define ASSERT_LMB(mem_lst, used_lst, ram_base, ram_size, num_reserved, base1, size1, \
 		   base2, size2, base3, size3) \
-	ut_assert(!check_lmb(uts, lmb, ram_base, ram_size, \
+	ut_assert(!check_lmb(uts, mem_lst, used_lst, ram_base, ram_size, \
 			     num_reserved, base1, size1, base2, size2, base3, \
 			     size3))
 
-/*
- * Test helper function that reserves 64 KiB somewhere in the simulated RAM and
- * then does some alloc + free tests.
- */
 static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram,
 			    const phys_size_t ram_size, const phys_addr_t ram0,
 			    const phys_size_t ram0_size,
@@ -67,6 +67,8 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram,
 	const phys_addr_t alloc_64k_end = alloc_64k_addr + 0x10000;
 	long ret;
+	struct alist *mem_lst, *used_lst;
+	struct lmb_region *mem, *used;
 	phys_addr_t a, a2, b, b2, c, d;
 
 	/* check for overflow */
@@ -76,6 +78,10 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram,
 	ut_assert(alloc_64k_addr >= ram + 8);
 	ut_assert(alloc_64k_end <= ram_end - 8);
 
+	ut_asserteq(lmb_init(&mem_lst, &used_lst), 0);
+	mem = mem_lst->data;
+	used = used_lst->data;
+
 	if (ram0_size) {
 		ret = lmb_add(ram0, ram0_size);
 		ut_asserteq(ret, 0);
@@ -85,95 +91,97 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram,
 	ut_asserteq(ret, 0);
 
 	if (ram0_size) {
-		ut_asserteq(lmb.memory.cnt, 2);
-		ut_asserteq(lmb.memory.region[0].base, ram0);
-		ut_asserteq(lmb.memory.region[0].size, ram0_size);
-		ut_asserteq(lmb.memory.region[1].base, ram);
-		ut_asserteq(lmb.memory.region[1].size, ram_size);
+		ut_asserteq(mem_lst->count, 2);
+		ut_asserteq(mem[0].base, ram0);
+		ut_asserteq(mem[0].size, ram0_size);
+		ut_asserteq(mem[1].base, ram);
+		ut_asserteq(mem[1].size, ram_size);
 	} else {
-		ut_asserteq(lmb.memory.cnt, 1);
-		ut_asserteq(lmb.memory.region[0].base, ram);
-		ut_asserteq(lmb.memory.region[0].size, ram_size);
+		ut_asserteq(mem_lst->count, 1);
+		ut_asserteq(mem[0].base, ram);
+		ut_asserteq(mem[0].size, ram_size);
 	}
 
 	/* reserve 64KiB somewhere */
 	ret = lmb_reserve(alloc_64k_addr, 0x10000);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, 0, 0, 1, alloc_64k_addr, 0x10000,
+	ASSERT_LMB(mem_lst, used_lst, 0, 0, 1, alloc_64k_addr, 0x10000,
 		   0, 0, 0, 0);
 
 	/* allocate somewhere, should be at the end of RAM */
 	a = lmb_alloc(4, 1);
 	ut_asserteq(a, ram_end - 4);
-	ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr, 0x10000,
+	ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr, 0x10000,
 		   ram_end - 4, 4, 0, 0);
 	/* alloc below end of reserved region -> below reserved region */
 	b = lmb_alloc_base(4, 1, alloc_64k_end);
 	ut_asserteq(b, alloc_64k_addr - 4);
-	ASSERT_LMB(&lmb, 0, 0, 2,
+	ASSERT_LMB(mem_lst, used_lst, 0, 0, 2,
 		   alloc_64k_addr - 4, 0x10000 + 4, ram_end - 4, 4, 0, 0);
 
 	/* 2nd time */
 	c = lmb_alloc(4, 1);
 	ut_asserteq(c, ram_end - 8);
-	ASSERT_LMB(&lmb, 0, 0, 2,
+	ASSERT_LMB(mem_lst, used_lst, 0, 0, 2,
 		   alloc_64k_addr - 4, 0x10000 + 4, ram_end - 8, 8, 0, 0);
 	d = lmb_alloc_base(4, 1, alloc_64k_end);
 	ut_asserteq(d, alloc_64k_addr - 8);
-	ASSERT_LMB(&lmb, 0, 0, 2,
+	ASSERT_LMB(mem_lst, used_lst, 0, 0, 2,
 		   alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 8, 0, 0);
 
 	ret = lmb_free(a, 4);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, 0, 0, 2,
+	ASSERT_LMB(mem_lst, used_lst, 0, 0, 2,
 		   alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0);
 	/* allocate again to ensure we get the same address */
 	a2 = lmb_alloc(4, 1);
 	ut_asserteq(a, a2);
-	ASSERT_LMB(&lmb, 0, 0, 2,
+	ASSERT_LMB(mem_lst, used_lst, 0, 0, 2,
 		   alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 8, 0, 0);
 	ret = lmb_free(a2, 4);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, 0, 0, 2,
+	ASSERT_LMB(mem_lst, used_lst, 0, 0, 2,
 		   alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0);
 
 	ret = lmb_free(b, 4);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, 0, 0, 3,
+	ASSERT_LMB(mem_lst, used_lst, 0, 0, 3,
 		   alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000,
 		   ram_end - 8, 4);
 	/* allocate again to ensure we get the same address */
 	b2 = lmb_alloc_base(4, 1, alloc_64k_end);
 	ut_asserteq(b, b2);
-	ASSERT_LMB(&lmb, 0, 0, 2,
+	ASSERT_LMB(mem_lst, used_lst, 0, 0, 2,
 		   alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0);
 	ret = lmb_free(b2, 4);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, 0, 0, 3,
+	ASSERT_LMB(mem_lst, used_lst, 0, 0, 3,
 		   alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000,
 		   ram_end - 8, 4);
 
 	ret = lmb_free(c, 4);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, 0, 0, 2,
+	ASSERT_LMB(mem_lst, used_lst, 0, 0, 2,
 		   alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, 0, 0);
 	ret = lmb_free(d, 4);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, 0, 0, 1, alloc_64k_addr, 0x10000,
+	ASSERT_LMB(mem_lst, used_lst, 0, 0, 1, alloc_64k_addr, 0x10000,
 		   0, 0, 0, 0);
 
 	if (ram0_size) {
-		ut_asserteq(lmb.memory.cnt, 2);
-		ut_asserteq(lmb.memory.region[0].base, ram0);
-		ut_asserteq(lmb.memory.region[0].size, ram0_size);
-		ut_asserteq(lmb.memory.region[1].base, ram);
-		ut_asserteq(lmb.memory.region[1].size, ram_size);
+		ut_asserteq(mem_lst->count, 2);
+		ut_asserteq(mem[0].base, ram0);
+		ut_asserteq(mem[0].size, ram0_size);
+		ut_asserteq(mem[1].base, ram);
+		ut_asserteq(mem[1].size, ram_size);
 	} else {
-		ut_asserteq(lmb.memory.cnt, 1);
-		ut_asserteq(lmb.memory.region[0].base, ram);
-		ut_asserteq(lmb.memory.region[0].size, ram_size);
+		ut_asserteq(mem_lst->count, 1);
+		ut_asserteq(mem[0].base, ram);
+		ut_asserteq(mem[0].size, ram_size);
 	}
 
+	lmb_uninit(mem_lst, used_lst);
+
 	return 0;
 }
@@ -228,45 +236,53 @@ static int test_bigblock(struct unit_test_state *uts, const phys_addr_t ram)
 	const phys_size_t big_block_size = 0x10000000;
 	const phys_addr_t ram_end = ram + ram_size;
 	const phys_addr_t alloc_64k_addr = ram + 0x10000000;
+	struct alist *mem_lst, *used_lst;
+	struct lmb_region *mem, *used;
 	long ret;
 	phys_addr_t a, b;
 
 	/* check for overflow */
 	ut_assert(ram_end == 0 || ram_end > ram);
 
+	ut_asserteq(lmb_init(&mem_lst, &used_lst), 0);
+	mem = mem_lst->data;
+	used = used_lst->data;
+
 	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
 
 	/* reserve 64KiB in the middle of RAM */
 	ret = lmb_reserve(alloc_64k_addr, 0x10000);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, alloc_64k_addr, 0x10000,
 		   0, 0, 0, 0);
 
 	/* allocate a big block, should be below reserved */
 	a = lmb_alloc(big_block_size, 1);
 	ut_asserteq(a, ram);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, a,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, a,
 		   big_block_size + 0x10000, 0, 0, 0, 0);
 	/* allocate 2nd big block */
 	/* This should fail, printing an error */
 	b = lmb_alloc(big_block_size, 1);
 	ut_asserteq(b, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, a,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, a,
 		   big_block_size + 0x10000, 0, 0, 0, 0);
 
 	ret = lmb_free(a, big_block_size);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, alloc_64k_addr, 0x10000,
 		   0, 0, 0, 0);
 
 	/* allocate too big block */
 	/* This should fail, printing an error */
 	a = lmb_alloc(ram_size, 1);
 	ut_asserteq(a, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, alloc_64k_addr, 0x10000,
 		   0, 0, 0, 0);
 
+	lmb_uninit(mem_lst, used_lst);
+
 	return 0;
 }
@@ -292,51 +308,62 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram,
 	const phys_addr_t ram_end = ram + ram_size;
 	long ret;
 	phys_addr_t a, b;
+	struct alist *mem_lst, *used_lst;
+	struct lmb_region *mem, *used;
 	const phys_addr_t alloc_size_aligned = (alloc_size + align - 1) & ~(align - 1);
 
 	/* check for overflow */
 	ut_assert(ram_end == 0 || ram_end > ram);
 
+	ut_asserteq(lmb_init(&mem_lst, &used_lst), 0);
+	mem = mem_lst->data;
+	used = used_lst->data;
+
 	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
 
 	/* allocate a block */
 	a = lmb_alloc(alloc_size, align);
 	ut_assert(a != 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned,
-		   alloc_size, 0, 0, 0, 0);
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1,
+		   ram + ram_size - alloc_size_aligned, alloc_size, 0, 0, 0, 0);
+
 	/* allocate another block */
 	b = lmb_alloc(alloc_size, align);
 	ut_assert(b != 0);
 	if (alloc_size == alloc_size_aligned) {
-		ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size -
+		ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram + ram_size -
 			   (alloc_size_aligned * 2), alloc_size * 2, 0, 0, 0, 0);
 	} else {
-		ASSERT_LMB(&lmb, ram, ram_size, 2, ram + ram_size -
+		ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, ram + ram_size -
 			   (alloc_size_aligned * 2), alloc_size,
 			   ram + ram_size - alloc_size_aligned, alloc_size, 0, 0);
 	}
 	/* and free them */
 	ret = lmb_free(b, alloc_size);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1,
+		   ram + ram_size - alloc_size_aligned,
 		   alloc_size, 0, 0, 0, 0);
 	ret = lmb_free(a, alloc_size);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
 
 	/* allocate a block with base*/
 	b = lmb_alloc_base(alloc_size, align, ram_end);
 	ut_assert(a == b);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1,
+		   ram + ram_size - alloc_size_aligned,
 		   alloc_size, 0, 0, 0, 0);
 	/* and free it */
 	ret = lmb_free(b, alloc_size);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
+
+	lmb_uninit(mem_lst, used_lst);
 
 	return 0;
 }
@@ -378,33 +405,41 @@ static int lib_test_lmb_at_0(struct unit_test_state *uts)
 {
 	const phys_addr_t ram = 0;
 	const phys_size_t ram_size = 0x20000000;
+	struct alist *mem_lst, *used_lst;
+	struct lmb_region *mem, *used;
 	long ret;
 	phys_addr_t a, b;
 
+	ut_asserteq(lmb_init(&mem_lst, &used_lst), 0);
+	mem = mem_lst->data;
+	used = used_lst->data;
+
 	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
 
 	/* allocate nearly everything */
 	a = lmb_alloc(ram_size - 4, 1);
 	ut_asserteq(a, ram + 4);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, a, ram_size - 4,
 		   0, 0, 0, 0);
 	/* allocate the rest */
 	/* This should fail as the allocated address would be 0 */
 	b = lmb_alloc(4, 1);
 	ut_asserteq(b, 0);
 	/* check that this was an error by checking lmb */
-	ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, a, ram_size - 4,
 		   0, 0, 0, 0);
 	/* check that this was an error by freeing b */
 	ret = lmb_free(b, 4);
 	ut_asserteq(ret, -1);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, a, ram_size - 4,
 		   0, 0, 0, 0);
 
 	ret = lmb_free(a, ram_size - 4);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
+
+	lmb_uninit(mem_lst, used_lst);
 
 	return 0;
 }
@@ -415,42 +450,51 @@ static int lib_test_lmb_overlapping_reserve(struct unit_test_state *uts)
 {
 	const phys_addr_t ram = 0x40000000;
 	const phys_size_t ram_size = 0x20000000;
+	struct alist *mem_lst, *used_lst;
+	struct lmb_region *mem, *used;
 	long ret;
 
+	ut_asserteq(lmb_init(&mem_lst, &used_lst), 0);
+	mem = mem_lst->data;
+	used = used_lst->data;
+
 	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
 
 	ret = lmb_reserve(0x40010000, 0x10000);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x10000,
 		   0, 0, 0, 0);
-	/* allocate overlapping region should fail */
+	/* allocate overlapping region should return the coalesced count */
 	ret = lmb_reserve(0x40011000, 0x10000);
-	ut_asserteq(ret, -1);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000,
+	ut_asserteq(ret, 1);
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x11000,
		   0, 0, 0, 0);
 	/* allocate 3nd region */
 	ret = lmb_reserve(0x40030000, 0x10000);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40010000, 0x10000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, 0x40010000, 0x11000,
 		   0x40030000, 0x10000, 0, 0);
 	/* allocate 2nd region , This should coalesced all region into one */
 	ret = lmb_reserve(0x40020000, 0x10000);
 	ut_assert(ret >= 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x30000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x30000,
 		   0, 0, 0, 0);
 
 	/* allocate 2nd region, which should be added as first region */
 	ret = lmb_reserve(0x40000000, 0x8000);
 	ut_assert(ret >= 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x8000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, 0x40000000, 0x8000,
 		   0x40010000, 0x30000, 0, 0);
 
 	/* allocate 3rd region, coalesce with first and overlap with second */
 	ret = lmb_reserve(0x40008000, 0x10000);
 	ut_assert(ret >= 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40000000, 0x40000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40000000, 0x40000,
 		   0, 0, 0, 0);
+
+	lmb_uninit(mem_lst, used_lst);
+
 	return 0;
 }
 LIB_TEST(lib_test_lmb_overlapping_reserve, 0);
@@ -461,6 +505,8 @@ LIB_TEST(lib_test_lmb_overlapping_reserve, 0);
  */
 static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram)
 {
+	struct lmb_region *mem, *used;
+	struct alist *mem_lst, *used_lst;
 	const phys_size_t ram_size = 0x20000000;
 	const phys_addr_t ram_end = ram + ram_size;
 	const phys_size_t alloc_addr_a = ram + 0x8000000;
@@ -472,6 +518,10 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram)
 	/* check for overflow */
 	ut_assert(ram_end == 0 || ram_end > ram);
 
+	ut_asserteq(lmb_init(&mem_lst, &used_lst), 0);
+	mem = mem_lst->data;
+	used = used_lst->data;
+
 	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
 
@@ -482,34 +532,34 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram)
 	ut_asserteq(ret, 0);
 	ret = lmb_reserve(alloc_addr_c, 0x10000);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 3, alloc_addr_a, 0x10000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 3, alloc_addr_a, 0x10000,
 		   alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
 
 	/* allocate blocks */
 	a = lmb_alloc_addr(ram, alloc_addr_a - ram);
 	ut_asserteq(a, ram);
-	ASSERT_LMB(&lmb, ram, ram_size, 3, ram, 0x8010000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 3, ram, 0x8010000,
 		   alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
 	b = lmb_alloc_addr(alloc_addr_a + 0x10000,
 			   alloc_addr_b - alloc_addr_a - 0x10000);
 	ut_asserteq(b, alloc_addr_a + 0x10000);
-	ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x10010000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, ram, 0x10010000,
 		   alloc_addr_c, 0x10000, 0, 0);
 	c = lmb_alloc_addr(alloc_addr_b + 0x10000,
 			   alloc_addr_c - alloc_addr_b - 0x10000);
 	ut_asserteq(c, alloc_addr_b + 0x10000);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, 0x18010000,
 		   0, 0, 0, 0);
 	d = lmb_alloc_addr(alloc_addr_c + 0x10000,
 			   ram_end - alloc_addr_c - 0x10000);
 	ut_asserteq(d, alloc_addr_c + 0x10000);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, ram_size,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, ram_size,
 		   0, 0, 0, 0);
 
 	/* allocating anything else should fail */
 	e = lmb_alloc(1, 1);
 	ut_asserteq(e, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, ram_size,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, ram_size,
 		   0, 0, 0, 0);
 
 	ret = lmb_free(d, ram_end - alloc_addr_c - 0x10000);
@@ -519,39 +569,39 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram)
 
 	d = lmb_alloc_addr(ram_end - 4, 4);
 	ut_asserteq(d, ram_end - 4);
-	ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x18010000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, ram, 0x18010000,
 		   d, 4, 0, 0);
 	ret = lmb_free(d, 4);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, 0x18010000,
 		   0, 0, 0, 0);
 
 	d = lmb_alloc_addr(ram_end - 128, 4);
 	ut_asserteq(d, ram_end - 128);
-	ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x18010000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, ram, 0x18010000,
 		   d, 4, 0, 0);
 	ret = lmb_free(d, 4);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, 0x18010000,
 		   0, 0, 0, 0);
 
 	d = lmb_alloc_addr(alloc_addr_c + 0x10000, 4);
 	ut_asserteq(d, alloc_addr_c + 0x10000);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010004,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, 0x18010004,
 		   0, 0, 0, 0);
 	ret = lmb_free(d, 4);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, 0x18010000,
 		   0, 0, 0, 0);
 
 	/* allocate at the bottom */
 	ret = lmb_free(a, alloc_addr_a - ram);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, ram + 0x8000000, 0x10010000,
-		   0, 0, 0, 0);
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram + 0x8000000,
		   0x10010000, 0, 0, 0, 0);
 
 	d = lmb_alloc_addr(ram, 4);
 	ut_asserteq(d, ram);
-	ASSERT_LMB(&lmb, ram, ram_size, 2, d, 4,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, d, 4,
 		   ram + 0x8000000, 0x10010000, 0, 0);
 
 	/* check that allocating outside memory fails */
@@ -564,6 +614,8 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram)
 		ut_asserteq(ret, 0);
 	}
 
+	lmb_uninit(mem_lst, used_lst);
+
 	return 0;
 }
@@ -585,6 +637,8 @@ LIB_TEST(lib_test_lmb_alloc_addr, 0);
 static int test_get_unreserved_size(struct unit_test_state *uts,
 				    const phys_addr_t ram)
 {
+	struct lmb_region *mem, *used;
+	struct alist *mem_lst, *used_lst;
 	const phys_size_t ram_size = 0x20000000;
 	const phys_addr_t ram_end = ram + ram_size;
 	const phys_size_t alloc_addr_a = ram + 0x8000000;
@@ -596,6 +650,10 @@ static int test_get_unreserved_size(struct unit_test_state *uts,
 	/* check for overflow */
 	ut_assert(ram_end == 0 || ram_end > ram);
 
+	ut_asserteq(lmb_init(&mem_lst, &used_lst), 0);
+	mem = mem_lst->data;
+	used = used_lst->data;
+
 	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
 
@@ -606,7 +664,7 @@ static int test_get_unreserved_size(struct unit_test_state *uts,
 	ut_asserteq(ret, 0);
 	ret = lmb_reserve(alloc_addr_c, 0x10000);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 3, alloc_addr_a, 0x10000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 3, alloc_addr_a, 0x10000,
 		   alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
 
 	/* check addresses in between blocks */
@@ -631,6 +689,8 @@ static int test_get_unreserved_size(struct unit_test_state *uts,
 	s = lmb_get_free_size(ram_end - 4);
 	ut_asserteq(s, 4);
 
+	lmb_uninit(mem_lst, used_lst);
+
 	return 0;
 }
@@ -650,83 +710,91 @@ LIB_TEST(lib_test_lmb_get_free_size, 0);
 
 static int lib_test_lmb_flags(struct unit_test_state *uts)
 {
+	struct lmb_region *mem, *used;
+	struct alist *mem_lst, *used_lst;
 	const phys_addr_t ram = 0x40000000;
 	const phys_size_t ram_size = 0x20000000;
 	long ret;
 
+	ut_asserteq(lmb_init(&mem_lst, &used_lst), 0);
+	mem = mem_lst->data;
+	used = used_lst->data;
+
 	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
 
 	/* reserve, same flag */
 	ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x10000,
 		   0, 0, 0, 0);
 
	/* reserve again, same flag */
 	ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x10000,
 		   0, 0, 0, 0);
 
 	/* reserve again, new flag */
 	ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NONE);
 	ut_asserteq(ret, -1);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x10000,
 		   0, 0, 0, 0);
 
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
+	ut_asserteq(lmb_is_nomap(&used[0]), 1);
 
 	/* merge after */
 	ret = lmb_reserve_flags(0x40020000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 1);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x20000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x20000,
 		   0, 0, 0, 0);
 
 	/* merge before */
 	ret = lmb_reserve_flags(0x40000000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 1);
-	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40000000, 0x30000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40000000, 0x30000,
 		   0, 0, 0, 0);
 
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
+	ut_asserteq(lmb_is_nomap(&used[0]), 1);
 
 	ret = lmb_reserve_flags(0x40030000, 0x10000, LMB_NONE);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x30000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, 0x40000000, 0x30000,
 		   0x40030000, 0x10000, 0, 0);
 
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);
+	ut_asserteq(lmb_is_nomap(&used[0]), 1);
+	ut_asserteq(lmb_is_nomap(&used[1]), 0);
 
 	/* test that old API use LMB_NONE */
 	ret = lmb_reserve(0x40040000, 0x10000);
 	ut_asserteq(ret, 1);
-	ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x30000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, 0x40000000, 0x30000,
 		   0x40030000, 0x20000, 0, 0);
 
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);
+	ut_asserteq(lmb_is_nomap(&used[0]), 1);
+	ut_asserteq(lmb_is_nomap(&used[1]), 0);
 
 	ret = lmb_reserve_flags(0x40070000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 3, 0x40000000, 0x30000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 3, 0x40000000, 0x30000,
 		   0x40030000, 0x20000, 0x40070000, 0x10000);
 
 	ret = lmb_reserve_flags(0x40050000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 0);
-	ASSERT_LMB(&lmb, ram, ram_size, 4, 0x40000000, 0x30000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 4, 0x40000000, 0x30000,
 		   0x40030000, 0x20000, 0x40050000, 0x10000);
 
 	/* merge with 2 adjacent regions */
 	ret = lmb_reserve_flags(0x40060000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 2);
-	ASSERT_LMB(&lmb, ram, ram_size, 3, 0x40000000, 0x30000,
+	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 3, 0x40000000, 0x30000,
 		   0x40030000, 0x20000, 0x40050000, 0x30000);
 
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[2]), 1);
+	ut_asserteq(lmb_is_nomap(&used[0]), 1);
+	ut_asserteq(lmb_is_nomap(&used[1]), 0);
+	ut_asserteq(lmb_is_nomap(&used[2]), 1);
+
+	lmb_uninit(mem_lst, used_lst);
 
 	return 0;
 }
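
One behavioural change worth highlighting for review: lib_test_lmb_overlapping_reserve
no longer expects an overlapping lmb_reserve() to fail. With the persistent map the
overlapping request is merged into the existing reservation and the call returns the
number of coalesced regions, so the test now asserts roughly:

	ret = lmb_reserve(0x40011000, 0x10000);	/* overlaps the 0x40010000 reservation */
	ut_asserteq(ret, 1);			/* one region coalesced, was -1 before */
	ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x11000,
		   0, 0, 0, 0);

(The snippet is lifted from the hunk above and trimmed; it summarises the new
expectation rather than adding code.)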