From patchwork Fri Oct 17 15:31:18 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Will Newton <will.newton@linaro.org>
X-Patchwork-Id: 38881
From: Will Newton <will.newton@linaro.org>
To: libc-alpha@sourceware.org
Subject: [PATCH 1/5] benchtests: Add malloc microbenchmark
Date: Fri, 17 Oct 2014 16:31:18 +0100
Message-Id: <1413559882-959-2-git-send-email-will.newton@linaro.org>
In-Reply-To: <1413559882-959-1-git-send-email-will.newton@linaro.org>
References: <1413559882-959-1-git-send-email-will.newton@linaro.org>

Add a microbenchmark for measuring malloc and free performance with
varying numbers of threads.  The benchmark allocates and frees buffers
of random sizes in a random order and measures the overall execution
time and RSS.  Variants of the benchmark are run with 1, 8, 16 and 32
threads.

The random block sizes used follow an inverse square distribution,
which is intended to mimic the behaviour of real applications, which
tend to allocate many more small blocks than large ones.

ChangeLog:

2014-06-19  Will Newton  <will.newton@linaro.org>

	* benchtests/Makefile (bench-malloc): Add malloc thread
	scalability benchmark.
	* benchtests/bench-malloc-thread.c: New file.
---
 benchtests/Makefile              |  20 ++-
 benchtests/bench-malloc-thread.c | 302 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 319 insertions(+), 3 deletions(-)
 create mode 100644 benchtests/bench-malloc-thread.c

diff --git a/benchtests/Makefile b/benchtests/Makefile
index fd3036d..78fd48f 100644
--- a/benchtests/Makefile
+++ b/benchtests/Makefile
@@ -44,8 +44,11 @@ benchset := $(string-bench-all) $(stdlib-bench)
 CFLAGS-bench-ffs.c += -fno-builtin
 CFLAGS-bench-ffsll.c += -fno-builtin
 
+bench-malloc := malloc-thread
+
 $(addprefix $(objpfx)bench-,$(bench-math)): $(libm)
 $(addprefix $(objpfx)bench-,$(bench-pthread)): $(shared-thread-library)
+$(objpfx)bench-malloc-thread: $(shared-thread-library)
 
@@ -60,6 +63,7 @@ include ../Rules
 
 binaries-bench := $(addprefix $(objpfx)bench-,$(bench))
 binaries-benchset := $(addprefix $(objpfx)bench-,$(benchset))
+binaries-bench-malloc := $(addprefix $(objpfx)bench-,$(bench-malloc))
 
 # The default duration: 10 seconds.
 ifndef BENCH_DURATION
@@ -82,7 +86,8 @@ endif
 
 # This makes sure CPPFLAGS-nonlib and CFLAGS-nonlib are passed
 # for all these modules.
-cpp-srcs-left := $(binaries-benchset:=.c) $(binaries-bench:=.c)
+cpp-srcs-left := $(binaries-benchset:=.c) $(binaries-bench:=.c) \
+		 $(binaries-bench-malloc:=.c)
 lib := nonlib
 include $(patsubst %,$(..)cppflags-iterator.mk,$(cpp-srcs-left))
 
@@ -99,9 +104,10 @@ timing-type := $(objpfx)bench-timing-type
 bench-clean:
 	rm -f $(binaries-bench) $(addsuffix .o,$(binaries-bench))
 	rm -f $(binaries-benchset) $(addsuffix .o,$(binaries-benchset))
+	rm -f $(binaries-bench-malloc) $(addsuffix .o,$(binaries-bench-malloc))
 	rm -f $(timing-type) $(addsuffix .o,$(timing-type))
 
-bench: $(timing-type) bench-set bench-func
+bench: $(timing-type) bench-set bench-func bench-malloc
 
 bench-set: $(binaries-benchset)
 	for run in $^; do \
@@ -109,6 +115,13 @@ bench-set: $(binaries-benchset)
 	  $(run-bench) > $${run}.out; \
 	done
 
+bench-malloc: $(binaries-bench-malloc)
+	run=$(objpfx)bench-malloc-thread; \
+	for thr in 1 8 16 32; do \
+	  echo "Running $${run} $${thr}"; \
+	  $(run-bench) $${thr} > $${run}-$${thr}.out; \
+	done
+
 # Build and execute the benchmark functions.  This target generates JSON
 # formatted bench.out.  Each of the programs produce independent JSON output,
 # so one could even execute them individually and process it using any JSON
@@ -134,7 +147,8 @@ bench-func: $(binaries-bench)
 	scripts/validate_benchout.py $(objpfx)bench.out \
 	scripts/benchout.schema.json
 
-$(timing-type) $(binaries-bench) $(binaries-benchset): %: %.o $(objpfx)json-lib.o \
+$(timing-type) $(binaries-bench) $(binaries-benchset) \
+  $(binaries-bench-malloc): %: %.o $(objpfx)json-lib.o \
   $(sort $(filter $(common-objpfx)lib%,$(link-libc))) \
   $(addprefix $(csu-objpfx),start.o) $(+preinit) $(+postinit)
 	$(+link)
diff --git a/benchtests/bench-malloc-thread.c b/benchtests/bench-malloc-thread.c
new file mode 100644
index 0000000..d8798cf
--- /dev/null
+++ b/benchtests/bench-malloc-thread.c
@@ -0,0 +1,302 @@
+/* Benchmark malloc and free functions.
+   Copyright (C) 2013-2014 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <errno.h>
+#include <math.h>
+#include <pthread.h>
+#include <signal.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/resource.h>
+#include <unistd.h>
+
+#include "bench-timing.h"
+#include "json-lib.h"
+
+/* Benchmark duration in seconds.  */
+#define BENCHMARK_DURATION	60
+#define RAND_SEED		88
+
+#ifndef NUM_THREADS
+# define NUM_THREADS 1
+#endif
+
+/* Maximum memory that can be allocated at any one time is:
+
+   NUM_THREADS * WORKING_SET_SIZE * MAX_ALLOCATION_SIZE
+
+   However due to the distribution of the random block sizes
+   the typical amount allocated will be much smaller.  */
+#define WORKING_SET_SIZE	1024
+
+#define MIN_ALLOCATION_SIZE	4
+#define MAX_ALLOCATION_SIZE	32768
+
+/* Get a random block size with an inverse square distribution.  */
+static unsigned int
+get_block_size (unsigned int rand_data)
+{
+  /* Inverse square.  */
+  const float exponent = -2;
+  /* Minimum value of distribution.  */
+  const float dist_min = MIN_ALLOCATION_SIZE;
+  /* Maximum value of distribution.  */
+  const float dist_max = MAX_ALLOCATION_SIZE;
+
+  float min_pow = powf (dist_min, exponent + 1);
+  float max_pow = powf (dist_max, exponent + 1);
+
+  float r = (float) rand_data / RAND_MAX;
+
+  return (unsigned int) powf ((max_pow - min_pow) * r + min_pow,
+			      1 / (exponent + 1));
+}
+
+#define NUM_BLOCK_SIZES	8000
+#define NUM_OFFSETS	((WORKING_SET_SIZE) * 4)
+
+static unsigned int random_block_sizes[NUM_BLOCK_SIZES];
+static unsigned int random_offsets[NUM_OFFSETS];
+
+static void
+init_random_values (void)
+{
+  srand (RAND_SEED);
+
+  for (size_t i = 0; i < NUM_BLOCK_SIZES; i++)
+    random_block_sizes[i] = get_block_size (rand ());
+
+  for (size_t i = 0; i < NUM_OFFSETS; i++)
+    random_offsets[i] = rand () % WORKING_SET_SIZE;
+}
+
+static unsigned int
+get_random_block_size (unsigned int *state)
+{
+  unsigned int idx = *state;
+
+  if (idx >= NUM_BLOCK_SIZES - 1)
+    idx = 0;
+  else
+    idx++;
+
+  *state = idx;
+
+  return random_block_sizes[idx];
+}
+
+static unsigned int
+get_random_offset (unsigned int *state)
+{
+  unsigned int idx = *state;
+
+  if (idx >= NUM_OFFSETS - 1)
+    idx = 0;
+  else
+    idx++;
+
+  *state = idx;
+
+  return random_offsets[idx];
+}
+
+static volatile bool timeout;
+
+static void
+alarm_handler (int signum)
+{
+  timeout = true;
+}
+
+/* Allocate and free blocks in a random order.  */
+static size_t
+malloc_benchmark_loop (void **ptr_arr)
+{
+  unsigned int offset_state = 0, block_state = 0;
+  size_t iters = 0;
+
+  while (!timeout)
+    {
+      unsigned int next_idx = get_random_offset (&offset_state);
+      unsigned int next_block = get_random_block_size (&block_state);
+
+      free (ptr_arr[next_idx]);
+
+      ptr_arr[next_idx] = malloc (next_block);
+
+      iters++;
+    }
+
+  return iters;
+}
+
+struct thread_args
+{
+  size_t iters;
+  void **working_set;
+  timing_t elapsed;
+};
+
+static void *
+benchmark_thread (void *arg)
+{
+  struct thread_args *args = (struct thread_args *) arg;
+  size_t iters;
+  void *thread_set = args->working_set;
+  timing_t start, stop;
+
+  TIMING_NOW (start);
+  iters = malloc_benchmark_loop (thread_set);
+  TIMING_NOW (stop);
+
+  TIMING_DIFF (args->elapsed, start, stop);
+  args->iters = iters;
+
+  return NULL;
+}
+
+static timing_t
+do_benchmark (size_t num_threads, size_t *iters)
+{
+  timing_t elapsed = 0;
+
+  if (num_threads == 1)
+    {
+      timing_t start, stop;
+      void *working_set[WORKING_SET_SIZE];
+
+      memset (working_set, 0, sizeof (working_set));
+
+      TIMING_NOW (start);
+      *iters = malloc_benchmark_loop (working_set);
+      TIMING_NOW (stop);
+
+      TIMING_DIFF (elapsed, start, stop);
+    }
+  else
+    {
+      struct thread_args args[num_threads];
+      void *working_set[num_threads][WORKING_SET_SIZE];
+      pthread_t threads[num_threads];
+
+      memset (working_set, 0, sizeof (working_set));
+
+      *iters = 0;
+
+      for (size_t i = 0; i < num_threads; i++)
+	{
+	  args[i].working_set = working_set[i];
+	  pthread_create (&threads[i], NULL, benchmark_thread, &args[i]);
+	}
+
+      for (size_t i = 0; i < num_threads; i++)
+	{
+	  pthread_join (threads[i], NULL);
+	  TIMING_ACCUM (elapsed, args[i].elapsed);
+	  *iters += args[i].iters;
+	}
+    }
+  return elapsed;
+}
+
+static void
+usage (const char *name)
+{
+  fprintf (stderr, "%s: <num_threads>\n", name);
+  exit (1);
+}
+
+int
+main (int argc, char **argv)
+{
+  timing_t cur;
+  size_t iters = 0, num_threads = 1;
+  unsigned long res;
+  json_ctx_t json_ctx;
+  double d_total_s, d_total_i;
+  struct sigaction act;
+
+  if (argc == 1)
+    num_threads = 1;
+  else if (argc == 2)
+    {
+      long ret;
+
+      errno = 0;
+      ret = strtol (argv[1], NULL, 10);
+
+      if (errno || ret == 0)
+	usage (argv[0]);
+
+      num_threads = ret;
+    }
+  else
+    usage (argv[0]);
+
+  init_random_values ();
+
+  json_init (&json_ctx, 0, stdout);
+
+  json_document_begin (&json_ctx);
+
+  json_attr_string (&json_ctx, "timing_type", TIMING_TYPE);
+
+  json_attr_object_begin (&json_ctx, "functions");
+
+  json_attr_object_begin (&json_ctx, "malloc");
+
+  json_attr_object_begin (&json_ctx, "");
+
+  TIMING_INIT (res);
+
+  (void) res;
+
+  memset (&act, 0, sizeof (act));
+  act.sa_handler = &alarm_handler;
+
+  sigaction (SIGALRM, &act, NULL);
+
+  alarm (BENCHMARK_DURATION);
+
+  cur = do_benchmark (num_threads, &iters);
+
+  struct rusage usage;
+  getrusage (RUSAGE_SELF, &usage);
+
+  d_total_s = cur;
+  d_total_i = iters;
+
+  json_attr_double (&json_ctx, "duration", d_total_s);
+  json_attr_double (&json_ctx, "iterations", d_total_i);
+  json_attr_double (&json_ctx, "time_per_iteration", d_total_s / d_total_i);
+  json_attr_double (&json_ctx, "max_rss", usage.ru_maxrss);
+
+  json_attr_double (&json_ctx, "threads", num_threads);
+  json_attr_double (&json_ctx, "min_size", MIN_ALLOCATION_SIZE);
+  json_attr_double (&json_ctx, "max_size", MAX_ALLOCATION_SIZE);
+  json_attr_double (&json_ctx, "random_seed", RAND_SEED);
+
+  json_attr_object_end (&json_ctx);
+
+  json_attr_object_end (&json_ctx);
+
+  json_attr_object_end (&json_ctx);
+
+  json_document_end (&json_ctx);
+
+  return 0;
+}
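
For reviewers who want to sanity-check the block-size distribution outside
the benchmark, here is a small standalone sketch (not part of the patch).
It duplicates the inverse transform sampling formula from get_block_size
above, seeds rand() with 88 to mirror RAND_SEED, and prints a coarse
histogram of one million samples; the function name sample_block_size and
the power-of-two bucket boundaries are just illustrative choices, not
anything the patch itself defines.  Build with something like
"gcc -O2 histo.c -lm".

/* Standalone sketch: sample the same inverse square distribution used
   by get_block_size and print a coarse histogram, so the bias towards
   small blocks is visible.  */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define MIN_ALLOCATION_SIZE 4
#define MAX_ALLOCATION_SIZE 32768

static unsigned int
sample_block_size (unsigned int rand_data)
{
  /* Inverse transform sampling of a power law with exponent -2,
     the same arithmetic as get_block_size in the patch.  */
  const float exponent = -2;
  float min_pow = powf (MIN_ALLOCATION_SIZE, exponent + 1);
  float max_pow = powf (MAX_ALLOCATION_SIZE, exponent + 1);
  float r = (float) rand_data / RAND_MAX;

  return (unsigned int) powf ((max_pow - min_pow) * r + min_pow,
			      1 / (exponent + 1));
}

int
main (void)
{
  /* Power-of-two buckets: <= 4, <= 8, ... <= 32768 bytes.  */
  unsigned int buckets[14] = { 0 };

  srand (88);
  for (int i = 0; i < 1000000; i++)
    {
      unsigned int size = sample_block_size (rand ());
      int b = 0;

      while (b < 13 && (1u << (b + 2)) < size)
	b++;
      buckets[b]++;
    }

  for (int b = 0; b < 14; b++)
    printf ("<= %5u bytes: %u samples\n", 1u << (b + 2), buckets[b]);

  return 0;
}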