From patchwork Thu May 21 16:43:13 2015
X-Patchwork-Submitter: Srinivas Kandagatla
X-Patchwork-Id: 48866
From: Srinivas Kandagatla
To: linux-arm-kernel@lists.infradead.org
Cc: Maxime Ripard, Rob Herring, Kumar Gala, Mark Brown, s.hauer@pengutronix.de,
    Greg Kroah-Hartman, linux-api@vger.kernel.org, linux-kernel@vger.kernel.org,
    devicetree@vger.kernel.org, linux-arm-msm@vger.kernel.org, arnd@arndb.de,
    sboyd@codeaurora.org, Srinivas Kandagatla, pantelis.antoniou@konsulko.com,
    mporter@konsulko.com
Subject: [PATCH v5 04/11] nvmem: Add a simple NVMEM framework for consumers
Date: Thu, 21 May 2015 17:43:13 +0100
Message-Id: <1432226593-8817-1-git-send-email-srinivas.kandagatla@linaro.org>
In-Reply-To: <1432226535-8640-1-git-send-email-srinivas.kandagatla@linaro.org>
References: <1432226535-8640-1-git-send-email-srinivas.kandagatla@linaro.org>

This patch adds just the consumer part of the framework, to enable easy
review.

Up until now, nvmem drivers were stored in drivers/misc, where they all
had to duplicate pretty much the same code to register a sysfs file,
allow in-kernel users to access the content of the devices they were
driving, and so on.

This was also a problem as far as other in-kernel users were involved,
since the solutions used varied from one driver to another, which left
a rather big abstraction leak.

The introduction of this framework aims to solve this. It also introduces
a DT representation for consumer devices to get the data they require
(MAC addresses, SoC/revision IDs, part numbers, and so on) from the
nvmems.

Having a regmap interface to this framework gives a much better
abstraction for nvmems on different buses.
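As an illustration only (the node names, addresses and cell layout below
are made up and are not required by this patch), a consumer could
reference a cell of a provider roughly like this, using the "nvmem-cell",
"nvmem-cell-names" and "reg" properties that of_nvmem_cell_get() parses;
"bit-offset" and "nbits" are optional:

	eeprom: eeprom@50 {
		compatible = "atmel,24c32";
		reg = <0x50>;

		/* cell node: reg = <offset length-in-bytes> */
		mac_address: mac-address@10 {
			reg = <0x10 0x6>;
		};
	};

	ethernet@f0b60000 {
		/* hypothetical consumer of the cell above */
		nvmem-cell = <&mac_address>;
		nvmem-cell-names = "mac-address";
	};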
Signed-off-by: Maxime Ripard
[Maxime Ripard: initial version of the framework]
Signed-off-by: Srinivas Kandagatla
---
 drivers/nvmem/core.c           | 346 ++++++++++++++++++++++++++++++++++++++++
 include/linux/nvmem-consumer.h |  49 ++++++
 2 files changed, 395 insertions(+)
 create mode 100644 include/linux/nvmem-consumer.h

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 6c2f0b1..8a4b358 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -16,6 +16,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -379,6 +380,351 @@ int nvmem_unregister(struct nvmem_device *nvmem)
 }
 EXPORT_SYMBOL_GPL(nvmem_unregister);
 
+static struct nvmem_device *__nvmem_device_get(struct device_node *np,
+					       struct nvmem_cell **cellp,
+					       const char *cell_id)
+{
+	struct nvmem_device *nvmem = NULL;
+
+	mutex_lock(&nvmem_mutex);
+
+	if (np) {
+		nvmem = of_nvmem_find(np);
+		if (!nvmem) {
+			mutex_unlock(&nvmem_mutex);
+			return ERR_PTR(-EPROBE_DEFER);
+		}
+	} else {
+		struct nvmem_cell *cell = nvmem_find_cell(cell_id);
+
+		if (cell) {
+			nvmem = cell->nvmem;
+			*cellp = cell;
+		}
+
+		if (!nvmem) {
+			mutex_unlock(&nvmem_mutex);
+			return ERR_PTR(-ENOENT);
+		}
+	}
+
+	nvmem->users++;
+	mutex_unlock(&nvmem_mutex);
+
+	if (!try_module_get(nvmem->owner)) {
+		dev_err(&nvmem->dev,
+			"could not increase module refcount for cell %s\n",
+			nvmem->name);
+
+		mutex_lock(&nvmem_mutex);
+		nvmem->users--;
+		mutex_unlock(&nvmem_mutex);
+
+		return ERR_PTR(-EINVAL);
+	}
+
+	return nvmem;
+}
+
+static int __nvmem_device_put(struct nvmem_device *nvmem)
+{
+	module_put(nvmem->owner);
+	mutex_lock(&nvmem_mutex);
+	nvmem->users--;
+	mutex_unlock(&nvmem_mutex);
+
+	return 0;
+}
+
+static struct nvmem_cell *nvmem_cell_get_from_list(const char *cell_id)
+{
+	struct nvmem_cell *cell = NULL;
+	struct nvmem_device *nvmem;
+
+	nvmem = __nvmem_device_get(NULL, &cell, cell_id);
+	if (IS_ERR(nvmem))
+		return (struct nvmem_cell *)nvmem;
+
+	return cell;
+}
+
+static struct nvmem_cell *of_nvmem_cell_get(struct device_node *np,
+					    const char *name)
+{
+	struct device_node *cell_np, *nvmem_np;
+	struct nvmem_cell *cell;
+	struct nvmem_device *nvmem;
+	const __be32 *addr;
+	int rval, len, index;
+
+	index = of_property_match_string(np, "nvmem-cell-names", name);
+
+	cell_np = of_parse_phandle(np, "nvmem-cell", index);
+	if (!cell_np)
+		return ERR_PTR(-EINVAL);
+
+	nvmem_np = of_get_next_parent(cell_np);
+	if (!nvmem_np)
+		return ERR_PTR(-EINVAL);
+
+	nvmem = __nvmem_device_get(nvmem_np, NULL, NULL);
+	if (IS_ERR(nvmem))
+		return (struct nvmem_cell *)nvmem;
+
+	addr = of_get_property(cell_np, "reg", &len);
+	if (!addr || (len < 2 * sizeof(int))) {
+		dev_err(&nvmem->dev, "nvmem: invalid reg on %s\n",
+			cell_np->full_name);
+		rval = -EINVAL;
+		goto err_mem;
+	}
+
+	cell = kzalloc(sizeof(*cell), GFP_KERNEL);
+	if (!cell) {
+		rval = -ENOMEM;
+		goto err_mem;
+	}
+
+	cell->nvmem = nvmem;
+	cell->offset = be32_to_cpup(addr++);
+	cell->bytes = be32_to_cpup(addr);
+	cell->name = cell_np->name;
+
+	of_property_read_u32(cell_np, "bit-offset", &cell->bit_offset);
+	of_property_read_u32(cell_np, "nbits", &cell->nbits);
+
+	if (cell->nbits)
+		cell->bytes = DIV_ROUND_UP(cell->nbits + cell->bit_offset,
+					   BITS_PER_BYTE);
+
+	if (!IS_ALIGNED(cell->offset, nvmem->stride)) {
+		dev_err(&nvmem->dev,
+			"cell %s unaligned to nvmem stride %d\n",
+			cell->name, nvmem->stride);
+		rval = -EINVAL;
+		goto err_sanity;
+	}
+
+	nvmem_cell_add(cell);
+
+	return cell;
+
+err_sanity:
+	kfree(cell);
+
+err_mem:
+	__nvmem_device_put(nvmem);
+
+	return ERR_PTR(rval);
+}
+
+/**
+ * nvmem_cell_get(): Get the nvmem cell of a device from a given cell id
+ *
+ * @dev: Device that requests the nvmem cell.
+ * @cell_id: Name of the nvmem cell, as listed in the device's
+ *	     nvmem-cell-names property.
+ *
+ * The return value will be an ERR_PTR() on error or a valid pointer
+ * to a struct nvmem_cell. The nvmem_cell should be released with
+ * nvmem_cell_put().
+ */
+struct nvmem_cell *nvmem_cell_get(struct device *dev, const char *cell_id)
+{
+	struct nvmem_cell *cell;
+
+	if (dev->of_node) { /* try dt first */
+		cell = of_nvmem_cell_get(dev->of_node, cell_id);
+		if (!IS_ERR(cell) || PTR_ERR(cell) == -EPROBE_DEFER)
+			return cell;
+	}
+
+	return nvmem_cell_get_from_list(cell_id);
+}
+EXPORT_SYMBOL_GPL(nvmem_cell_get);
+
+/**
+ * nvmem_cell_put(): Release a previously allocated nvmem cell.
+ *
+ * @cell: Previously allocated nvmem cell by nvmem_cell_get().
+ */
+void nvmem_cell_put(struct nvmem_cell *cell)
+{
+	struct nvmem_device *nvmem = cell->nvmem;
+
+	__nvmem_device_put(nvmem);
+	nvmem_cell_drop(cell);
+}
+EXPORT_SYMBOL_GPL(nvmem_cell_put);
+
+static inline void nvmem_shift_read_buffer_in_place(struct nvmem_cell *cell,
+						    void *buf)
+{
+	u8 *p, *b;
+	int i, bit_offset = cell->bit_offset;
+
+	p = b = buf;
+	if (bit_offset) {
+		/* First shift */
+		*b++ >>= bit_offset;
+
+		/* setup rest of the bytes if any */
+		for (i = 1; i < cell->bytes; i++) {
+			/* Get bits from next byte and shift them towards msb */
+			*p |= *b << (BITS_PER_BYTE - bit_offset);
+
+			p = b;
+			*b++ >>= bit_offset;
+		}
+
+		/* result fits in less bytes */
+		if (cell->bytes != DIV_ROUND_UP(cell->nbits, BITS_PER_BYTE))
+			*p-- = 0;
+	}
+	/* clear msb bits if any leftover in the last byte */
+	*p &= GENMASK((cell->nbits % BITS_PER_BYTE) - 1, 0);
+}
+
+static int __nvmem_cell_read(struct nvmem_device *nvmem,
+			     struct nvmem_cell *cell,
+			     void *buf, ssize_t *len)
+{
+	int rc;
+
+	rc = regmap_raw_read(nvmem->regmap, cell->offset, buf, cell->bytes);
+
+	if (IS_ERR_VALUE(rc))
+		return rc;
+
+	/* shift bits in-place */
+	if (cell->bit_offset || cell->nbits)
+		nvmem_shift_read_buffer_in_place(cell, buf);
+
+	*len = cell->bytes;
+
+	return *len;
+}
+
+/**
+ * nvmem_cell_read(): Read a given nvmem cell
+ *
+ * @cell: nvmem cell to be read.
+ * @len: pointer to length of cell which will be populated on successful read.
+ *
+ * The return value will be an ERR_PTR() on error or a valid pointer
+ * to a char * buffer. The buffer should be freed by the consumer with
+ * kfree().
+ */
+void *nvmem_cell_read(struct nvmem_cell *cell, ssize_t *len)
+{
+	struct nvmem_device *nvmem = cell->nvmem;
+	u8 *buf;
+	int rc;
+
+	if (!nvmem || !nvmem->regmap)
+		return ERR_PTR(-EINVAL);
+
+	buf = kzalloc(cell->bytes, GFP_KERNEL);
+	if (!buf)
+		return ERR_PTR(-ENOMEM);
+
+	rc = __nvmem_cell_read(nvmem, cell, buf, len);
+	if (IS_ERR_VALUE(rc)) {
+		kfree(buf);
+		return ERR_PTR(rc);
+	}
+
+	return buf;
+}
+EXPORT_SYMBOL_GPL(nvmem_cell_read);
+
+static inline void *nvmem_cell_prepare_write_buffer(struct nvmem_cell *cell,
+						    u8 *_buf, int len)
+{
+	struct nvmem_device *nvmem = cell->nvmem;
+	int i, rc, nbits, bit_offset = cell->bit_offset;
+	u8 v, *p, *buf, *b, pbyte, pbits;
+
+	nbits = cell->nbits;
+	buf = kzalloc(cell->bytes, GFP_KERNEL);
+	if (!buf)
+		return ERR_PTR(-ENOMEM);
+
+	memcpy(buf, _buf, len);
+	p = b = buf;
+
+	if (bit_offset) {
+		pbyte = *b;
+		*b <<= bit_offset;
+
+		/* setup the first byte with lsb bits from nvmem */
+		rc = regmap_raw_read(nvmem->regmap, cell->offset, &v, 1);
+		*b++ |= GENMASK(bit_offset - 1, 0) & v;
+
+		/* setup rest of the bytes if any */
+		for (i = 1; i < cell->bytes; i++) {
+			/* Get last byte bits and shift them towards lsb */
+			pbits = pbyte >> (BITS_PER_BYTE - 1 - bit_offset);
+			pbyte = *b;
+			p = b;
+			*b <<= bit_offset;
+			*b++ |= pbits;
+		}
+	}
+
+	/* if the cell does not end on a byte boundary */
+	if ((nbits + bit_offset) % BITS_PER_BYTE) {
+		/* setup the last byte with msb bits from nvmem */
+		rc = regmap_raw_read(nvmem->regmap,
+				     cell->offset + cell->bytes - 1, &v, 1);
+		*p |= GENMASK(7, (nbits + bit_offset) % BITS_PER_BYTE) & v;
+	}
+
+	return buf;
+}
+
+/**
+ * nvmem_cell_write(): Write to a given nvmem cell
+ *
+ * @cell: nvmem cell to be written.
+ * @buf: Buffer to be written.
+ * @len: length of buffer to be written to nvmem cell.
+ *
+ * The return value will be the length of bytes written or a negative
+ * error code on failure.
+ */
+int nvmem_cell_write(struct nvmem_cell *cell, void *buf, ssize_t len)
+{
+	struct nvmem_device *nvmem = cell->nvmem;
+	int rc;
+	void *wbuf = buf;
+
+	if (!nvmem || !nvmem->regmap || nvmem->read_only ||
+	    (cell->bit_offset == 0 && len != cell->bytes))
+		return -EINVAL;
+
+	if (cell->bit_offset || cell->nbits) {
+		wbuf = nvmem_cell_prepare_write_buffer(cell, buf, len);
+		if (IS_ERR(wbuf))
+			return PTR_ERR(wbuf);
+	}
+
+	rc = regmap_raw_write(nvmem->regmap, cell->offset, wbuf, cell->bytes);
+
+	/* free the tmp buffer */
+	if (cell->bit_offset || cell->nbits)
+		kfree(wbuf);
+
+	if (IS_ERR_VALUE(rc))
+		return rc;
+
+	return len;
+}
+EXPORT_SYMBOL_GPL(nvmem_cell_write);
+
 static int nvmem_init(void)
 {
 	return class_register(&nvmem_class);
diff --git a/include/linux/nvmem-consumer.h b/include/linux/nvmem-consumer.h
new file mode 100644
index 0000000..c3fa8c7
--- /dev/null
+++ b/include/linux/nvmem-consumer.h
@@ -0,0 +1,49 @@
+/*
+ * nvmem framework consumer.
+ *
+ * Copyright (C) 2015 Srinivas Kandagatla
+ * Copyright (C) 2013 Maxime Ripard
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#ifndef _LINUX_NVMEM_CONSUMER_H
+#define _LINUX_NVMEM_CONSUMER_H
+
+/* consumer cookie */
+struct nvmem_cell;
+
+#if IS_ENABLED(CONFIG_NVMEM)
+
+/* Cell based interface */
+struct nvmem_cell *nvmem_cell_get(struct device *dev, const char *name);
+void nvmem_cell_put(struct nvmem_cell *cell);
+void *nvmem_cell_read(struct nvmem_cell *cell, ssize_t *len);
+int nvmem_cell_write(struct nvmem_cell *cell, void *buf, ssize_t len);
+
+#else
+
+static inline struct nvmem_cell *nvmem_cell_get(struct device *dev,
+						const char *name)
+{
+	return ERR_PTR(-ENOSYS);
+}
+
+static inline void nvmem_cell_put(struct nvmem_cell *cell)
+{
+}
+
+static inline void *nvmem_cell_read(struct nvmem_cell *cell, ssize_t *len)
+{
+	return ERR_PTR(-ENOSYS);
+}
+
+static inline int nvmem_cell_write(struct nvmem_cell *cell,
+				   void *buf, ssize_t len)
+{
+	return -ENOSYS;
+}
+
+#endif /* CONFIG_NVMEM */
+
+#endif /* ifndef _LINUX_NVMEM_CONSUMER_H */
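
For reviewers' convenience, a minimal consumer sketch (hypothetical driver,
cell name and error handling; not part of this patch) showing how the API
above is meant to be used:

	#include <linux/device.h>
	#include <linux/err.h>
	#include <linux/nvmem-consumer.h>
	#include <linux/slab.h>
	#include <linux/string.h>

	static int foo_read_mac(struct device *dev, u8 mac[6])
	{
		struct nvmem_cell *cell;
		ssize_t len;
		void *data;
		int ret = 0;

		/* Resolve the cell via DT ("nvmem-cell-names") or the global list. */
		cell = nvmem_cell_get(dev, "mac-address");
		if (IS_ERR(cell))
			return PTR_ERR(cell);	/* may be -EPROBE_DEFER */

		/* Read the raw cell content; the framework allocates the buffer. */
		data = nvmem_cell_read(cell, &len);
		nvmem_cell_put(cell);
		if (IS_ERR(data))
			return PTR_ERR(data);

		if (len == 6)
			memcpy(mac, data, 6);
		else
			ret = -EINVAL;

		kfree(data);
		return ret;
	}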