From patchwork Wed Aug 18 14:10:09 2021
X-Patchwork-Submitter: Sidraya Jayagond
X-Patchwork-Id: 499243
From: sidraya.bj@pathpartnertech.com
To: gregkh@linuxfoundation.org, linux-staging@lists.linux.dev,
 linux-kernel@vger.kernel.org
Cc: prashanth.ka@pathpartnertech.com, praneeth@ti.com, mchehab@kernel.org,
 linux-media@vger.kernel.org, praveen.ap@pathpartnertech.com, Sidraya
Subject: [PATCH 02/30] v4l: vxd-dec: Create mmu programming helper library
Date: Wed, 18 Aug 2021 19:40:09 +0530
Message-Id: <20210818141037.19990-3-sidraya.bj@pathpartnertech.com>
In-Reply-To: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
References: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>

From: Sidraya

The IMG D5520 has an MMU which needs to be programmed with all memory
which it needs access to. This includes input buffers, output buffers
and parameter buffers for each decode instance, as well as common
buffers for firmware, etc.

Functions are provided for creating MMU directories (each stream will
have its own MMU context), retrieving the directory page, and
mapping/unmapping a buffer into the MMU for a specific MMU context.
Helpers are also provided for querying the capabilities of the MMU.
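As a minimal usage sketch (example_map_stream_buffer is a hypothetical
caller shown only for illustration; the mmu_* calls and types are the
ones introduced by this patch), a stream would typically do:

	/* Hypothetical caller showing the intended call flow. */
	static int example_map_stream_buffer(const struct mmu_info *ops,
					     void *sgl,
					     const struct mmu_heap_alloc *dev_va)
	{
		struct mmu_directory *dir;
		struct mmu_map *map;

		/* One MMU context (directory) per stream. */
		dir = mmu_create_directory(ops);
		if (IS_ERR_VALUE((unsigned long)dir))
			return (long)dir;

		/*
		 * mmu_directory_get_page(dir)->phys_addr is what gets
		 * programmed into the device as the directory base.
		 */

		/* 0x4 = Read Only, per the mmu_directory_map_sg() docs. */
		map = mmu_directory_map_sg(dir, sgl, dev_va, 0x4);
		if (IS_ERR_VALUE((unsigned long)map)) {
			mmu_destroy_directory(dir);
			return (long)map;
		}

		/* ... decode using the mapping ... */

		mmu_directory_unmap(map);
		return mmu_destroy_directory(dir);
	}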
Signed-off-by: Buddy Liong
Signed-off-by: Angela Stegmaier
Signed-off-by: Sidraya
---
 MAINTAINERS                               |   2 +
 drivers/staging/media/vxd/common/imgmmu.c | 782 ++++++++++++++++++++++
 drivers/staging/media/vxd/common/imgmmu.h | 180 +++++
 3 files changed, 964 insertions(+)
 create mode 100644 drivers/staging/media/vxd/common/imgmmu.c
 create mode 100644 drivers/staging/media/vxd/common/imgmmu.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 163b3176ccf9..2e921650a14c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19537,6 +19537,8 @@ M: Sidraya Jayagond
 L: linux-media@vger.kernel.org
 S: Maintained
 F: Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
+F: drivers/staging/media/vxd/common/imgmmu.c
+F: drivers/staging/media/vxd/common/imgmmu.h

 VIDEO I2C POLLING DRIVER
 M: Matt Ranostay

diff --git a/drivers/staging/media/vxd/common/imgmmu.c b/drivers/staging/media/vxd/common/imgmmu.c
new file mode 100644
index 000000000000..ce2f41f72485
--- /dev/null
+++ b/drivers/staging/media/vxd/common/imgmmu.c
@@ -0,0 +1,782 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * IMG DEC MMU function implementations
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Angela Stegmaier
+ *
+ * Re-written for upstreaming
+ *	Sidraya Jayagond
+ */
+
+#include
+#include
+#include
+#include
+#include "img_mem_man.h"
+#include "imgmmu.h"
+
+/**
+ * struct mmu_directory - the MMU directory information
+ * @dir_page: the physical page used to hold the directory entries
+ * @dir_page_table: all the page table structures in a static array of pointers
+ * @mmu_info_cfg: functions to use to manage page allocation, freeing and
+ *		  writing
+ * @num_mapping: number of mappings using this directory
+ */
+struct mmu_directory {
+	struct mmu_page_cfg *dir_page;
+	struct mmu_page_cfg_table **dir_page_table;
+	struct mmu_info mmu_info_cfg;
+	unsigned int num_mapping;
+};
+
+/*
+ * struct mmu_map - the MMU mapping information
+ * @mmu_dir: pointer to the mmu_directory which this mmu_map belongs to
+ * @dev_virt_addr: device virtual address root associated with this mapping
+ * @used_flag: flag used when allocating
+ * @n_entries: number of entries mapped
+ */
+struct mmu_map {
+	struct mmu_directory *mmu_dir;
+	struct mmu_heap_alloc dev_virt_addr;
+	unsigned int used_flag;
+	unsigned int n_entries;
+};
+
+/*
+ * struct mmu_page_cfg_table - the MMU page table information.
+ * One page table of the directory.
+ * @mmu_dir: pointer to the mmu_directory which this mmu_page_cfg_table
+ *	     belongs to
+ * @page: page used to store this mapping in the MMU
+ * @valid_entries: number of valid entries in this page
+ */
+struct mmu_page_cfg_table {
+	struct mmu_directory *mmu_dir;
+	struct mmu_page_cfg *page;
+	unsigned int valid_entries;
+};
+
+/*
+ * mmu_pgt_destroy() - Destruction of a page table (does not follow the
+ *		       child pointer)
+ * @pgt: pointer to the MMU page table information
+ *
+ * Warning: Does not verify if pages are still valid or not
+ */
+static void mmu_pgt_destroy(struct mmu_page_cfg_table *pgt)
+{
+	if (!pgt->mmu_dir ||
+	    !pgt->mmu_dir->mmu_info_cfg.pfn_page_free ||
+	    !pgt->page) {
+		return;
+	}
+
+	pr_debug("%s:%d Destroy page table (phys addr %llu)\n",
+		 __func__, __LINE__, pgt->page->phys_addr);
+
+	pgt->mmu_dir->mmu_info_cfg.pfn_page_free(pgt->page);
+	pgt->page = NULL;
+
+	kfree(pgt);
+}
+
+/*
+ * mmu_dir_entry() - Extract the directory index from a virtual address
+ * @vaddr: virtual address
+ */
+static inline unsigned int mmu_dir_entry(unsigned long vaddr)
+{
+	return (unsigned int)((vaddr & VIRT_DIR_IDX_MASK) >> MMU_DIR_SHIFT);
+}
+
+/*
+ * mmu_pg_entry() - Extract the page table index from a virtual address
+ * @vaddr: virtual address
+ */
+static inline unsigned int mmu_pg_entry(unsigned long vaddr)
+{
+	return (unsigned int)((vaddr & VIRT_PAGE_TBL_MASK) >> MMU_PAGE_SHIFT);
+}
+
+/*
+ * mmu_pg_wr() - Default function used when a mmu_info structure has an empty
+ *		 pfn_page_write pointer
+ * @mmu_page: pointer to the mmu_page to update
+ * @offset: offset into the directory
+ * @pa_to_write: physical address value to add to the entry
+ * @mmu_flag: mmu flag(s) to set
+ */
+static void mmu_pg_wr(struct mmu_page_cfg *mmu_page, unsigned int offset,
+		      unsigned long long pa_to_write, unsigned int mmu_flag)
+{
+	unsigned int *dir_mem = NULL;
+	unsigned long long cur_pa = pa_to_write;
+
+	if (!mmu_page)
+		return;
+
+	dir_mem = (unsigned int *)mmu_page->cpu_virt_addr;
+	/*
+	 * assumes that the MMU HW has the extra-bits enabled (this default
+	 * function has no way of knowing)
+	 */
+	if ((MMU_PHYS_SIZE - MMU_VIRT_SIZE) > 0)
+		cur_pa >>= (MMU_PHYS_SIZE - MMU_VIRT_SIZE);
+	/*
+	 * The MMU_PAGE_SHIFT bottom bits can be overwritten because of the
+	 * page allocation granularity.
+	 * MMU_PAGE_SHIFT-(MMU_PHYS_SIZE-MMU_VIRT_SIZE) bits are used for
+	 * flags so it's ok
+	 */
+	dir_mem[offset] = (unsigned int)cur_pa | (mmu_flag);
+}
+
+/*
+ * mmu_pgt_create() - Create a page table
+ * @mmu_dir: pointer to the mmu_directory in which to create the new page table
+ *	     structure
+ *
+ * Return: A pointer to the new page table structure in case of success,
+ *	   an error value cast to (void *) in case of error
+ */
+static struct mmu_page_cfg_table *mmu_pgt_create(struct mmu_directory *mmu_dir)
+{
+	struct mmu_page_cfg_table *neo = NULL;
+	unsigned int i;
+
+	if (!mmu_dir || !mmu_dir->mmu_info_cfg.pfn_page_alloc ||
+	    !mmu_dir->mmu_info_cfg.pfn_page_write)
+		return (void *)(-EINVAL);
+
+	neo = kmalloc(sizeof(*neo), GFP_KERNEL);
+	if (!neo)
+		return (void *)(-ENOMEM);
+
+	neo->mmu_dir = mmu_dir;
+
+	neo->page =
+		mmu_dir->mmu_info_cfg.pfn_page_alloc(mmu_dir->mmu_info_cfg.alloc_ctx);
+	if (!neo->page) {
+		pr_err("%s:%d failed to allocate Page Table physical page\n",
+		       __func__, __LINE__);
+		kfree(neo);
+		return (void *)(-ENOMEM);
+	}
+	pr_debug("%s:%d Create page table (phys addr 0x%llx CPU Virt 0x%lx)\n",
+		 __func__, __LINE__, neo->page->phys_addr,
+		 neo->page->cpu_virt_addr);
+
+	/* invalidate all pages */
+	for (i = 0; i < MMU_N_PAGE; i++) {
+		mmu_dir->mmu_info_cfg.pfn_page_write(neo->page, i, 0,
+						     MMU_FLAG_INVALID);
+	}
+
+	/*
+	 * When non-UMA need to update the device memory after setting
+	 * it to 0
+	 */
+	if (mmu_dir->mmu_info_cfg.pfn_page_update)
+		mmu_dir->mmu_info_cfg.pfn_page_update(neo->page);
+
+	return neo;
+}
+
+/*
+ * mmu_create_directory - Create a directory entry based on a given directory
+ *			  configuration
+ * @mmu_info_ops: contains the functions to use to manage page table memory.
+ *		  Is copied and not modified.
+ * + * @warning Obviously creation of the directory allocates memory - do not call + * while interrupts are disabled + * + * @return The opaque handle to the mmu_directory object and result to 0 + * @return (void *) in case of an error and result has the value: + * @li -EINVAL if mmu_info configuration is NULL or does not + * contain function pointers + * @li -ENOMEM if an internal allocation failed + * @li -ENOMEM if the given mmu_pfn_page_alloc returned NULL + */ +struct mmu_directory *mmu_create_directory(const struct mmu_info *mmu_info_ops) +{ + struct mmu_directory *neo = NULL; + unsigned int i; + + /* + * invalid information in the directory config: + * - invalid page allocator and dealloc (page write can be NULL) + * - invalid virtual address representation + * - invalid page size + * - invalid MMU size + */ + if (!mmu_info_ops || !mmu_info_ops->pfn_page_alloc || !mmu_info_ops->pfn_page_free) { + pr_err("%s:%d invalid MMU configuration\n", __func__, __LINE__); + return (void *)(-EINVAL); + } + + neo = kzalloc(sizeof(*neo), GFP_KERNEL); + if (!neo) + return (void *)(-ENOMEM); + + neo->dir_page_table = kcalloc(MMU_N_TABLE, sizeof(struct mmu_page_cfg_table *), + GFP_KERNEL); + if (!neo->dir_page_table) { + kfree(neo); + return (void *)(-ENOMEM); + } + + memcpy(&neo->mmu_info_cfg, mmu_info_ops, sizeof(struct mmu_info)); + if (!mmu_info_ops->pfn_page_write) { + pr_debug("%s:%d using default MMU write\n", __func__, __LINE__); + /* use internal function */ + neo->mmu_info_cfg.pfn_page_write = &mmu_pg_wr; + } + + neo->dir_page = mmu_info_ops->pfn_page_alloc(mmu_info_ops->alloc_ctx); + if (!neo->dir_page) { + kfree(neo->dir_page_table); + kfree(neo); + return (void *)(-ENOMEM); + } + + pr_debug("%s:%d (phys page 0x%llx; CPU virt 0x%lx)\n", __func__, + __LINE__, neo->dir_page->phys_addr, + neo->dir_page->cpu_virt_addr); + /* now we have a valid mmu_directory structure */ + + /* invalidate all entries */ + for (i = 0; i < MMU_N_TABLE; i++) { + neo->mmu_info_cfg.pfn_page_write(neo->dir_page, i, 0, + MMU_FLAG_INVALID); + } + + /* when non-UMA need to update the device memory */ + if (neo->mmu_info_cfg.pfn_page_update) + neo->mmu_info_cfg.pfn_page_update(neo->dir_page); + + return neo; +} + +/* + * mmu_destroy_directory - Destroy the mmu_directory - assumes that the HW is + * not going to access the memory any-more + * @mmu_dir: pointer to the mmu directory to destroy + * + * Does not invalidate any memory because it assumes that everything is not + * used any-more + */ +int mmu_destroy_directory(struct mmu_directory *mmu_dir) +{ + unsigned int i; + + if (!mmu_dir) { + /* could be an assert */ + pr_err("%s:%d mmu_dir is NULL\n", __func__, __LINE__); + return -EINVAL; + } + + if (mmu_dir->num_mapping > 0) + /* mappings should have been destroyed! 
 */
+		pr_err("%s:%d directory still has %u mappings attached to it\n",
+		       __func__, __LINE__, mmu_dir->num_mapping);
+	/*
+	 * not exiting because clearing the page table map is more
+	 * important than losing a few structures
+	 */
+
+	if (!mmu_dir->mmu_info_cfg.pfn_page_free || !mmu_dir->dir_page_table)
+		return -EINVAL;
+
+	pr_debug("%s:%d destroy MMU dir (phys page 0x%llx)\n",
+		 __func__, __LINE__, mmu_dir->dir_page->phys_addr);
+
+	/* first we destroy the directory entry */
+	mmu_dir->mmu_info_cfg.pfn_page_free(mmu_dir->dir_page);
+	mmu_dir->dir_page = NULL;
+
+	/* destroy every mapping that still exists */
+	for (i = 0; i < MMU_N_TABLE; i++) {
+		if (mmu_dir->dir_page_table[i]) {
+			mmu_pgt_destroy(mmu_dir->dir_page_table[i]);
+			mmu_dir->dir_page_table[i] = NULL;
+		}
+	}
+
+	kfree(mmu_dir->dir_page_table);
+	kfree(mmu_dir);
+	return 0;
+}
+
+/*
+ * mmu_directory_get_page - Get access to the page table structure used in the
+ *			    directory (to be able to write it to registers)
+ * @mmu_dir: pointer to the mmu directory; returns NULL if mmu_dir is NULL
+ *
+ * @return the page table structure used
+ */
+struct mmu_page_cfg *mmu_directory_get_page(struct mmu_directory *mmu_dir)
+{
+	if (!mmu_dir)
+		return NULL;
+
+	return mmu_dir->dir_page;
+}
+
+static struct mmu_map *mmu_directory_map(struct mmu_directory *mmu_dir,
+					 const struct mmu_heap_alloc *dev_va,
+					 unsigned int ui_map_flags,
+					 int (*phys_iter_next)(void *arg,
+							       unsigned long long *next),
+					 void *phys_iter_arg)
+{
+	unsigned int first_dir = 0;
+	unsigned int first_pg = 0;
+	unsigned int dir_off = 0;
+	unsigned int pg_off = 0;
+	unsigned int n_entries = 0;
+	unsigned int i;
+	unsigned int d;
+	const unsigned int duplicate = PAGE_SIZE / mmu_get_page_size();
+	int res = 0;
+	struct mmu_map *neo = NULL;
+	struct mmu_page_cfg_table **dir_pgtbl = NULL;
+
+	/*
+	 * in non-UMA, updates on pages need to be done - store index of
+	 * directory entry pages to update
+	 */
+	unsigned int *to_update;
+	/*
+	 * number of pages in to_update (will be at least 1 for the first_pg to
+	 * update)
+	 */
+	unsigned int n_pgs_to_update = 0;
+	/*
+	 * to know if we also need to update the directory page (creation of new
+	 * page)
+	 */
+	unsigned char dir_modified = FALSE;
+
+	if (!mmu_dir || !dev_va || duplicate < 1)
+		return (void *)(-EINVAL);
+
+	dir_pgtbl = mmu_dir->dir_page_table;
+
+	n_entries = dev_va->alloc_size / PAGE_SIZE;
+	if (dev_va->alloc_size % MMU_PAGE_SIZE != 0 || n_entries == 0) {
+		pr_err("%s:%d invalid allocation size\n", __func__, __LINE__);
+		return (void *)(-EINVAL);
+	}
+
+	if ((ui_map_flags & MMU_FLAG_VALID) != 0) {
+		pr_err("%s:%d valid flag (0x%x) is set in the flags 0x%x\n",
+		       __func__, __LINE__, MMU_FLAG_VALID, ui_map_flags);
+		return (void *)(-EINVAL);
+	}
+
+	/*
+	 * has to be dynamically allocated because it is bigger than 1k (max
+	 * stack in the kernel)
+	 * MMU_N_TABLE is 1024 for 4096B pages, that's a 4k allocation (1 page)
+	 * - if it gets bigger, IMG_BIGALLOC may need to be used
+	 */
+	to_update = kcalloc(MMU_N_TABLE, sizeof(unsigned int), GFP_KERNEL);
+	if (!to_update)
+		return (void *)(-ENOMEM);
+
+	/* manage multiple page table mapping */
+
+	first_dir = mmu_dir_entry(dev_va->virt_addr);
+	first_pg = mmu_pg_entry(dev_va->virt_addr);
+
+	if (first_dir >= MMU_N_TABLE || first_pg >= MMU_N_PAGE) {
+		kfree(to_update);
+		return (void *)(-EINVAL);
+	}
+
+	/* verify that the pages that should be used are available */
+	dir_off = first_dir;
+	pg_off = first_pg;
+
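+	/*
+	 * e.g. with 4KB CPU pages and the default 4KB MMU pages, "duplicate"
+	 * is 1 and each CPU page yields exactly one MMU entry; with 16KB CPU
+	 * pages it would be 4, and each physical address returned by the
+	 * iterator is expanded into 4 consecutive MMU entries (offset by
+	 * d * mmu_get_page_size() in the mapping loop below).
+	 */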
+	/*
+	 * loop over the number of entries given by the CPU allocator, but the
+	 * CPU page size can be larger than the MMU page size, therefore it may
+	 * need to "duplicate" entries by creating a fake physical address
+	 */
+	for (i = 0; i < n_entries * duplicate; i++) {
+		if (pg_off >= MMU_N_PAGE) {
+			dir_off++; /* move to next directory */
+			if (dir_off >= MMU_N_TABLE) {
+				res = -EINVAL;
+				break;
+			}
+			pg_off = 0; /* using its first page */
+		}
+
+		/*
+		 * if dir_pgtbl[dir_off] == NULL not yet
+		 * allocated it means all entries are available
+		 */
+		if (dir_pgtbl[dir_off]) {
+			/*
+			 * inside a pagetable - verify that the required offset
+			 * is invalid
+			 */
+			struct mmu_page_cfg_table *tbl = dir_pgtbl[dir_off];
+			unsigned int *page_mem = (unsigned int *)tbl->page->cpu_virt_addr;
+
+			if ((page_mem[pg_off] & MMU_FLAG_VALID) != 0) {
+				pr_err("%s:%d one of the required pages is currently in use\n",
+				       __func__, __LINE__);
+				res = -EPERM;
+				break;
+			}
+		}
+		/* PageTable struct exists */
+		pg_off++;
+	} /* for all needed entries */
+
+	/* it means one entry was not invalid or not enough pages were given */
+	if (res != 0) {
+		/*
+		 * message already printed
+		 * IMG_ERROR_MEMORY_IN_USE when an entry is not invalid
+		 * IMG_ERROR_INVALID_PARAMETERS when not enough pages are given
+		 * (or too many)
+		 */
+		kfree(to_update);
+		return (void *)(unsigned long)(res);
+	}
+
+	neo = kmalloc(sizeof(*neo), GFP_KERNEL);
+	if (!neo) {
+		kfree(to_update);
+		return (void *)(-ENOMEM);
+	}
+	neo->mmu_dir = mmu_dir;
+	neo->dev_virt_addr = *dev_va;
+	neo->used_flag = ui_map_flags;
+
+	/* we now know that all pages are available */
+	dir_off = first_dir;
+	pg_off = first_pg;
+
+	to_update[n_pgs_to_update] = first_dir;
+	n_pgs_to_update++;
+
+	for (i = 0; i < n_entries; i++) {
+		unsigned long long cur_phys_addr;
+
+		if (phys_iter_next(phys_iter_arg, &cur_phys_addr) != 0) {
+			pr_err("%s:%d not enough entries in physical address array\n",
+			       __func__, __LINE__);
+			kfree(neo);
+			kfree(to_update);
+			return (void *)(-EBUSY);
+		}
+		for (d = 0; d < duplicate; d++) {
+			if (pg_off >= MMU_N_PAGE) {
+				/* move to next directory */
+				dir_off++;
+				/* using its first page */
+				pg_off = 0;
+
+				to_update[n_pgs_to_update] = dir_off;
+				n_pgs_to_update++;
+			}
+
+			/* this page table object does not exist, create it */
+			if (!dir_pgtbl[dir_off]) {
+				dir_pgtbl[dir_off] = mmu_pgt_create(mmu_dir);
+				if (IS_ERR_VALUE((unsigned long)dir_pgtbl[dir_off])) {
+					dir_pgtbl[dir_off] = NULL;
+					goto cleanup_fail;
+				}
+				/*
+				 * make this page table valid
+				 * should be dir_off
+				 */
+				mmu_dir->mmu_info_cfg.pfn_page_write(mmu_dir->dir_page,
+								     dir_off,
+								     dir_pgtbl[dir_off]->page->phys_addr,
+								     MMU_FLAG_VALID);
+				dir_modified = TRUE;
+			}
+
+			/*
+			 * map this particular page in the page table
+			 * use d*(MMU page size) to add additional entries from
+			 * the given physical address with the correct offset
+			 * for the MMU
+			 */
+			mmu_dir->mmu_info_cfg.pfn_page_write(dir_pgtbl[dir_off]->page,
+							     pg_off,
+							     cur_phys_addr + d *
+							     mmu_get_page_size(),
+							     neo->used_flag |
+							     MMU_FLAG_VALID);
+			dir_pgtbl[dir_off]->valid_entries++;
+
+			pg_off++;
+		} /* for duplicate */
+	} /* for entries */
+
+	neo->n_entries = n_entries * duplicate;
+	/* one more mapping is related to this directory */
+	mmu_dir->num_mapping++;
+
+	/* if non UMA we need to update device memory */
+	if (mmu_dir->mmu_info_cfg.pfn_page_update) {
+		while (n_pgs_to_update > 0) {
+			unsigned int idx = to_update[n_pgs_to_update - 1];
+			struct mmu_page_cfg_table *tbl = dir_pgtbl[idx];
+
+			mmu_dir->mmu_info_cfg.pfn_page_update(tbl->page);
+
n_pgs_to_update--; + } + if (dir_modified) + mmu_dir->mmu_info_cfg.pfn_page_update(mmu_dir->dir_page); + } + + kfree(to_update); + return neo; + +cleanup_fail: + pr_err("%s:%d failed to create a non-existing page table\n", __func__, __LINE__); + + /* + * invalidate all already mapped pages - + * do not destroy the created pages + */ + while (i > 1) { + if (d == 0) { + i--; + d = duplicate; + } + d--; + + if (pg_off == 0) { + pg_off = MMU_N_PAGE; + if (!dir_off) + continue; + dir_off--; + } + + pg_off--; + + /* it should have been used before */ + if (!dir_pgtbl[dir_off]) + continue; + + mmu_dir->mmu_info_cfg.pfn_page_write(dir_pgtbl[dir_off]->page, + pg_off, 0, + MMU_FLAG_INVALID); + dir_pgtbl[dir_off]->valid_entries--; + } + + kfree(neo); + kfree(to_update); + return (void *)(-ENOMEM); +} + +/* + * with sg + */ +struct sg_phys_iter { + void *sgl; + unsigned int offset; +}; + +static int sg_phys_iter_next(void *arg, unsigned long long *next) +{ + struct sg_phys_iter *iter = arg; + + if (!iter->sgl) + return -ENOENT; + + *next = sg_phys(iter->sgl) + iter->offset; /* phys_addr to dma_addr? */ + iter->offset += PAGE_SIZE; + + if (iter->offset == img_mmu_get_sgl_length(iter->sgl)) { + iter->sgl = sg_next(iter->sgl); + iter->offset = 0; + } + + return 0; +} + +/* + * mmu_directory_map_sg - Create a page table mapping for a list of physical + * pages and device virtual address + * + * @mmu_dir: directory to use for the mapping + * @phys_page_sg: sorted array of physical addresses (ascending order). The + * number of elements is dev_va->alloc_size/MMU_PAGE_SIZE + * @note This array can potentially be big, the caller may need to use vmalloc + * if running the linux kernel (e.g. mapping a 1080p NV12 is 760 entries, 6080 + * Bytes - 2 CPU pages needed, fine with kmalloc; 4k NV12 is 3038 entries, + * 24304 Bytes - 6 CPU pages needed, kmalloc would try to find 8 contiguous + * pages which may be problematic if memory is fragmented) + * @dev_va: associated device virtual address. Given structure is copied + * @map_flag: flags to apply on the page (typically 0x2 for Write Only, + * 0x4 for Read Only) - the flag should not set bit 1 as 0x1 is the + * valid flag. + * + * @warning Mapping can cause memory allocation (missing pages) - do not call + * while interrupts are disabled + * + * @return The opaque handle to the mmu_map object and result to 0 + * @return (void *) in case of an error with the following values: + * @li -EINVAL if the allocation size is not a multiple of MMU_PAGE_SIZE, + * if the given list of page table is too long or not long enough for the + * mapping or if the give flags set the invalid bit + * @li -EPERM if the virtual memory is already mapped + * @li -ENOMEM if an internal allocation failed + * @li -ENOMEM if a page creation failed + */ +struct mmu_map *mmu_directory_map_sg(struct mmu_directory *mmu_dir, + void *phys_page_sg, + const struct mmu_heap_alloc *dev_va, + unsigned int map_flag) +{ + struct sg_phys_iter arg = { phys_page_sg }; + + return mmu_directory_map(mmu_dir, dev_va, map_flag, + sg_phys_iter_next, &arg); +} + +/* + * mmu_directory_unmap - Un-map the mapped pages (invalidate their entries) and + * destroy the mapping object + * @map: pointer to the pages to un-map + * + * This does not destroy the created Page Table (even if they are becoming + * un-used) and does not change the Directory valid bits. 
+ * + * @return 0 + */ +int mmu_directory_unmap(struct mmu_map *map) +{ + unsigned int first_dir = 0; + unsigned int first_pg = 0; + unsigned int dir_offset = 0; + unsigned int pg_offset = 0; + unsigned int i; + struct mmu_directory *mmu_dir = NULL; + + /* + * in non UMA updates on pages needs to be done - store index of + * directory entry pages to update + */ + unsigned int *to_update; + unsigned int n_pgs_to_update = 0; + + if (!map || map->n_entries <= 0 || !map->mmu_dir) + return -EINVAL; + + mmu_dir = map->mmu_dir; + + /* + * has to be dynamically allocated because it is bigger than 1k (max + * stack in the kernel) + */ + to_update = kcalloc(MMU_N_TABLE, sizeof(unsigned int), GFP_KERNEL); + if (!to_update) + return -ENOMEM; + + first_dir = mmu_dir_entry(map->dev_virt_addr.virt_addr); + first_pg = mmu_pg_entry(map->dev_virt_addr.virt_addr); + + /* verify that the pages that should be used are available */ + dir_offset = first_dir; + pg_offset = first_pg; + + to_update[n_pgs_to_update] = first_dir; + n_pgs_to_update++; + + for (i = 0; i < map->n_entries; i++) { + if (pg_offset >= MMU_N_PAGE) { + /* move to next directory */ + dir_offset++; + /* using its first page */ + pg_offset = 0; + + to_update[n_pgs_to_update] = dir_offset; + n_pgs_to_update++; + } + + /* + * this page table object does not exist, something destroyed + * it while the mapping was supposed to use it + */ + if (mmu_dir->dir_page_table[dir_offset]) { + mmu_dir->mmu_info_cfg.pfn_page_write + (mmu_dir->dir_page_table[dir_offset]->page, + pg_offset, 0, + MMU_FLAG_INVALID); + mmu_dir->dir_page_table[dir_offset]->valid_entries--; + } + + pg_offset++; + } + + mmu_dir->num_mapping--; + + if (mmu_dir->mmu_info_cfg.pfn_page_update) + while (n_pgs_to_update > 0) { + unsigned int idx = to_update[n_pgs_to_update - 1]; + struct mmu_page_cfg_table *tbl = mmu_dir->dir_page_table[idx]; + + mmu_dir->mmu_info_cfg.pfn_page_update(tbl->page); + n_pgs_to_update--; + } + + /* mapping does not own the given virtual address */ + kfree(map); + kfree(to_update); + return 0; +} + +unsigned int mmu_directory_get_pagetable_entry(struct mmu_directory *mmu_dir, + unsigned long dev_virt_addr) +{ + unsigned int dir_entry = 0; + unsigned int table_entry = 0; + struct mmu_page_cfg_table *tbl; + struct mmu_page_cfg_table **dir_pgtbl = NULL; + unsigned int *page_mem; + + if (!mmu_dir) { + pr_err("mmu directory table is NULL\n"); + return 0xFFFFFF; + } + + dir_pgtbl = mmu_dir->dir_page_table; + + dir_entry = mmu_dir_entry(dev_virt_addr); + table_entry = mmu_pg_entry(dev_virt_addr); + + tbl = dir_pgtbl[dir_entry]; + if (!tbl) { + pr_err("page table entry is NULL\n"); + return 0xFFFFFF; + } + + page_mem = (unsigned int *)tbl->page->cpu_virt_addr; + +#if defined(DEBUG_DECODER_DRIVER) || defined(DEBUG_ENCODER_DRIVER) + pr_info("Page table value@dir_entry:table_entry[%d : %d] = %x\n", + dir_entry, table_entry, page_mem[table_entry]); +#endif + + return page_mem[table_entry]; +} diff --git a/drivers/staging/media/vxd/common/imgmmu.h b/drivers/staging/media/vxd/common/imgmmu.h new file mode 100644 index 000000000000..b35256d09e24 --- /dev/null +++ b/drivers/staging/media/vxd/common/imgmmu.h @@ -0,0 +1,180 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * IMG DEC MMU Library + * + * Copyright (c) Imagination Technologies Ltd. 
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Angela Stegmaier
+ *
+ * Re-written for upstreaming
+ *	Sidraya Jayagond
+ */
+
+#ifndef IMG_DEC_MMU_MMU_H
+#define IMG_DEC_MMU_MMU_H
+
+#include
+
+#ifndef MMU_PHYS_SIZE
+/* @brief MMU physical address size in bits */
+#define MMU_PHYS_SIZE 40
+#endif
+
+#ifndef MMU_VIRT_SIZE
+/* @brief MMU virtual address size in bits */
+#define MMU_VIRT_SIZE 32
+#endif
+
+#ifndef MMU_PAGE_SIZE
+/* @brief Page size in bytes */
+#define MMU_PAGE_SIZE 4096u
+#define MMU_PAGE_SHIFT 12
+#define MMU_DIR_SHIFT 22
+#endif
+
+#if MMU_VIRT_SIZE == 32
+/* @brief max number of page tables that can be stored in the directory entry */
+#define MMU_N_TABLE (MMU_PAGE_SIZE / 4u)
+/* @brief max number of page mappings in the pagetable */
+#define MMU_N_PAGE (MMU_PAGE_SIZE / 4u)
+#endif
+
+/* @brief Memory flags used to mark a page mapping as valid/invalid */
+#define MMU_FLAG_VALID 0x1
+#define MMU_FLAG_INVALID 0x0
+
+/*
+ * This type defines MMU variant.
+ */
+enum mmu_etype {
+	MMU_TYPE_NONE = 0,
+	MMU_TYPE_32BIT,
+	MMU_TYPE_36BIT,
+	MMU_TYPE_40BIT,
+	MMU_TYPE_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* @brief Page offset mask in virtual address - bottom bits */
+static const unsigned long VIRT_PAGE_OFF_MASK = ((1 << MMU_PAGE_SHIFT) - 1);
+/* @brief Page table index mask in virtual address - middle bits */
+static const unsigned long VIRT_PAGE_TBL_MASK =
+	(((1 << MMU_DIR_SHIFT) - 1) & ~(((1 << MMU_PAGE_SHIFT) - 1)));
+/* @brief Directory index mask in virtual address - high bits */
+static const unsigned long VIRT_DIR_IDX_MASK = (~((1 << MMU_DIR_SHIFT) - 1));
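+/*
+ * With the default values above (MMU_VIRT_SIZE 32, MMU_DIR_SHIFT 22,
+ * MMU_PAGE_SHIFT 12), a device virtual address therefore decomposes as:
+ * bits 31:22 directory index, bits 21:12 page table index, bits 11:0
+ * page offset. For example, 0x40001000 selects directory entry 0x100,
+ * page table entry 0x001, offset 0x000.
+ */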
+
+/*
+ * struct mmu_heap_alloc - information about a virtual mem heap allocation
+ * @virt_addr: pointer to start of the allocation
+ * @alloc_size: size in bytes
+ */
+struct mmu_heap_alloc {
+	unsigned long virt_addr;
+	unsigned long alloc_size;
+};
+
+/*
+ * struct mmu_page_cfg - mmu_page configuration
+ * @phys_addr: physical address - unsigned long long is used to support extended physical
+ *	       address on 32bit system
+ * @cpu_virt_addr: CPU virtual address pointer
+ */
+struct mmu_page_cfg {
+	unsigned long long phys_addr;
+	unsigned long cpu_virt_addr;
+};
+
+/*
+ * typedef mmu_pfn_page_alloc - page table allocation function
+ *
+ * Pointer to a function implemented by the used allocator to create 1
+ * page table (used for the MMU mapping - directory page and mapping page)
+ *
+ * Return:
+ * * A populated mmu_page_cfg structure with the result of the page alloc.
+ * * NULL if the allocation failed.
+ */
+typedef struct mmu_page_cfg *(*mmu_pfn_page_alloc) (void *);
+
+/*
+ * typedef mmu_pfn_page_free
+ * @arg1: pointer to the mmu_page_cfg that is allocated using mmu_pfn_page_alloc
+ *
+ * Pointer to a function to free the allocated page table used for MMU mapping.
+ *
+ * @return void
+ */
+typedef void (*mmu_pfn_page_free) (struct mmu_page_cfg *arg1);
+
+/*
+ * typedef mmu_pfn_page_update
+ * @arg1: pointer to the mmu_page_cfg that is allocated using mmu_pfn_page_alloc
+ *
+ * Pointer to a function to update Device memory on non Unified Memory
+ *
+ * @return void
+ */
+typedef void (*mmu_pfn_page_update) (struct mmu_page_cfg *arg1);
+
+/*
+ * typedef mmu_pfn_page_write
+ * @mmu_page: mmu page configuration to be written
+ * @offset: offset in entries (32b word)
+ * @pa_to_write: physical address to write
+ * @flags: bottom part of the entry used as flags for the MMU (including
+ *	   valid flag)
+ *
+ * Pointer to a function to write to a device address
+ *
+ * @return void
+ */
+typedef void (*mmu_pfn_page_write) (struct mmu_page_cfg *mmu_page,
+				    unsigned int offset,
+				    unsigned long long pa_to_write, unsigned int flags);
+
+/*
+ * struct mmu_info
+ * @pfn_page_alloc: function pointer for allocating a physical page used in
+ *		    MMU mapping
+ * @alloc_ctx: allocation context handler
+ * @pfn_page_free: function pointer for freeing a physical page used in
+ *		   MMU mapping
+ * @pfn_page_write: function pointer to write a physical address onto a page.
+ *		    If NULL, then internal function is used. Internal function
+ *		    assumes that MMU_PHYS_SIZE is the MMU size.
+ * @pfn_page_update: function pointer to update a physical page on device if
+ *		     non UMA.
+ */
+struct mmu_info {
+	mmu_pfn_page_alloc pfn_page_alloc;
+	void *alloc_ctx;
+	mmu_pfn_page_free pfn_page_free;
+	mmu_pfn_page_write pfn_page_write;
+	mmu_pfn_page_update pfn_page_update;
+};
+
+/*
+ * mmu_get_page_size() - Access the compilation specified page size of the
+ *			 MMU (in Bytes)
+ */
+static inline unsigned long mmu_get_page_size(void)
+{
+	return MMU_PAGE_SIZE;
+}
+
+struct mmu_directory *mmu_create_directory(const struct mmu_info *mmu_info_ops);
+int mmu_destroy_directory(struct mmu_directory *mmu_dir);
+
+struct mmu_page_cfg *mmu_directory_get_page(struct mmu_directory *mmu_dir);
+
+struct mmu_map *mmu_directory_map_sg(struct mmu_directory *mmu_dir,
+				     void *phys_page_sg,
+				     const struct mmu_heap_alloc *dev_va,
+				     unsigned int map_flag);
+int mmu_directory_unmap(struct mmu_map *map);
+
+unsigned int mmu_directory_get_pagetable_entry(struct mmu_directory *mmu_dir,
+					       unsigned long dev_virt_addr);
+
+#endif /* IMG_DEC_MMU_MMU_H */

From patchwork Wed Aug 18 14:10:11 2021
X-Patchwork-Submitter: Sidraya Jayagond
X-Patchwork-Id: 499242
From: sidraya.bj@pathpartnertech.com
To: gregkh@linuxfoundation.org, linux-staging@lists.linux.dev,
 linux-kernel@vger.kernel.org
Cc: prashanth.ka@pathpartnertech.com, praneeth@ti.com, mchehab@kernel.org,
 linux-media@vger.kernel.org, praveen.ap@pathpartnertech.com, Sidraya
Subject: [PATCH 04/30] v4l: vxd-dec: Add vxd helper library
Date: Wed, 18 Aug 2021 19:40:11 +0530
Message-Id: <20210818141037.19990-5-sidraya.bj@pathpartnertech.com>
In-Reply-To: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
References: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>

From: Sidraya

The vxd helper provides the functionality for firmware blob preparation
and loading, power management (core reset, etc.), firmware messaging,
interrupt handling, managing the hardware status, and error handling.

The vxd helper also interacts with the memory manager helper to create a
context for each stream and associate it with the mmu context. The common
mappings are done during this creation for the firmware and rendec buffers.
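For reference, a minimal sketch of how the core-properties helpers added
by this patch might be used (example_log_core_props is a hypothetical
caller; the VXD_* macros and struct vxd_core_props are the ones defined
in img_dec_common.h below):

	/* Illustrative only: decode a few fields from the HW properties. */
	static void example_log_core_props(struct device *dev,
					   const struct vxd_core_props *props)
	{
		struct vxd_core_props p = *props; /* macros take the struct */

		dev_info(dev, "core rev %u.%u.%u, %u pixel pipe(s), %u entropy pipe(s)\n",
			 VXD_MAJ_REV(p), VXD_MIN_REV(p), VXD_MAINT_REV(p),
			 VXD_NUM_PIX_PIPES(p), VXD_NUM_ENT_PIPES(p));

		if (VXD_HAS_HEVC(p, 0))
			dev_info(dev, "pipe 0 supports HEVC, max bit depth %u\n",
				 VXD_MAX_BIT_DEPTH(p, 0));
	}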
Signed-off-by: Buddy Liong
Signed-off-by: Angela Stegmaier
Signed-off-by: Sidraya
---
 MAINTAINERS                                    |    4 +
 .../media/vxd/decoder/img_dec_common.h         |  278 +++
 drivers/staging/media/vxd/decoder/vxd_pvdec.c  | 1745 +++++++++++++++++
 .../media/vxd/decoder/vxd_pvdec_priv.h         |  126 ++
 .../media/vxd/decoder/vxd_pvdec_regs.h         |  779 ++++++++
 5 files changed, 2932 insertions(+)
 create mode 100644 drivers/staging/media/vxd/decoder/img_dec_common.h
 create mode 100644 drivers/staging/media/vxd/decoder/vxd_pvdec.c
 create mode 100644 drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h
 create mode 100644 drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 150272927839..0f8154b69a91 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19542,6 +19542,10 @@ F: drivers/staging/media/vxd/common/img_mem_man.h
 F: drivers/staging/media/vxd/common/img_mem_unified.c
 F: drivers/staging/media/vxd/common/imgmmu.c
 F: drivers/staging/media/vxd/common/imgmmu.h
+F: drivers/staging/media/vxd/decoder/img_dec_common.h
+F: drivers/staging/media/vxd/decoder/vxd_pvdec.c
+F: drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h
+F: drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h

 VIDEO I2C POLLING DRIVER
 M: Matt Ranostay

diff --git a/drivers/staging/media/vxd/decoder/img_dec_common.h b/drivers/staging/media/vxd/decoder/img_dec_common.h
new file mode 100644
index 000000000000..7bb3bd6d6e78
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/img_dec_common.h
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * IMG DEC common header
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Amit Makani
+ *
+ * Re-written for upstreaming
+ *	Sidraya Jayagond
+ *	Prashanth Kumar Amai
+ */
+
+#ifndef _IMG_DEC_COMMON_H
+#define _IMG_DEC_COMMON_H
+
+#include
+
+#define VXD_MAX_PIPES 2
+#define MAX_DST_BUFFERS 32
+
+/* Helpers for parsing core properties. Based on HW registers layout. */
+#define VXD_GET_BITS(v, lb, rb, type) \
+	({ \
+		type __rb = (rb); \
+		(((v) >> (__rb)) & ((1 << ((lb) - __rb + 1)) - 1)); })
+#define VXD_GET_BIT(v, b) (((v) >> (b)) & 1)
+
+/* Get major core revision. */
+#define VXD_MAJ_REV(props) (VXD_GET_BITS((props).core_rev, 23, 16, unsigned int))
+/* Get minor core revision. */
+#define VXD_MIN_REV(props) (VXD_GET_BITS((props).core_rev, 15, 8, unsigned int))
+/* Get maint core revision. */
+#define VXD_MAINT_REV(props) (VXD_GET_BITS((props).core_rev, 7, 0, unsigned int))
+/* Get number of entropy pipes available (HEVC). */
+#define VXD_NUM_ENT_PIPES(props) ((props).pvdec_core_id & 0xF)
+/* Get number of pixel pipes available (other standards). */
+#define VXD_NUM_PIX_PIPES(props) (((props).pvdec_core_id & 0xF0) >> 4)
+/* Get number of bits used by external memory interface. */
+#define VXD_EXTRN_ADDR_WIDTH(props) ((((props).mmu_config0 & 0xF0) >> 4) + 32)
+
+/* Check whether specific standard is supported by the pixel pipe.
 */
+#define VXD_HAS_MPEG2(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 0)
+#define VXD_HAS_MPEG4(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 1)
+#define VXD_HAS_H264(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 2)
+#define VXD_HAS_VC1(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 3)
+#define VXD_HAS_WMV9(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 4)
+#define VXD_HAS_JPEG(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 5)
+#define VXD_HAS_MPEG4_DATA_PART(props, pipe) \
+	VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 6)
+#define VXD_HAS_AVS(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 7)
+#define VXD_HAS_REAL(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 8)
+#define VXD_HAS_VP6(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 9)
+#define VXD_HAS_VP8(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 10)
+#define VXD_HAS_SORENSON(props, pipe) \
+	VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 11)
+#define VXD_HAS_HEVC(props, pipe) VXD_GET_BIT(props.pixel_pipe_cfg[pipe], 22)
+
+/* Check whether specific feature is supported by the pixel pipe */
+
+/*
+ * Max picture size for HEVC still picture profile is 64k wide and/or 64k
+ * high.
+ */
+#define VXD_HAS_HEVC_64K_STILL(props, pipe) \
+	(VXD_GET_BIT((props).pixel_misc_cfg[pipe], 24))
+
+/* Pixel processing pipe index. */
+#define VXD_PIX_PIPE_ID(props, pipe) \
+	(VXD_GET_BITS((props).pixel_misc_cfg[pipe], 18, 16, unsigned int))
+
+/* Number of streams supported by the pixel pipe DMAC and shift register. */
+#define VXD_PIX_NUM_STRS(props, pipe) \
+	(VXD_GET_BITS((props).pixel_misc_cfg[pipe], 13, 12, unsigned int) + 1)
+
+/* Is scaling supported. */
+#define VXD_HAS_SCALING(props, pipe) \
+	(VXD_GET_BIT((props).pixel_misc_cfg[pipe], 9))
+
+/* Is rotation supported. */
+#define VXD_HAS_ROTATION(props, pipe) \
+	(VXD_GET_BIT((props).pixel_misc_cfg[pipe], 8))
+
+/* Are HEVC range extensions supported. */
+#define VXD_HAS_HEVC_REXT(props, pipe) \
+	(VXD_GET_BIT((props).pixel_misc_cfg[pipe], 7))
+
+/* Maximum bit depth supported by the pipe. */
+#define VXD_MAX_BIT_DEPTH(props, pipe) \
+	(VXD_GET_BITS((props).pixel_misc_cfg[pipe], 6, 4, unsigned int) + 8)
+
+/*
+ * Maximum chroma format supported by the pipe in HEVC mode.
+ * 0x1 - 4:2:0
+ * 0x2 - 4:2:2
+ * 0x3 - 4:4:4
+ */
+#define VXD_MAX_HEVC_CHROMA_FMT(props, pipe) \
+	(VXD_GET_BITS((props).pixel_misc_cfg[pipe], 3, 2, unsigned int))
+
+/*
+ * Maximum chroma format supported by the pipe in H264 mode.
+ * 0x1 - 4:2:0
+ * 0x2 - 4:2:2
+ * 0x3 - 4:4:4
+ */
+#define VXD_MAX_H264_CHROMA_FMT(props, pipe) \
+	(VXD_GET_BITS((props).pixel_misc_cfg[pipe], 1, 0, unsigned int))
+
+/*
+ * Maximum frame width and height supported in MSVDX pipeline.
+ */
+#define VXD_MAX_WIDTH_MSVDX(props) \
+	(2 << (VXD_GET_BITS((props).pixel_max_frame_cfg, 4, 0, unsigned int)))
+#define VXD_MAX_HEIGHT_MSVDX(props) \
+	(2 << (VXD_GET_BITS((props).pixel_max_frame_cfg, 12, 8, unsigned int)))
+
+/*
+ * Maximum frame width and height supported in PVDEC pipeline.
+ */ +#define VXD_MAX_WIDTH_PVDEC(props) \ + (2 << (VXD_GET_BITS((props).pixel_max_frame_cfg, 20, 16, unsigned int))) +#define VXD_MAX_HEIGHT_PVDEC(props) \ + (2 << (VXD_GET_BITS((props).pixel_max_frame_cfg, 28, 24, unsigned int))) + +#define PVDEC_COMMS_RAM_OFFSET 0x00002000 +#define PVDEC_COMMS_RAM_SIZE 0x00001000 +#define PVDEC_ENTROPY_OFFSET 0x00003000 +#define PVDEC_ENTROPY_SIZE 0x1FF +#define PVDEC_VEC_BE_OFFSET 0x00005000 +#define PVDEC_VEC_BE_SIZE 0x3FF +#define PVDEC_VEC_BE_CODEC_OFFSET 0x00005400 +#define MSVDX_VEC_OFFSET 0x00006000 +#define MSVDX_VEC_SIZE 0x7FF +#define MSVDX_CMD_OFFSET 0x00007000 + +/* + * Virtual memory heap address ranges for tiled + * and non-tiled buffers. Addresses within each + * range should be assigned to the appropriate + * buffers by the UM driver and mapped into the + * device using the corresponding KM driver ioctl. + */ +#define PVDEC_HEAP_UNTILED_START 0x00400000ul +#define PVDEC_HEAP_UNTILED_SIZE 0x3FC00000ul +#define PVDEC_HEAP_TILE512_START 0x40000000ul +#define PVDEC_HEAP_TILE512_SIZE 0x10000000ul +#define PVDEC_HEAP_TILE1024_START 0x50000000ul +#define PVDEC_HEAP_TILE1024_SIZE 0x20000000ul +#define PVDEC_HEAP_TILE2048_START 0x70000000ul +#define PVDEC_HEAP_TILE2048_SIZE 0x30000000ul +#define PVDEC_HEAP_TILE4096_START 0xA0000000ul +#define PVDEC_HEAP_TILE4096_SIZE 0x30000000ul +#define PVDEC_HEAP_BITSTREAM_START 0xD2000000ul +#define PVDEC_HEAP_BITSTREAM_SIZE 0x0A000000ul +#define PVDEC_HEAP_STREAM_START 0xE4000000ul +#define PVDEC_HEAP_STREAM_SIZE 0x1C000000ul + +/* + * Max size of the message payload, in bytes. There are 7 bits used to encode + * the message size in the firmware interface. + */ +#define VXD_MAX_PAYLOAD_SIZE (127 * sizeof(unsigned int)) +/* Max size of the input message in bytes. */ +#define VXD_MAX_INPUT_SIZE (VXD_MAX_PAYLOAD_SIZE + sizeof(struct vxd_fw_msg)) +/* + * Min size of the input message. Two words needed for message header and + * stream PTD + */ +#define VXD_MIN_INPUT_SIZE 2 +/* + * Offset of the stream PTD within message. This word has to be left null in + * submitted message, driver will fill it in with an appropriate value. + */ +#define VXD_PTD_MSG_OFFSET 1 + +/* Read flags */ +#define VXD_FW_MSG_RD_FLAGS_MASK 0xffff +/* Driver watchdog interrupted processing of the message. */ +#define VXD_FW_MSG_FLAG_DWR 0x1 +/* VXD MMU fault occurred when the message was processed. */ +#define VXD_FW_MSG_FLAG_MMU_FAULT 0x2 +/* Invalid input message, e.g. the message was too large. */ +#define VXD_FW_MSG_FLAG_INV 0x4 +/* I/O error occurred when the message was processed. */ +#define VXD_FW_MSG_FLAG_DEV_ERR 0x8 +/* + * Driver error occurred when the message was processed, e.g. failed to + * allocate memory. + */ +#define VXD_FW_MSG_FLAG_DRV_ERR 0x10 +/* + * Item was canceled, without being fully processed + * i.e. corresponding stream was destroyed. + */ +#define VXD_FW_MSG_FLAG_CANCELED 0x20 +/* Firmware internal error occurred when the message was processed */ +#define VXD_FW_MSG_FLAG_FATAL 0x40 + +/* Write flags */ +#define VXD_FW_MSG_WR_FLAGS_MASK 0xffff0000 +/* Indicates that message shall be dropped after sending it to the firmware. */ +#define VXD_FW_MSG_FLAG_DROP 0x10000 +/* + * Indicates that message shall be exclusively handled by + * the firmware/hardware. Any other pending messages are + * blocked until such message is handled. 
+ */ +#define VXD_FW_MSG_FLAG_EXCL 0x20000 + +#define VXD_MSG_SIZE(msg) (sizeof(struct vxd_fw_msg) + ((msg).payload_size)) + +/* Header included at the beginning of firmware binary */ +struct vxd_fw_hdr { + unsigned int core_size; + unsigned int blob_size; + unsigned int firmware_id; + unsigned int timestamp; +}; + +/* + * struct vxd_dev_fw - Core component will allocate a buffer for firmware. + * This structure holds the information about the firmware + * binary. + * @buf_id: The buffer id allocation + * @hdr: firmware header information + * @fw_size: The size of the fw. Set after successful firmware request. + */ +struct vxd_dev_fw { + int buf_id; + struct vxd_fw_hdr *hdr; + unsigned int fw_size; + unsigned char ready; +}; + +/* + * struct vxd_core_props - contains HW core properties + * @core_rev: Core revision based on register CR_PVDEC_CORE_REV + * @pvdec_core_id: PVDEC Core id based on register CR_PVDEC_CORE_ID + * @mmu_config0: MMU configuration 0 based on register MMU_CONFIG0 + * @mmu_config1: MMU configuration 1 based on register MMU_CONFIG1 + * @mtx_ram_size: size of the MTX RAM based on register CR_PROC_DEBUG + * @pixel_max_frame_cfg: indicates the max frame height and width for + * PVDEC pipeline and MSVDX pipeline based on register + * MAX_FRAME_CONFIG + * @pixel_pipe_cfg: pipe configuration which codecs are supported in a + * Pixel Processing Pipe, based on register + * PIXEL_PIPE_CONFIG + * @pixel_misc_cfg: Additional pipe configuration eg. supported scaling + * or rotation, based on register PIXEL_MISC_CONFIG + * @dbg_fifo_size: contains the depth of the Debug FIFO, based on + * register CR_PROC_DEBUG_FIFO_SIZE + */ +struct vxd_core_props { + unsigned int core_rev; + unsigned int pvdec_core_id; + unsigned int mmu_config0; + unsigned int mmu_config1; + unsigned int mtx_ram_size; + unsigned int pixel_max_frame_cfg; + unsigned int pixel_pipe_cfg[VXD_MAX_PIPES]; + unsigned int pixel_misc_cfg[VXD_MAX_PIPES]; + unsigned int dbg_fifo_size; +}; + +struct vxd_alloc_data { + unsigned int heap_id; /* [IN] Heap ID of allocator */ + unsigned int size; /* [IN] Size of device memory (in bytes) */ + unsigned int attributes; /* [IN] Attributes of buffer */ + unsigned int buf_id; /* [OUT] Generated buffer ID */ +}; + +struct vxd_free_data { + unsigned int buf_id; /* [IN] ID of device buffer to free */ +}; +#endif /* _IMG_DEC_COMMON_H */ diff --git a/drivers/staging/media/vxd/decoder/vxd_pvdec.c b/drivers/staging/media/vxd/decoder/vxd_pvdec.c new file mode 100644 index 000000000000..c2b59c3dd164 --- /dev/null +++ b/drivers/staging/media/vxd/decoder/vxd_pvdec.c @@ -0,0 +1,1745 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * IMG DEC PVDEC function implementations + * + * Copyright (c) Imagination Technologies Ltd. 
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Amit Makani
+ *
+ * Re-written for upstreaming
+ *	Sidraya Jayagond
+ *	Prashanth Kumar Amai
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "img_dec_common.h"
+#include "img_pvdec_test_regs.h"
+#include "img_video_bus4_mmu_regs.h"
+#include "vxd_pvdec_priv.h"
+#include "vxd_pvdec_regs.h"
+
+#ifdef PVDEC_SINGLETHREADED_IO
+static DEFINE_SPINLOCK(pvdec_irq_lock);
+static ulong pvdec_irq_flags;
+#endif
+
+static const ulong vxd_plat_poll_udelay = 100;
+
+/* This function will return the remainder and quotient */
+static inline unsigned int do_divide(unsigned long long *n, unsigned int base)
+{
+	unsigned int remainder = *n % base;
+
+	*n = *n / base;
+	return remainder;
+}
+
+/*
+ * Reads PROC_DEBUG register and provides number of MTX RAM banks
+ * and their size
+ */
+static int pvdec_get_mtx_ram_info(void __iomem *reg_base, int *bank_cnt,
+				  unsigned long *bank_size,
+				  unsigned long *last_bank_size)
+{
+	unsigned int ram_bank_count, reg;
+
+	reg = VXD_RD_REG(reg_base, PVDEC_CORE, PROC_DEBUG);
+	ram_bank_count = VXD_RD_REG_FIELD(reg, PVDEC_CORE, PROC_DEBUG, MTX_RAM_BANKS);
+	if (!ram_bank_count)
+		return -EIO;
+
+	if (bank_cnt)
+		*bank_cnt = ram_bank_count;
+
+	if (bank_size) {
+		unsigned int ram_bank_size = VXD_RD_REG_FIELD(reg, PVDEC_CORE,
+							      PROC_DEBUG, MTX_RAM_BANK_SIZE);
+		*bank_size = 1 << (ram_bank_size + 2);
+	}
+
+	if (last_bank_size) {
+		unsigned int last_bank = VXD_RD_REG_FIELD(reg, PVDEC_CORE, PROC_DEBUG,
+							  MTX_LAST_RAM_BANK_SIZE);
+		unsigned char new_representation = VXD_RD_REG_FIELD(reg,
+				PVDEC_CORE, PROC_DEBUG, MTX_RAM_NEW_REPRESENTATION);
+
+		if (new_representation) {
+			*last_bank_size = 1024 * last_bank;
+		} else {
+			*last_bank_size = 1 << (last_bank + 2);
+			if (bank_cnt && last_bank == 13 && *bank_cnt == 4) {
+				/*
+				 * VXD hardware ambiguity:
+				 * old cores confuse 120k and 128k
+				 * So assume worst case.
+				 */
+				*last_bank_size -= 0x2000;
+			}
+		}
+	}
+
+	return 0;
+}
+
+/* Provides size of MTX RAM in bytes */
+static int pvdec_get_mtx_ram_size(void __iomem *reg_base, unsigned int *ram_size)
+{
+	int bank_cnt, ret;
+	unsigned long bank_size, last_bank_size;
+
+	ret = pvdec_get_mtx_ram_info(reg_base, &bank_cnt, &bank_size, &last_bank_size);
+	if (ret)
+		return ret;
+
+	*ram_size = (bank_cnt - 1) * bank_size + last_bank_size;
+
+	return 0;
+}
+
+/* Poll for single register-based transfer to/from MTX to complete */
+static int pvdec_wait_mtx_reg_access(void __iomem *reg_base, unsigned int *mtx_fault)
+{
+	unsigned int pvdec_timeout = PVDEC_TIMEOUT_COUNTER, reg;
+
+	do {
+		/* Check MTX is OK */
+		reg = VXD_RD_REG(reg_base, MTX_CORE, MTX_FAULT0);
+		if (reg != 0) {
+			*mtx_fault = reg;
+			return -EIO;
+		}
+
+		pvdec_timeout--;
+		reg = VXD_RD_REG(reg_base, MTX_CORE, MTX_REG_READ_WRITE_REQUEST);
+	} while ((VXD_RD_REG_FIELD(reg, MTX_CORE,
+				   MTX_REG_READ_WRITE_REQUEST,
+				   MTX_DREADY) == 0) &&
+		 (pvdec_timeout != 0));
+
+	if (pvdec_timeout == 0)
+		return -EIO;
+
+	return 0;
+}
+
+static void pvdec_mtx_status_dump(void __iomem *reg_base, unsigned int *status)
+{
+	unsigned int reg;
+
+	pr_debug("%s: *** dumping status ***\n", __func__);
+
+#define READ_MTX_REG(_NAME_) \
+	do { \
+		unsigned int val; \
+		VXD_WR_REG(reg_base, MTX_CORE, \
+			   MTX_REG_READ_WRITE_REQUEST, reg); \
+		if (pvdec_wait_mtx_reg_access(reg_base, &reg)) { \
+			pr_debug("%s: " \
+				 "MTX REG RD fault: 0x%08x\n", __func__, reg); \
+			break; \
+		} \
+		val = VXD_RD_REG(reg_base, MTX_CORE, MTX_REG_READ_WRITE_DATA); \
+		if (status) \
+			*status++ = val; \
+		pr_debug("%s: " _NAME_ ": 0x%08x\n", __func__, val); \
+	} while (0)
+
+	reg = 0;
+	reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* Read */
+			       MTX_REG_READ_WRITE_REQUEST, MTX_RNW, 1);
+	reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* PC or PCX */
+			       MTX_REG_READ_WRITE_REQUEST, MTX_USPECIFIER, 5);
+	reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* PC */
+			       MTX_REG_READ_WRITE_REQUEST, MTX_RSPECIFIER, 0);
+	READ_MTX_REG("MTX PC");
+
+	reg = 0;
+	reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* Read */
+			       MTX_REG_READ_WRITE_REQUEST, MTX_RNW, 1);
+	reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* PC or PCX */
+			       MTX_REG_READ_WRITE_REQUEST, MTX_USPECIFIER, 5);
+	reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* PCX */
+			       MTX_REG_READ_WRITE_REQUEST, MTX_RSPECIFIER, 1);
+	READ_MTX_REG("MTX PCX");
+
+	reg = 0;
+	reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* Read */
+			       MTX_REG_READ_WRITE_REQUEST, MTX_RNW, 1);
+	reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* A0StP */
+			       MTX_REG_READ_WRITE_REQUEST, MTX_USPECIFIER, 3);
+	reg = VXD_WR_REG_FIELD(reg, MTX_CORE,
+			       MTX_REG_READ_WRITE_REQUEST, MTX_RSPECIFIER, 0);
+	READ_MTX_REG("MTX A0STP");
+
+	reg = 0;
+	reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* Read */
+			       MTX_REG_READ_WRITE_REQUEST, MTX_RNW, 1);
+	reg = VXD_WR_REG_FIELD(reg, MTX_CORE, /* A0FrP */
+			       MTX_REG_READ_WRITE_REQUEST, MTX_USPECIFIER, 3);
+	reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_REG_READ_WRITE_REQUEST, MTX_RSPECIFIER, 1);
+	READ_MTX_REG("MTX A0FRP");
+#undef READ_MTX_REG
+
+	pr_debug("%s: *** status dump done ***\n", __func__);
+}
+
+static void pvdec_prep_fw_upload(const void *dev,
+				 void __iomem *reg_base,
+				 struct vxd_ena_params *ena_params,
+				 unsigned char dma_channel)
+{
+	unsigned int fw_vxd_virt_addr = ena_params->fw_buf_virt_addr;
+	unsigned int vxd_ptd_addr = ena_params->ptd;
+	unsigned int reg = 0;
+	int i;
+	unsigned int flags = PVDEC_FWFLAG_FORCE_FS_FLOW |
+			     PVDEC_FWFLAG_DISABLE_GENC_FLUSHING |
+			     PVDEC_FWFLAG_DISABLE_AUTONOMOUS_RESET |
PVDEC_FWFLAG_DISABLE_IDLE_GPIO | + PVDEC_FWFLAG_ENABLE_ERROR_CONCEALMENT; + + if (ena_params->secure) + flags |= PVDEC_FWFLAG_BIG_TO_HOST_BUFFER; + +#ifdef DEBUG_DECODER_DRIVER + dev_dbg(dev, "%s: fw_virt: 0x%x, ptd: 0x%x, dma ch: %u, flags: 0x%x\n", + __func__, fw_vxd_virt_addr, vxd_ptd_addr, dma_channel, flags); +#endif + + /* Reset MTX */ + reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SOFT_RESET, MTX_RESET, 1); + VXD_WR_REG(reg_base, MTX_CORE, MTX_SOFT_RESET, reg); + /* + * NOTE: The MTX reset bit is WRITE ONLY, so we cannot + * check the reset procedure has finished, thus BEWARE to put + * any MTX_CORE* access just after this line + */ + + /* Clear COMMS RAM header */ + for (i = 0; i < PVDEC_FW_COMMS_HDR_SIZE; i++) + VXD_WR_REG_ABS(reg_base, VLR_OFFSET + i * sizeof(unsigned int), 0); + + VXD_WR_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_FLAGS_OFFSET, flags); + /* Do not wait for debug FIFO flag - set it only when requested */ + VXD_WR_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_SIGNATURE_OFFSET, + !ena_params->wait_dbg_fifo); + + /* + * Clear the bypass bits and enable extended addressing in MMU. + * Firmware depends on this configuration, so we have to set it, + * even if firmware is being uploaded via registers. + */ + reg = 0; + reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_ADDRESS_CONTROL, UPPER_ADDR_FIXED, 0); + reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_ADDRESS_CONTROL, MMU_ENA_EXT_ADDR, 1); + reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_ADDRESS_CONTROL, MMU_BYPASS, 0); + VXD_WR_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_ADDRESS_CONTROL, reg); + + /* + * Buffer device virtual address. + * This is an address of a firmware blob, firmware reads this base + * address from DMAC_SETUP register and uses to load the modules, so it + * has to be set even when uploading the FW via registers. + */ + VXD_WR_RPT_REG(reg_base, DMAC, DMAC_SETUP, fw_vxd_virt_addr, dma_channel); + + /* + * Set base address of PTD. Same as before, has to be configured even + * when uploading the firmware via regs, FW uses it to execute DMA + * before switching to stream MMU context. 
+	 */
+	VXD_WR_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_DIR_BASE_ADDR, vxd_ptd_addr);
+
+	/* Configure MMU bank index - use bank 0 */
+	VXD_WR_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_BANK_INDEX, 0);
+
+	/* Set the MTX timer divider register */
+	reg = 0;
+	reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_TIMERDIV, TIMER_EN, 1);
+	/*
+	 * Set max frequency - divide by 1 for better measurement accuracy
+	 * during the fw upload stage
+	 */
+	reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_TIMERDIV, TIMER_DIV, 0);
+	VXD_WR_REG(reg_base, MTX_CORE, MTX_SYSC_TIMERDIV, reg);
+}
+
+static int pvdec_check_fw_sig(void __iomem *reg_base)
+{
+	unsigned int fw_sig = VXD_RD_REG_ABS(reg_base, VLR_OFFSET +
+					     PVDEC_FW_SIGNATURE_OFFSET);
+
+	if (fw_sig != PVDEC_FW_READY_SIG)
+		return -EIO;
+
+	return 0;
+}
+
+static void pvdec_kick_mtx(void __iomem *reg_base)
+{
+	unsigned int reg = 0;
+
+	reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_KICKI, MTX_KICKI, 1);
+	VXD_WR_REG(reg_base, MTX_CORE, MTX_KICKI, reg);
+}
+
+static int pvdec_write_vlr(void __iomem *reg_base, const unsigned int *buf,
+			   unsigned long size_dwrds, int off_dwrds)
+{
+	unsigned int i;
+
+	if (((off_dwrds + size_dwrds) * sizeof(unsigned int)) > VLR_SIZE)
+		return -EINVAL;
+
+	for (i = 0; i < size_dwrds; i++) {
+		int off = (off_dwrds + i) * sizeof(unsigned int);
+
+		VXD_WR_REG_ABS(reg_base, (VLR_OFFSET + off), *buf);
+		buf++;
+	}
+
+	return 0;
+}
+
+static int pvdec_poll_fw_boot(void __iomem *reg_base, struct vxd_boot_poll_params *poll_params)
+{
+	unsigned int i;
+
+	for (i = 0; i < 25; i++) {
+		if (!pvdec_check_fw_sig(reg_base))
+			return 0;
+		usleep_range(100, 110);
+	}
+	for (i = 0; i < poll_params->msleep_cycles; i++) {
+		if (!pvdec_check_fw_sig(reg_base))
+			return 0;
+		msleep(100);
+	}
+	return -EIO;
+}
+
+static int pvdec_read_vlr(void __iomem *reg_base, unsigned int *buf,
+			  unsigned long size_dwrds, int off_dwrds)
+{
+	unsigned int i;
+
+	if (((off_dwrds + size_dwrds) * sizeof(unsigned int)) > VLR_SIZE)
+		return -EINVAL;
+
+	for (i = 0; i < size_dwrds; i++) {
+		int off = (off_dwrds + i) * sizeof(unsigned int);
+		*buf++ = VXD_RD_REG_ABS(reg_base, (VLR_OFFSET + off));
+	}
+
+	return 0;
+}
+
+/* Get the configuration of the ring buffer used to send messages to the MTX */
+static int pvdec_get_to_mtx_cfg(void __iomem *reg_base, unsigned long *size, int *off,
+				unsigned int *wr_idx, unsigned int *rd_idx)
+{
+	unsigned int to_mtx_cfg;
+	int to_mtx_off, ret;
+
+	ret = pvdec_check_fw_sig(reg_base);
+	if (ret)
+		return ret;
+
+	to_mtx_cfg = VXD_RD_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_MTX_BUF_CONF_OFFSET);
+
+	*size = PVDEC_FW_COM_BUF_SIZE(to_mtx_cfg);
+	to_mtx_off = PVDEC_FW_COM_BUF_OFF(to_mtx_cfg);
+
+	if (to_mtx_off % 4)
+		return -EIO;
+
+	to_mtx_off /= sizeof(unsigned int);
+	*off = to_mtx_off;
+
+	*wr_idx = VXD_RD_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_MTX_WR_IDX_OFFSET);
+	*rd_idx = VXD_RD_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_MTX_RD_IDX_OFFSET);
+
+	if ((*rd_idx >= *size) || (*wr_idx >= *size))
+		return -EIO;
+
+	return 0;
+}
+
+/* Submit a padding message to the host->MTX ring buffer */
+static int pvdec_send_pad_msg(void __iomem *reg_base)
+{
+	int ret, pad_size, to_mtx_off; /* offset in dwords */
+	unsigned int wr_idx, rd_idx; /* indices in dwords */
+	unsigned long pad_msg_size = 1, to_mtx_size; /* size in dwords */
+	const unsigned long max_msg_size = VXD_MAX_PAYLOAD_SIZE / sizeof(unsigned int);
+	unsigned int pad_msg;
+
+	ret = pvdec_get_to_mtx_cfg(reg_base, &to_mtx_size, &to_mtx_off, &wr_idx, &rd_idx);
+	if (ret)
+		return ret;
+
+	pad_size = to_mtx_size - wr_idx; /* size in dwords */
+
+	if (pad_size <= 0) {
+		VXD_WR_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_MTX_WR_IDX_OFFSET, 0);
+		return 0;
+	}
+
+	while (pad_size > 0) {
+		int cur_pad_size = pad_size > max_msg_size ?
+			max_msg_size : pad_size;
+
+		pad_msg = 0;
+		pad_msg = VXD_WR_REG_FIELD(pad_msg, PVDEC_FW, DEVA_GENMSG, MSG_SIZE, cur_pad_size);
+		pad_msg = VXD_WR_REG_FIELD(pad_msg, PVDEC_FW, DEVA_GENMSG,
+					   MSG_TYPE, PVDEC_FW_MSG_TYPE_PADDING);
+
+		ret = pvdec_write_vlr(reg_base, &pad_msg, pad_msg_size, to_mtx_off + wr_idx);
+		if (ret)
+			return ret;
+
+		wr_idx += cur_pad_size;
+
+		VXD_WR_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_MTX_WR_IDX_OFFSET, wr_idx);
+
+		pad_size -= cur_pad_size;
+
+		pvdec_kick_mtx(reg_base);
+	}
+
+	wr_idx = 0;
+	VXD_WR_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_MTX_WR_IDX_OFFSET, wr_idx);
+
+	return 0;
+}
+
+/*
+ * Check if there is enough space in comms RAM to submit a message of
+ * msg_size dwords. Submit a padding message if necessary and requested.
+ *
+ * Returns 0 if there is space for the message.
+ * Returns -EINVAL when the message is too big or empty.
+ * Returns -EIO when there was a problem accessing the HW.
+ * Returns -EBUSY when there is not enough space.
+ */
+static int pvdec_check_comms_space(void __iomem *reg_base, unsigned long msg_size,
+				   unsigned char send_padding)
+{
+	int ret, to_mtx_off; /* offset in dwords */
+	unsigned int wr_idx, rd_idx; /* indices in dwords */
+	unsigned long to_mtx_size; /* size in dwords */
+
+	ret = pvdec_get_to_mtx_cfg(reg_base, &to_mtx_size, &to_mtx_off, &wr_idx, &rd_idx);
+	if (ret)
+		return ret;
+
+	/* Enormous or empty message, won't fit */
+	if (msg_size >= to_mtx_size || !msg_size)
+		return -EINVAL;
+
+	/* Buffer does not wrap */
+	if (wr_idx >= rd_idx) {
+		/* Is there enough space to put the message? */
+		if (wr_idx + msg_size < to_mtx_size)
+			return 0;
+
+		if (!send_padding)
+			return -EBUSY;
+
+		/* Check if it's ok to send a padding message */
+		if (rd_idx == 0)
+			return -EBUSY;
+
+		/* Send a padding message */
+		ret = pvdec_send_pad_msg(reg_base);
+		if (ret)
+			return ret;
+
+		/*
+		 * And check if there's enough space at the beginning
+		 * of the buffer
+		 */
+		if (msg_size >= rd_idx)
+			return -EBUSY; /* Not enough space at the beginning */
+
+	} else { /* Buffer wraps */
+		if (wr_idx + msg_size >= rd_idx)
+			return -EBUSY; /* Not enough space!
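+					* (the ring has wrapped; only the
+					* gap up to rd_idx is free)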
*/ + } + + return 0; +} + +/* Get configuration of a ring buffer used to receive messages from the MTX */ +static int pvdec_get_to_host_cfg(void __iomem *reg_base, unsigned long *size, int *off, + unsigned int *wr_idx, unsigned int *rd_idx) +{ + unsigned int to_host_cfg; + int to_host_off, ret; + + ret = pvdec_check_fw_sig(reg_base); + if (ret) + return ret; + + to_host_cfg = VXD_RD_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_HOST_BUF_CONF_OFFSET); + + *size = PVDEC_FW_COM_BUF_SIZE(to_host_cfg); + to_host_off = PVDEC_FW_COM_BUF_OFF(to_host_cfg); + + if (to_host_off % 4) + return -EIO; + + to_host_off /= sizeof(unsigned int); + *off = to_host_off; + + *wr_idx = VXD_RD_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_HOST_WR_IDX_OFFSET); + *rd_idx = VXD_RD_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_TO_HOST_RD_IDX_OFFSET); + + if ((*rd_idx >= *size) || (*wr_idx >= *size)) + return -EIO; + + return 0; +} + +static void pvdec_select_pipe(void __iomem *reg_base, unsigned char pipe) +{ + unsigned int reg = 0; + + reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PVDEC_HOST_PIPE_SELECT, PIPE_SEL, pipe); + VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_HOST_PIPE_SELECT, reg); +} + +static void pvdec_pre_boot_setup(const void *dev, + void __iomem *reg_base, + struct vxd_ena_params *ena_params) +{ + /* Memory staller pre boot settings */ + if (ena_params->mem_staller.data) { + unsigned char size = ena_params->mem_staller.size; + + if (size == PVDEC_CORE_MEMSTALLER_ELEMENTS) { + unsigned int *data = ena_params->mem_staller.data; + +#ifdef DEBUG_DECODER_DRIVER + dev_dbg(dev, "%s: Setting up memory staller", __func__); +#endif + /* + * Data structure represents PVDEC_TEST memory staller + * registers according to TRM 5.25 section + */ + VXD_WR_REG(reg_base, PVDEC_TEST, MEM_READ_LATENCY, data[0]); + VXD_WR_REG(reg_base, PVDEC_TEST, MEM_WRITE_RESPONSE_LATENCY, data[1]); + VXD_WR_REG(reg_base, PVDEC_TEST, MEM_CTRL, data[2]); + VXD_WR_REG(reg_base, PVDEC_TEST, RAND_STL_MEM_CMD_CONFIG, data[3]); + VXD_WR_REG(reg_base, PVDEC_TEST, RAND_STL_MEM_WDATA_CONFIG, data[4]); + VXD_WR_REG(reg_base, PVDEC_TEST, RAND_STL_MEM_WRESP_CONFIG, data[5]); + VXD_WR_REG(reg_base, PVDEC_TEST, RAND_STL_MEM_RDATA_CONFIG, data[6]); + } else { + dev_warn(dev, "%s: Wrong layout of mem staller config (%u)!", + __func__, size); + } + } +} + +static void pvdec_post_boot_setup(const void *dev, + void __iomem *reg_base, + unsigned int freq_khz) +{ + int reg; + + /* + * Configure VXD MMU to use video tiles (256x16) and unique + * strides per context as default. There is currently no + * override mechanism. + */ + reg = VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL0); + reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL0, + MMU_TILING_SCHEME, 0); + reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL0, + USE_TILE_STRIDE_PER_CTX, 1); + VXD_WR_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL0, reg); + + /* + * Setup VXD MMU with the tile heap device virtual address + * ranges. 
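+	 * One min/max pair per repeat-register index 0-3 covers the TILE512,
+	 * TILE1024, TILE2048 and TILE4096 heaps.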
+	 */
+	VXD_WR_RPT_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_TILE_MIN_ADDR,
+		       PVDEC_HEAP_TILE512_START, 0);
+	VXD_WR_RPT_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_TILE_MAX_ADDR,
+		       PVDEC_HEAP_TILE512_START + PVDEC_HEAP_TILE512_SIZE - 1, 0);
+	VXD_WR_RPT_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_TILE_MIN_ADDR,
+		       PVDEC_HEAP_TILE1024_START, 1);
+	VXD_WR_RPT_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_TILE_MAX_ADDR,
+		       PVDEC_HEAP_TILE1024_START + PVDEC_HEAP_TILE1024_SIZE - 1, 1);
+	VXD_WR_RPT_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_TILE_MIN_ADDR,
+		       PVDEC_HEAP_TILE2048_START, 2);
+	VXD_WR_RPT_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_TILE_MAX_ADDR,
+		       PVDEC_HEAP_TILE2048_START + PVDEC_HEAP_TILE2048_SIZE - 1, 2);
+	VXD_WR_RPT_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_TILE_MIN_ADDR,
+		       PVDEC_HEAP_TILE4096_START, 3);
+	VXD_WR_RPT_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_TILE_MAX_ADDR,
+		       PVDEC_HEAP_TILE4096_START + PVDEC_HEAP_TILE4096_SIZE - 1, 3);
+
+	/* Disable timer */
+	VXD_WR_REG(reg_base, MTX_CORE, MTX_SYSC_TIMERDIV, 0);
+
+	reg = 0;
+	if (freq_khz)
+		reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_TIMERDIV, TIMER_DIV,
+				       PVDEC_CALC_TIMER_DIV(freq_khz / 1000));
+	else
+		reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_TIMERDIV,
+				       TIMER_DIV, PVDEC_CLK_MHZ_DEFAULT - 1);
+
+	/* Enable the MTX timer with final settings */
+	reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_TIMERDIV, TIMER_EN, 1);
+	VXD_WR_REG(reg_base, MTX_CORE, MTX_SYSC_TIMERDIV, reg);
+}
+
+static void pvdec_clock_measure(void __iomem *reg_base,
+				struct timespec64 *start_time,
+				unsigned int *start_ticks)
+{
+	local_irq_disable();
+	ktime_get_real_ts64(start_time);
+	*start_ticks = VXD_RD_REG(reg_base, MTX_CORE, MTX_SYSC_TXTIMER);
+	local_irq_enable();
+}
+
+static int pvdec_clock_calculate(const void *dev,
+				 void __iomem *reg_base,
+				 struct timespec64 *start_time,
+				 unsigned int start_ticks,
+				 unsigned int *freq_khz)
+{
+	struct timespec64 end_time, dif_time;
+	long long span_nsec = 0;
+	unsigned int stop_ticks, tot_ticks;
+
+	local_irq_disable();
+	ktime_get_real_ts64(&end_time);
+
+	stop_ticks = VXD_RD_REG(reg_base, MTX_CORE, MTX_SYSC_TXTIMER);
+	local_irq_enable();
+
+	dif_time = timespec64_sub(end_time, *start_time);
+
+	span_nsec = timespec64_to_ns(&dif_time);
+
+	/* Sanity check for the MTX timer */
+	if (!stop_ticks || stop_ticks < start_ticks) {
+		dev_err(dev, "%s: invalid ticks (0x%x -> 0x%x)\n",
+			__func__, start_ticks, stop_ticks);
+		return -EIO;
+	}
+	tot_ticks = stop_ticks - start_ticks;
+
+	if (span_nsec) {
+		unsigned long long res = (unsigned long long)tot_ticks * 1000000UL;
+
+		do_divide(&res, span_nsec);
+		*freq_khz = (unsigned int)res;
+		if (*freq_khz < 1000)
+			*freq_khz = 1000;	/* 1MHz */
+	} else {
+		dev_err(dev, "%s: generic failure!\n", __func__);
+		*freq_khz = 0;
+		return -ERANGE;
+	}
+
+	return 0;
+}
+
+static int pvdec_wait_dma_done(const void *dev,
+			       void __iomem *reg_base,
+			       unsigned long size,
+			       unsigned char dma_channel)
+{
+	unsigned int reg, timeout = PVDEC_TIMEOUT_COUNTER, prev_count, count = size;
+
+	do {
+		usleep_range(300, 310);
+		prev_count = count;
+		reg = VXD_RD_RPT_REG(reg_base, DMAC, DMAC_COUNT, dma_channel);
+		count = VXD_RD_REG_FIELD(reg, DMAC, DMAC_COUNT, CNT);
+		/* Check for dma progress */
+		if (count == prev_count) {
+			/* There could be a bus lag, protect against that */
+			timeout--;
+			if (timeout == 0) {
+				dev_err(dev, "%s FW DMA failed! 
(0x%x)\n", __func__, count); + return -EIO; + } + } else { + /* Reset timeout counter */ + timeout = PVDEC_TIMEOUT_COUNTER; + } + } while (count > 0); + + return 0; +} + +static int pvdec_start_fw_dma(const void *dev, + void __iomem *reg_base, + unsigned char dma_channel, + unsigned long fw_buf_size, + unsigned int *freq_khz) +{ + unsigned int reg = 0; + int ret = 0; + + fw_buf_size = fw_buf_size / sizeof(unsigned int); +#ifdef DEBUG_DECODER_DRIVER + dev_dbg(dev, "%s: dma FW upload, fw_buf_size: %zu (dwords)\n", __func__, fw_buf_size); +#endif + + pvdec_select_pipe(reg_base, 1); + + reg = VXD_RD_REG(reg_base, PVDEC_PIXEL, PIXEL_MAN_CLK_ENA); + reg = VXD_WR_REG_FIELD(reg, PVDEC_PIXEL, PIXEL_MAN_CLK_ENA, PIXEL_DMAC_MAN_CLK_ENA, 1); + reg = VXD_WR_REG_FIELD(reg, PVDEC_PIXEL, PIXEL_MAN_CLK_ENA, PIXEL_REG_MAN_CLK_ENA, 1); + VXD_WR_REG(reg_base, PVDEC_PIXEL, PIXEL_MAN_CLK_ENA, reg); + + /* + * Setup MTX to receive DMA + * DMA transfers to/from the MTX have to be 32-bit aligned and + * in multiples of 32 bits + */ + VXD_WR_REG(reg_base, MTX_CORE, MTX_SYSC_CDMAA, 0); /* MTX: 0x80900000 */ + + reg = 0; + /* Burst size in multiples of 64 bits (allowed values are 2 or 4) */ + reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_CDMAC, BURSTSIZE, 0); + /* 0 - write to MTX memory */ + reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_CDMAC, RNW, 0); + /* Begin transfer */ + reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_CDMAC, ENABLE, 1); + /* Transfer size */ + reg = VXD_WR_REG_FIELD(reg, MTX_CORE, MTX_SYSC_CDMAC, LENGTH, + ((fw_buf_size + 7) & (~7)) + 8); + VXD_WR_REG(reg_base, MTX_CORE, MTX_SYSC_CDMAC, reg); + + /* Boot MTX once transfer is done */ + reg = 0; + reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PROC_DMAC_CONTROL, + BOOT_ON_DMA_CH0, 1); + VXD_WR_REG(reg_base, PVDEC_CORE, PROC_DMAC_CONTROL, reg); + + /* Toggle channel 0 usage between MTX and other PVDEC peripherals */ + reg = 0; + reg = VXD_WR_REG_FIELD(reg, PVDEC_PIXEL, PIXEL_CONTROL_0, + DMAC_CH_SEL_FOR_MTX, 0); + VXD_WR_REG(reg_base, PVDEC_PIXEL, PIXEL_CONTROL_0, reg); + + /* Reset DMA channel first */ + reg = 0; + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, SRST, 1); + VXD_WR_RPT_REG(reg_base, DMAC, DMAC_COUNT, reg, dma_channel); + + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, LIST_EN, 0); + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, CNT, 0); + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, EN, 0); + VXD_WR_RPT_REG(reg_base, DMAC, DMAC_COUNT, reg, dma_channel); + + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, SRST, 0); + VXD_WR_RPT_REG(reg_base, DMAC, DMAC_COUNT, reg, dma_channel); + + /* + * Setup a Simple DMA for Ch0 + * Specify the holdover period to use for the channel + */ + reg = 0; + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_PER_HOLD, PER_HOLD, 7); + VXD_WR_RPT_REG(reg_base, DMAC, DMAC_PER_HOLD, reg, dma_channel); + + /* Clear the DMAC Stats */ + VXD_WR_RPT_REG(reg_base, DMAC, DMAC_IRQ_STAT, 0, dma_channel); + + reg = 0; + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_PERIPH_ADDR, ADDR, + MTX_CORE_MTX_SYSC_CDMAT_OFFSET); + VXD_WR_RPT_REG(reg_base, DMAC, DMAC_PERIPH_ADDR, reg, dma_channel); + + /* Clear peripheral register address */ + reg = 0; + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_PERIPH, ACC_DEL, 0); + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_PERIPH, INCR, DMAC_INCR_OFF); + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_PERIPH, BURST, DMAC_BURST_1); + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_PERIPH, EXT_BURST, DMAC_EXT_BURST_0); + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_PERIPH, EXT_SA, 0); + VXD_WR_RPT_REG(reg_base, DMAC, DMAC_PERIPH, reg, dma_channel); + 
+ /* + * Now start the transfer by setting the list enable bit in + * the count register + */ + reg = 0; + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, TRANSFER_IEN, 1); + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, PW, DMAC_PWIDTH_32_BIT); + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, DIR, DMAC_MEM_TO_VXD); + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, PI, DMAC_INCR_ON); + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, LIST_FIN_CTL, 0); + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, LIST_EN, 0); + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, ENABLE_2D_MODE, 0); + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, CNT, fw_buf_size); + VXD_WR_RPT_REG(reg_base, DMAC, DMAC_COUNT, reg, dma_channel); + + reg = VXD_WR_REG_FIELD(reg, DMAC, DMAC_COUNT, EN, 1); + VXD_WR_RPT_REG(reg_base, DMAC, DMAC_COUNT, reg, dma_channel); + + /* NOTE: The MTX timer starts once DMA boot is triggered */ + { + struct timespec64 host_time; + unsigned int mtx_time; + + pvdec_clock_measure(reg_base, &host_time, &mtx_time); + + ret = pvdec_wait_dma_done(dev, reg_base, fw_buf_size, dma_channel); + if (!ret) { + if (pvdec_clock_calculate(dev, reg_base, &host_time, mtx_time, + freq_khz) < 0) + dev_dbg(dev, "%s: measure info not available!\n", __func__); + } + } + + return ret; +} + +static int pvdec_set_clocks(void __iomem *reg_base, unsigned int req_clocks) +{ + unsigned int clocks = 0, reg; + unsigned int pvdec_timeout; + + /* Turn on core clocks only */ + clocks = VXD_WR_REG_FIELD(clocks, PVDEC_CORE, PVDEC_MAN_CLK_ENA, + PVDEC_REG_MAN_CLK_ENA, 1); + clocks = VXD_WR_REG_FIELD(clocks, PVDEC_CORE, PVDEC_MAN_CLK_ENA, CORE_MAN_CLK_ENA, 1); + + /* Wait until core clocks set */ + pvdec_timeout = PVDEC_TIMEOUT_COUNTER; + do { + VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_MAN_CLK_ENA, clocks); + udelay(vxd_plat_poll_udelay); + reg = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_MAN_CLK_ENA); + pvdec_timeout--; + } while (reg != clocks && pvdec_timeout != 0); + + if (pvdec_timeout == 0) + return -EIO; + + /* Write requested clocks */ + VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_MAN_CLK_ENA, req_clocks); + + return 0; +} + +static int pvdec_enable_clocks(void __iomem *reg_base) +{ + unsigned int clocks = 0; + + clocks = VXD_WR_REG_FIELD(clocks, PVDEC_CORE, PVDEC_MAN_CLK_ENA, + PVDEC_REG_MAN_CLK_ENA, 1); + clocks = VXD_WR_REG_FIELD(clocks, PVDEC_CORE, PVDEC_MAN_CLK_ENA, + CORE_MAN_CLK_ENA, 1); + clocks = VXD_WR_REG_FIELD(clocks, PVDEC_CORE, PVDEC_MAN_CLK_ENA, + MEM_MAN_CLK_ENA, 1); + clocks = VXD_WR_REG_FIELD(clocks, PVDEC_CORE, PVDEC_MAN_CLK_ENA, + PROC_MAN_CLK_ENA, 1); + clocks = VXD_WR_REG_FIELD(clocks, PVDEC_CORE, PVDEC_MAN_CLK_ENA, + PIXEL_PROC_MAN_CLK_ENA, 1); + + return pvdec_set_clocks(reg_base, clocks); +} + +static int pvdec_disable_clocks(void __iomem *reg_base) +{ + return pvdec_set_clocks(reg_base, 0); +} + +static void pvdec_ena_mtx_int(void __iomem *reg_base) +{ + unsigned int reg = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_HOST_INT_ENA); + + reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PVDEC_INT_STAT, HOST_PROC_IRQ, 1); + reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PVDEC_INT_STAT, HOST_MMU_FAULT_IRQ, 1); + VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_HOST_INT_ENA, reg); +} + +static void pvdec_check_mmu_requests(void __iomem *reg_base, + unsigned int mmu_checks, + unsigned int max_attempts) +{ + unsigned int reg, i, checks = 0; + + for (i = 0; i < max_attempts; i++) { + reg = VXD_RD_REG(reg_base, + IMG_VIDEO_BUS4_MMU, MMU_MEM_REQ); + reg = VXD_RD_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_MEM_REQ, TAG_OUTSTANDING); + if (reg) { + 
udelay(vxd_plat_poll_udelay); + continue; + } + + /* Read READ_WORDS_OUTSTANDING */ + reg = VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_MEM_EXT_OUTSTANDING); + reg = VXD_RD_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_MEM_EXT_OUTSTANDING, + READ_WORDS); + if (!reg) { + checks++; + if (checks == mmu_checks) + break; + } else { /* Reset the counter and continue */ + checks = 0; + } + } + + if (checks != mmu_checks) + pr_warn("Checking for MMU outstanding requests failed!\n"); +} + +static int pvdec_reset(void __iomem *reg_base, unsigned char skip_pipe_clocks) +{ + unsigned int reg = 0; + unsigned char pipe, num_ent_pipes, num_pix_pipes; + unsigned int core_id, pvdec_timeout; + + core_id = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_CORE_ID); + + num_ent_pipes = VXD_RD_REG_FIELD(core_id, PVDEC_CORE, PVDEC_CORE_ID, ENT_PIPES); + num_pix_pipes = VXD_RD_REG_FIELD(core_id, PVDEC_CORE, PVDEC_CORE_ID, PIX_PIPES); + + if (num_pix_pipes == 0 || num_pix_pipes > VXD_MAX_PIPES) + return -EINVAL; + + /* Clear interrupt enabled flag */ + VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_HOST_INT_ENA, 0); + + /* Clear any pending interrupt flags */ + reg = 0; + reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PVDEC_INT_CLEAR, IRQ_CLEAR, 0xFFFF); + VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_INT_CLEAR, reg); + + /* Turn all clocks on - don't touch reserved bits! */ + pvdec_set_clocks(reg_base, 0xFFFF0113); + + if (!skip_pipe_clocks) { + for (pipe = 1; pipe <= num_pix_pipes; pipe++) { + pvdec_select_pipe(reg_base, pipe); + /* Turn all available clocks on - skip reserved bits! */ + VXD_WR_REG(reg_base, PVDEC_PIXEL, PIXEL_MAN_CLK_ENA, 0xFFBF0FFF); + } + + for (pipe = 1; pipe <= num_ent_pipes; pipe++) { + pvdec_select_pipe(reg_base, pipe); + /* Turn all available clocks on - skip reserved bits! */ + VXD_WR_REG(reg_base, PVDEC_ENTROPY, ENTROPY_MAN_CLK_ENA, 0x5); + } + } + + /* 1st MMU outstanding requests check */ + pvdec_check_mmu_requests(reg_base, 1000, 2000); + + /* Make sure MMU is not under reset MMU_SOFT_RESET -> 0 */ + pvdec_timeout = PVDEC_TIMEOUT_COUNTER; + do { + reg = VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1); + reg = VXD_RD_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_SOFT_RESET); + udelay(vxd_plat_poll_udelay); + pvdec_timeout--; + } while (reg != 0 && pvdec_timeout != 0); + + if (pvdec_timeout == 0) { + pr_err("Waiting for MMU soft reset(1) timed out!\n"); + pvdec_mtx_status_dump(reg_base, NULL); + } + + /* Write 1 to MMU_PAUSE_SET */ + reg = 0; + reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_PAUSE_SET, 1); + VXD_WR_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, reg); + + /* 2nd MMU outstanding requests check */ + pvdec_check_mmu_requests(reg_base, 100, 1000); + + /* Issue software reset for all but MMU/core */ + reg = 0; + reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PVDEC_SOFT_RST, PVDEC_PIXEL_PROC_SOFT_RST, 0xFF); + reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PVDEC_SOFT_RST, PVDEC_ENTROPY_SOFT_RST, 0xFF); + VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_SOFT_RST, reg); + + VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_SOFT_RST); + VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_SOFT_RST, 0); + + /* Write 1 to MMU_PAUSE_CLEAR in MMU_CONTROL1 reg */ + reg = 0; + reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_PAUSE_CLEAR, 1); + VXD_WR_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, reg); + + /* Confirm MMU_PAUSE_SET is cleared */ + pvdec_timeout = PVDEC_TIMEOUT_COUNTER; + do { + reg = VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1); + reg = VXD_RD_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, 
MMU_PAUSE_SET); + udelay(vxd_plat_poll_udelay); + pvdec_timeout--; + } while (reg != 0 && pvdec_timeout != 0); + + if (pvdec_timeout == 0) { + pr_err("Waiting for MMU pause clear timed out!\n"); + pvdec_mtx_status_dump(reg_base, NULL); + return -EIO; + } + + /* Issue software reset for MMU */ + reg = 0; + reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_SOFT_RESET, 1); + VXD_WR_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, reg); + + /* Wait until MMU_SOFT_RESET -> 0 */ + pvdec_timeout = PVDEC_TIMEOUT_COUNTER; + do { + reg = VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1); + reg = VXD_RD_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_SOFT_RESET); + udelay(vxd_plat_poll_udelay); + pvdec_timeout--; + } while (reg != 0 && pvdec_timeout != 0); + + if (pvdec_timeout == 0) { + pr_err("Waiting for MMU soft reset(2) timed out!\n"); + pvdec_mtx_status_dump(reg_base, NULL); + } + + /* Issue software reset for entire PVDEC */ + reg = 0; + reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PVDEC_SOFT_RST, PVDEC_SOFT_RST, 0x1); + VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_SOFT_RST, reg); + + /* Waiting for reset bit to be cleared */ + pvdec_timeout = PVDEC_TIMEOUT_COUNTER; + do { + reg = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_SOFT_RST); + reg = VXD_RD_REG_FIELD(reg, PVDEC_CORE, PVDEC_SOFT_RST, PVDEC_SOFT_RST); + udelay(vxd_plat_poll_udelay); + pvdec_timeout--; + } while (reg != 0 && pvdec_timeout != 0); + + if (pvdec_timeout == 0) { + pr_err("Waiting for PVDEC soft reset timed out!\n"); + pvdec_mtx_status_dump(reg_base, NULL); + return -EIO; + } + + /* Clear interrupt enabled flag */ + VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_HOST_INT_ENA, 0); + + /* Clear any pending interrupt flags */ + reg = 0; + reg = VXD_WR_REG_FIELD(reg, PVDEC_CORE, PVDEC_INT_CLEAR, IRQ_CLEAR, 0xFFFF); + VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_INT_CLEAR, reg); + return 0; +} + +static int pvdec_get_properties(void __iomem *reg_base, + struct vxd_core_props *props) +{ + unsigned int major, minor, maint, group_id, core_id; + unsigned char num_pix_pipes, pipe; + + if (!props) + return -EINVAL; + + /* PVDEC Core Revision Information */ + props->core_rev = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_CORE_REV); + major = VXD_RD_REG_FIELD(props->core_rev, PVDEC_CORE, PVDEC_CORE_REV, PVDEC_MAJOR_REV); + minor = VXD_RD_REG_FIELD(props->core_rev, PVDEC_CORE, PVDEC_CORE_REV, PVDEC_MINOR_REV); + maint = VXD_RD_REG_FIELD(props->core_rev, PVDEC_CORE, PVDEC_CORE_REV, PVDEC_MAINT_REV); + + /* Core ID */ + props->pvdec_core_id = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_CORE_ID); + group_id = VXD_RD_REG_FIELD(props->pvdec_core_id, PVDEC_CORE, PVDEC_CORE_ID, GROUP_ID); + core_id = VXD_RD_REG_FIELD(props->pvdec_core_id, PVDEC_CORE, PVDEC_CORE_ID, CORE_ID); + + /* Ensure that the core is IMG Video Decoder (PVDEC). 
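+	 * GROUP_ID and CORE_ID in the PVDEC_CORE_ID register must both read 3.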
 */
+	if (group_id != 3 || core_id != 3) {
+		pr_err("Not a PVDEC core (revision %d.%d.%d)!\n", major, minor, maint);
+		return -EIO;
+	}
+
+	props->mmu_config0 = VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONFIG0);
+	props->mmu_config1 = VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONFIG1);
+
+	num_pix_pipes = VXD_NUM_PIX_PIPES(*props);
+
+	if (unlikely(num_pix_pipes > VXD_MAX_PIPES)) {
+		pr_warn("Too many pipes detected!\n");
+		num_pix_pipes = VXD_MAX_PIPES;
+	}
+
+	for (pipe = 1; pipe <= num_pix_pipes; ++pipe) {
+		pvdec_select_pipe(reg_base, pipe);
+		if (pipe < VXD_MAX_PIPES) {
+			props->pixel_pipe_cfg[pipe - 1] =
+				VXD_RD_REG(reg_base, PVDEC_PIXEL, PIXEL_PIPE_CONFIG);
+			props->pixel_misc_cfg[pipe - 1] =
+				VXD_RD_REG(reg_base, PVDEC_PIXEL, PIXEL_MISC_CONFIG);
+			/*
+			 * Detect pipe access problems.
+			 * Pipe config shall always indicate a non-zero value
+			 * (at least one standard supported)!
+			 */
+			if (!props->pixel_pipe_cfg[pipe - 1])
+				pr_warn("Pipe config info is wrong!\n");
+		}
+	}
+
+	pvdec_select_pipe(reg_base, 1);
+	props->pixel_max_frame_cfg = VXD_RD_REG(reg_base, PVDEC_PIXEL, MAX_FRAME_CONFIG);
+
+	{
+		unsigned int fifo_ctrl = VXD_RD_REG(reg_base, PVDEC_CORE, PROC_DBG_FIFO_CTRL0);
+
+		props->dbg_fifo_size = VXD_RD_REG_FIELD(fifo_ctrl,
+							PVDEC_CORE,
+							PROC_DBG_FIFO_CTRL0,
+							PROC_DBG_FIFO_SIZE);
+	}
+
+	return 0;
+}
+
+int vxd_pvdec_init(const void *dev, void __iomem *reg_base)
+{
+	int ret;
+
+#ifdef DEBUG_DECODER_DRIVER
+	dev_dbg(dev, "%s: trying to reset VXD, reg base: %p\n", __func__, reg_base);
+#endif
+
+	ret = pvdec_enable_clocks(reg_base);
+	if (ret) {
+		dev_err(dev, "%s: failed to enable clocks!\n", __func__);
+		return ret;
+	}
+
+	ret = pvdec_reset(reg_base, FALSE);
+	if (ret) {
+		dev_err(dev, "%s: VXD reset failed!\n", __func__);
+		return ret;
+	}
+
+	pvdec_ena_mtx_int(reg_base);
+
+	return 0;
+}
+
+/* Send a message of msg_size dwords to the MTX */
+int vxd_pvdec_send_msg(const void *dev,
+		       void __iomem *reg_base,
+		       unsigned int *msg,
+		       unsigned long msg_size,
+		       unsigned short msg_id,
+		       struct vxd_dev *ctx)
+{
+	int ret, to_mtx_off; /* offset in dwords */
+	unsigned int wr_idx, rd_idx; /* indices in dwords */
+	unsigned long to_mtx_size; /* size in dwords */
+	unsigned int msg_wrd;
+	struct timespec64 time;
+	static int cnt;
+
+	ktime_get_real_ts64(&time);
+
+	ctx->time_fw[cnt].start_time = timespec64_to_ns(&time);
+	ctx->time_fw[cnt].id = msg_id;
+	cnt++;
+
+	if (cnt >= ARRAY_SIZE(ctx->time_fw))
+		cnt = 0;
+
+	ret = pvdec_get_to_mtx_cfg(reg_base, &to_mtx_size, &to_mtx_off, &wr_idx, &rd_idx);
+	if (ret) {
+		dev_err(dev, "%s: failed to obtain mtx ring buffer config!\n", __func__);
+		return ret;
+	}
+
+	/* Populate the size and id fields in the message header */
+	msg_wrd = VXD_RD_MSG_WRD(msg, PVDEC_FW, DEVA_GENMSG);
+	msg_wrd = VXD_WR_REG_FIELD(msg_wrd, PVDEC_FW, DEVA_GENMSG, MSG_SIZE, msg_size);
+	msg_wrd = VXD_WR_REG_FIELD(msg_wrd, PVDEC_FW, DEVA_GENMSG, MSG_ID, msg_id);
+	VXD_WR_MSG_WRD(msg, PVDEC_FW, DEVA_GENMSG, msg_wrd);
+
+#ifdef DEBUG_DECODER_DRIVER
+	dev_dbg(dev, "%s: [msg out] size: %lu, id: 0x%x, type: 0x%x\n", __func__, msg_size, msg_id,
+		VXD_RD_REG_FIELD(msg_wrd, PVDEC_FW, DEVA_GENMSG, MSG_TYPE));
+	dev_dbg(dev, "%s: to_mtx: (%lu @ %d), wr_idx: %d, rd_idx: %d\n",
+		__func__, to_mtx_size, to_mtx_off, wr_idx, rd_idx);
+#endif
+
+	ret = pvdec_check_comms_space(reg_base, msg_size, FALSE);
+	if (ret) {
+		dev_err(dev, "%s: invalid message or not enough space (%d)!\n", __func__, ret);
+		return ret;
+	}
+
+	ret = pvdec_write_vlr(reg_base, msg, msg_size, to_mtx_off + wr_idx);
+	if (ret) {
+		dev_err(dev, "%s: failed to write msg to vlr!\n", __func__);
+		return ret;
+	}
+
+	wr_idx += msg_size;
+	if (wr_idx == to_mtx_size)
+		wr_idx = 0;
+	VXD_WR_REG_ABS(reg_base, VLR_OFFSET +
+		       PVDEC_FW_TO_MTX_WR_IDX_OFFSET, wr_idx);
+
+	pvdec_kick_mtx(reg_base);
+
+	return 0;
+}
+
+/* Fetch the size (in dwords) of a message pending from the MTX */
+int vxd_pvdec_pend_msg_info(const void *dev, void __iomem *reg_base,
+			    unsigned long *size,
+			    unsigned short *msg_id,
+			    unsigned char *not_last_msg)
+{
+	int ret, to_host_off; /* offset in dwords */
+	unsigned int wr_idx, rd_idx; /* indices in dwords */
+	unsigned long to_host_size; /* size in dwords */
+	unsigned int val = 0;
+
+	ret = pvdec_get_to_host_cfg(reg_base, &to_host_size, &to_host_off, &wr_idx, &rd_idx);
+	if (ret) {
+		dev_err(dev, "%s: failed to obtain host ring buffer config!\n", __func__);
+		return ret;
+	}
+
+#ifdef DEBUG_DECODER_DRIVER
+	dev_dbg(dev, "%s: to host: (%lu @ %d), wr: %u, rd: %u\n", __func__,
+		to_host_size, to_host_off, wr_idx, rd_idx);
+#endif
+
+	if (wr_idx == rd_idx) {
+		*size = 0;
+		*msg_id = 0;
+		return 0;
+	}
+
+	ret = pvdec_read_vlr(reg_base, &val, 1, to_host_off + rd_idx);
+	if (ret) {
+		dev_err(dev, "%s: failed to read first word!\n", __func__);
+		return ret;
+	}
+
+	*size = VXD_RD_REG_FIELD(val, PVDEC_FW, DEVA_GENMSG, MSG_SIZE);
+	*msg_id = VXD_RD_REG_FIELD(val, PVDEC_FW, DEVA_GENMSG, MSG_ID);
+	*not_last_msg = VXD_RD_REG_FIELD(val, PVDEC_FW, DEVA_GENMSG, NOT_LAST_MSG);
+
+#ifdef DEBUG_DECODER_DRIVER
+	dev_dbg(dev, "%s: [msg in] rd_idx: %d, size: %lu, id: 0x%04x, type: 0x%x\n",
+		__func__, rd_idx, *size, *msg_id,
+		VXD_RD_REG_FIELD(val, PVDEC_FW, DEVA_GENMSG, MSG_TYPE));
+#endif
+
+	return 0;
+}
+
+/*
+ * Receive a message from the MTX and place it in a buffer of buf_size
+ * dwords. If the provided buffer is too small to hold the message, only part
+ * of it will be placed in the buffer, but the ring buffer read index will be
+ * moved so that the message is no longer available.
+ */
+int vxd_pvdec_recv_msg(const void *dev, void __iomem *reg_base,
+		       unsigned int *buf,
+		       unsigned long buf_size,
+		       struct vxd_dev *vxd)
+{
+	int ret, to_host_off; /* offset in dwords */
+	unsigned int wr_idx, rd_idx; /* indices in dwords */
+	unsigned long to_host_size, msg_size, to_read; /* sizes in dwords */
+	unsigned int val = 0;
+	struct timespec64 time;
+	unsigned short msg_id;
+	int loop;
+
+	ret = pvdec_get_to_host_cfg(reg_base, &to_host_size,
+				    &to_host_off, &wr_idx, &rd_idx);
+	if (ret) {
+		dev_err(dev, "%s: failed to obtain host ring buffer config!\n", __func__);
+		return ret;
+	}
+
+#ifdef DEBUG_DECODER_DRIVER
+	dev_dbg(dev, "%s: to host: (%lu @ %d), wr: %u, rd: %u\n", __func__,
+		to_host_size, to_host_off, wr_idx, rd_idx);
+#endif
+
+	/* Obtain the message size */
+	ret = pvdec_read_vlr(reg_base, &val, 1, to_host_off + rd_idx);
+	if (ret) {
+		dev_err(dev, "%s: failed to read first word!\n", __func__);
+		return ret;
+	}
+	msg_size = VXD_RD_REG_FIELD(val, PVDEC_FW, DEVA_GENMSG, MSG_SIZE);
+
+	to_read = (msg_size > buf_size) ? buf_size : msg_size;
+
+	/* Does the message wrap? */
+	if (to_read + rd_idx > to_host_size) {
+		unsigned long chunk_size = to_host_size - rd_idx;
+
+		ret = pvdec_read_vlr(reg_base, buf, chunk_size, to_host_off + rd_idx);
+		if (ret) {
+			dev_err(dev, "%s: failed to read chunk before wrap!\n", __func__);
+			return ret;
+		}
+		to_read -= chunk_size;
+		buf += chunk_size;
+		rd_idx = 0;
+		msg_size -= chunk_size;
+	}
+
+	/*
+	 * If the message wrapped, read the second chunk.
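+	 * (rd_idx has been reset to 0 and msg_size reduced by the chunk
+	 * already read.)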
+ * If it didn't, read first and only chunk + */ + ret = pvdec_read_vlr(reg_base, buf, to_read, to_host_off + rd_idx); + if (ret) { + dev_err(dev, "%s: failed to read message from vlr!\n", __func__); + return ret; + } + + /* Update read index in the ring buffer */ + rd_idx = (rd_idx + msg_size) % to_host_size; + VXD_WR_REG_ABS(reg_base, VLR_OFFSET + + PVDEC_FW_TO_HOST_RD_IDX_OFFSET, rd_idx); + + msg_id = VXD_RD_REG_FIELD(val, PVDEC_FW, DEVA_GENMSG, MSG_ID); + + ktime_get_real_ts64(&time); + for (loop = 0; loop < ARRAY_SIZE(vxd->time_fw); loop++) { + if (vxd->time_fw[loop].id == msg_id) { + vxd->time_fw[loop].end_time = + timespec64_to_ns((const struct timespec64 *)&time); +#ifdef DEBUG_DECODER_DRIVER + dev_info(dev, "fw decode time is %llu us for msg_id x%0x\n", + div_s64(vxd->time_fw[loop].end_time - + vxd->time_fw[loop].start_time, 1000), msg_id); +#endif + break; + } + } + + if (loop == ARRAY_SIZE(vxd->time_fw)) + dev_err(dev, "fw decode time for msg_id x%0x is not measured\n", msg_id); + + return 0; +} + +int vxd_pvdec_check_fw_status(const void *dev, void __iomem *reg_base) +{ + int ret; + unsigned int val = 0; + + /* Obtain current fw status */ + ret = pvdec_read_vlr(reg_base, &val, 1, PVDEC_FW_STATUS_OFFSET); + if (ret) { + dev_err(dev, "%s: failed to read fw status!\n", __func__); + return ret; + } + + /* Check for fatal condition */ + if (val == PVDEC_FW_STATUS_PANIC || val == PVDEC_FW_STATUS_ASSERT || + val == PVDEC_FW_STATUS_SO) + return -1; + + return 0; +} + +static int pvdec_send_init_msg(const void *dev, + void __iomem *reg_base, + struct vxd_ena_params *ena_params) +{ + unsigned short msg_id = 0; + unsigned int msg[PVDEC_FW_DEVA_INIT_MSG_WRDS] = { 0 }, msg_wrd = 0; + struct vxd_dev *vxd; + int ret; + +#ifdef DEBUG_DECODER_DRIVER + dev_dbg(dev, "%s: rendec: %d@0x%x, crc: 0x%x\n", __func__, + ena_params->rendec_size, ena_params->rendec_addr, ena_params->crc); +#endif + + vxd = kzalloc(sizeof(*vxd), GFP_KERNEL); + if (!vxd) + return -1; + + /* message type */ + msg_wrd = VXD_WR_REG_FIELD(msg_wrd, PVDEC_FW, DEVA_GENMSG, MSG_TYPE, + PVDEC_FW_MSG_TYPE_INIT); + VXD_WR_MSG_WRD(msg, PVDEC_FW, DEVA_GENMSG, msg_wrd); + + /* rendec address */ + VXD_WR_MSG_WRD(msg, PVDEC_FW_DEVA_INIT, RENDEC_ADDR0, ena_params->rendec_addr); + + /* rendec size */ + msg_wrd = 0; + msg_wrd = VXD_WR_REG_FIELD(msg_wrd, PVDEC_FW, DEVA_INIT, RENDEC_SIZE0, + ena_params->rendec_size); + VXD_WR_MSG_WRD(msg, PVDEC_FW_DEVA_INIT, RENDEC_SIZE0, msg_wrd); + + /* HEVC configuration */ + msg_wrd = 0; + msg_wrd = VXD_WR_REG_FIELD(msg_wrd, PVDEC_FW, DEVA_INIT, + HEVC_CFG_MAX_H_FOR_PIPE_WAIT, 0xFFFF); + VXD_WR_MSG_WRD(msg, PVDEC_FW_DEVA_INIT, HEVC_CFG, msg_wrd); + + /* signature select */ + VXD_WR_MSG_WRD(msg, PVDEC_FW_DEVA_INIT, SIG_SELECT, ena_params->crc); + + /* partial frame notification timer divider */ + msg_wrd = 0; + msg_wrd = VXD_WR_REG_FIELD(msg_wrd, PVDEC_FW, DEVA_INIT, PFNT_DIV, PVDEC_PFNT_DIV); + VXD_WR_MSG_WRD(msg, PVDEC_FW_DEVA_INIT, PFNT_DIV, msg_wrd); + + /* firmware watchdog timeout value */ + msg_wrd = VXD_WR_REG_FIELD(msg_wrd, PVDEC_FW, DEVA_INIT, FWWDT_MS, ena_params->fwwdt_ms); + VXD_WR_MSG_WRD(msg, PVDEC_FW_DEVA_INIT, FWWDT_MS, msg_wrd); + + ret = vxd_pvdec_send_msg(dev, reg_base, msg, ARRAY_SIZE(msg), msg_id, vxd); + kfree(vxd); + + return ret; +} + +int vxd_pvdec_ena(const void *dev, void __iomem *reg_base, + struct vxd_ena_params *ena_params, + struct vxd_fw_hdr *fw_hdr, + unsigned int *freq_khz) +{ + int ret; + unsigned int mtx_ram_size = 0; + unsigned char dma_channel = 0; + + ret = 
vxd_pvdec_init(dev, reg_base); + if (ret) { + dev_err(dev, "%s: PVDEC init failed!\n", __func__); + return ret; + } + + ret = pvdec_get_mtx_ram_size(reg_base, &mtx_ram_size); + if (ret) { + dev_err(dev, "%s: failed to get MTX RAM size!\n", __func__); + return ret; + } + + if (mtx_ram_size < fw_hdr->core_size) { + dev_err(dev, "%s: FW larger than MTX RAM size (%u < %d)!\n", + __func__, mtx_ram_size, fw_hdr->core_size); + return -EINVAL; + } + + /* Apply pre boot settings - if any */ + pvdec_pre_boot_setup(dev, reg_base, ena_params); + + pvdec_prep_fw_upload(dev, reg_base, ena_params, dma_channel); + + ret = pvdec_start_fw_dma(dev, reg_base, dma_channel, fw_hdr->core_size, freq_khz); + + if (ret) { + dev_err(dev, "%s: failed to load FW! (%d)", __func__, ret); + pvdec_mtx_status_dump(reg_base, NULL); + return ret; + } + + /* Apply final settings - if any */ + pvdec_post_boot_setup(dev, reg_base, *freq_khz); + + ret = pvdec_poll_fw_boot(reg_base, &ena_params->boot_poll); + if (ret) { + dev_err(dev, "%s: FW failed to boot! (%d)!\n", __func__, ret); + return ret; + } + + ret = pvdec_send_init_msg(dev, reg_base, ena_params); + if (ret) { + dev_err(dev, "%s: failed to send init message! (%d)!\n", __func__, ret); + return ret; + } + + return 0; +} + +int vxd_pvdec_dis(const void *dev, void __iomem *reg_base) +{ + int ret = pvdec_enable_clocks(reg_base); + + if (ret) { + dev_err(dev, "%s: failed to enable clocks! (%d)\n", __func__, ret); + return ret; + } + + ret = pvdec_reset(reg_base, TRUE); + if (ret) { + dev_err(dev, "%s: VXD reset failed! (%d)\n", __func__, ret); + return ret; + } + + ret = pvdec_disable_clocks(reg_base); + if (ret) { + dev_err(dev, "%s: VXD disable clocks failed! (%d)\n", __func__, ret); + return ret; + } + + return 0; +} + +/* + * Invalidate VXD's MMU cache. + */ +int vxd_pvdec_mmu_flush(const void *dev, void __iomem *reg_base) +{ + unsigned int reg = VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1); + + if (reg == PVDEC_INVALID_HW_STATE) { + dev_err(dev, "%s: invalid HW state!\n", __func__); + return -EIO; + } + + reg = VXD_WR_REG_FIELD(reg, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, MMU_INVALDC, 0xF); + VXD_WR_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_CONTROL1, reg); + +#ifdef DEBUG_DECODER_DRIVER + dev_dbg(dev, "%s: device MMU cache invalidated!\n", __func__); +#endif + + return 0; +} + +irqreturn_t vxd_pvdec_clear_int(void __iomem *reg_base, unsigned int *irq_status) +{ + irqreturn_t ret = IRQ_NONE; + unsigned int enabled; + unsigned int status = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_INT_STAT); + + enabled = VXD_RD_REG(reg_base, PVDEC_CORE, PVDEC_HOST_INT_ENA); + + status &= enabled; + /* Store the last irq status */ + *irq_status |= status; + + if (status & (PVDEC_CORE_PVDEC_INT_STAT_HOST_MMU_FAULT_IRQ_MASK | + PVDEC_CORE_PVDEC_INT_STAT_HOST_PROC_IRQ_MASK)) + ret = IRQ_WAKE_THREAD; + + /* Disable MMU interrupts - clearing is not enough */ + if (status & PVDEC_CORE_PVDEC_INT_STAT_HOST_MMU_FAULT_IRQ_MASK) { + enabled &= ~PVDEC_CORE_PVDEC_INT_STAT_HOST_MMU_FAULT_IRQ_MASK; + VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_HOST_INT_ENA, enabled); + } + + VXD_WR_REG(reg_base, PVDEC_CORE, PVDEC_INT_CLEAR, status); + + return ret; +} + +/* + * Check if there's enough space in comms RAM to submit dwords long + * message. This function also submits a padding message if it will be + * necessary for this particular message. + * + * return 0 if there is enough space, + * return -EBUSY if there is not enough space, + * return another fault code in case of an error. 
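+ * (On -EBUSY the body sets PVDEC_FWFLAG_FAKE_COMPLETION and re-checks once;
+ * see below.)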
+ */ +int vxd_pvdec_msg_fit(const void *dev, void __iomem *reg_base, unsigned long msg_size) +{ + int ret = pvdec_check_comms_space(reg_base, msg_size, TRUE); + + /* + * In specific environment, when to_mtx buffer is small, and messages + * the userspace is submitting are large (e.g. FWBSP flow), it's + * possible that firmware will consume the padding message sent by + * vxd_pvdec_msg_fit() immediately. Retry the check. + */ + if (ret == -EBUSY) { + unsigned int flags = VXD_RD_REG_ABS(reg_base, + VLR_OFFSET + PVDEC_FW_FLAGS_OFFSET) | + PVDEC_FWFLAG_FAKE_COMPLETION; + +#ifdef DEBUG_DECODER_DRIVER + dev_dbg(dev, "comms space full, asking fw to send empty msg when space is available"); +#endif + + VXD_WR_REG_ABS(reg_base, VLR_OFFSET + PVDEC_FW_FLAGS_OFFSET, flags); + ret = pvdec_check_comms_space(reg_base, msg_size, FALSE); + } + + return ret; +} + +void vxd_pvdec_get_state(const void *dev, void __iomem *reg_base, + unsigned int num_pipes, + struct vxd_hw_state *state) +{ + unsigned char pipe; +#ifdef DEBUG_DECODER_DRIVER + unsigned int state_cfg = VXD_RD_REG_ABS(reg_base, (VLR_OFFSET + + PVDEC_FW_STATE_BUF_CFG_OFFSET)); + + unsigned short state_size = PVDEC_FW_COM_BUF_SIZE(state_cfg); + unsigned short state_off = PVDEC_FW_COM_BUF_OFF(state_cfg); + + /* + * The generic fw progress counter + * is the first element in the fw state + */ + dev_dbg(dev, "%s: state off: 0x%x, size: 0x%x\n", __func__, state_off, state_size); + state->fw_counter = VXD_RD_REG_ABS(reg_base, (VLR_OFFSET + state_off)); + dev_dbg(dev, "%s: fw_counter: 0x%x\n", __func__, state->fw_counter); +#endif + + /* We just combine the macroblocks being processed by the HW */ + for (pipe = 0; pipe < num_pipes; pipe++) { + unsigned int p_off = VXD_GET_PIPE_OFF(num_pipes, pipe + 1); + unsigned int reg_val; + + /* Front-end */ + unsigned int reg_off = VXD_GET_REG_OFF(PVDEC_ENTROPY, ENTROPY_LAST_MB); + + state->fe_status[pipe] = VXD_RD_REG_ABS(reg_base, reg_off + p_off); + + reg_off = VXD_GET_REG_OFF(MSVDX_VEC, VEC_ENTDEC_INFORMATION); + state->fe_status[pipe] |= VXD_RD_REG_ABS(reg_base, reg_off + p_off); + + /* Back-end */ + reg_off = VXD_GET_REG_OFF(PVDEC_VEC_BE, VEC_BE_STATUS); + state->be_status[pipe] = VXD_RD_REG_ABS(reg_base, reg_off + p_off); + reg_off = VXD_GET_REG_OFF(MSVDX_VDMC, VDMC_MACROBLOCK_NUMBER); + state->be_status[pipe] |= VXD_RD_REG_ABS(reg_base, reg_off + p_off); + + /* + * Take DMAC channels 2/3 into consideration to cover + * parser progress on SR1/2 + */ + reg_off = VXD_GET_RPT_REG_OFF(DMAC, DMAC_COUNT, 2); + reg_val = VXD_RD_REG_ABS(reg_base, reg_off + p_off); + state->dmac_status[pipe][0] = VXD_RD_REG_FIELD(reg_val, DMAC, DMAC_COUNT, CNT); + reg_off = VXD_GET_RPT_REG_OFF(DMAC, DMAC_COUNT, 3); + reg_val = VXD_RD_REG_ABS(reg_base, reg_off + p_off); + state->dmac_status[pipe][1] = VXD_RD_REG_FIELD(reg_val, DMAC, DMAC_COUNT, CNT); + } +} + +/* + * Check for the source of the last interrupt. + * + * return 0 if nothing serious happened, + * return -EFAULT if there was a critical interrupt detected. 
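+ * An MMU fault is reported with the fault address, requestor and access
+ * direction decoded from MMU_STATUS0/1.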
+ */ +int vxd_pvdec_check_irq(const void *dev, void __iomem *reg_base, unsigned int irq_status) +{ + if (irq_status & PVDEC_CORE_PVDEC_INT_STAT_HOST_MMU_FAULT_IRQ_MASK) { + unsigned int status0 = + VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_STATUS0); + unsigned int status1 = + VXD_RD_REG(reg_base, IMG_VIDEO_BUS4_MMU, MMU_STATUS1); + + unsigned int addr = VXD_RD_REG_FIELD(status0, IMG_VIDEO_BUS4_MMU, + MMU_STATUS0, MMU_FAULT_ADDR) << 12; + unsigned char reason = VXD_RD_REG_FIELD(status0, IMG_VIDEO_BUS4_MMU, + MMU_STATUS0, MMU_PF_N_RW); + unsigned char requestor = VXD_RD_REG_FIELD(status1, IMG_VIDEO_BUS4_MMU, + MMU_STATUS1, MMU_FAULT_REQ_ID); + unsigned char type = VXD_RD_REG_FIELD(status1, IMG_VIDEO_BUS4_MMU, + MMU_STATUS1, MMU_FAULT_RNW); + unsigned char secure = VXD_RD_REG_FIELD(status0, IMG_VIDEO_BUS4_MMU, + MMU_STATUS0, MMU_SECURE_FAULT); + +#ifdef DEBUG_DECODER_DRIVER + dev_dbg(dev, "%s: MMU Page Fault s0:%08x s1:%08x", __func__, status0, status1); +#endif + + dev_err(dev, "%s: MMU %s fault from %s while %s @ 0x%08X", __func__, + (reason) ? "Page" : "Protection", + (requestor & (0x1)) ? "dmac" : + (requestor & (0x2)) ? "vec" : + (requestor & (0x4)) ? "vdmc" : + (requestor & (0x8)) ? "vdeb" : "unknown source", + (type) ? "reading" : "writing", addr); + + if (secure) + dev_err(dev, "%s: MMU security policy violation detected!", __func__); + + return -EFAULT; + } + + return 0; +} + +/* + * This functions enables the clocks, fetches the core properties, stores them + * in the structure and DISABLES the clocks. Do not call when hardware + * is busy! + */ +int vxd_pvdec_get_props(const void *dev, void __iomem *reg_base, struct vxd_core_props *props) +{ +#ifdef DEBUG_DECODER_DRIVER + unsigned char num_pix_pipes, pipe; +#endif + int ret = pvdec_enable_clocks(reg_base); + + if (ret) { + dev_err(dev, "%s: failed to enable clocks!\n", __func__); + return ret; + } + + ret = pvdec_get_mtx_ram_size(reg_base, &props->mtx_ram_size); + if (ret) { + dev_err(dev, "%s: failed to get MTX ram size!\n", __func__); + return ret; + } + + ret = pvdec_get_properties(reg_base, props); + if (ret) { + dev_err(dev, "%s: failed to get VXD props!\n", __func__); + return ret; + } + + if (pvdec_disable_clocks(reg_base)) + dev_err(dev, "%s: failed to disable clocks!\n", __func__); + +#ifdef DEBUG_DECODER_DRIVER + num_pix_pipes = VXD_NUM_PIX_PIPES(*props); + + /* Warning already raised in pvdec_get_properties() */ + if (unlikely(num_pix_pipes > VXD_MAX_PIPES)) + num_pix_pipes = VXD_MAX_PIPES; + dev_dbg(dev, "%s: core_rev: 0x%08x\n", __func__, props->core_rev); + dev_dbg(dev, "%s: pvdec_core_id: 0x%08x\n", __func__, props->pvdec_core_id); + dev_dbg(dev, "%s: mmu_config0: 0x%08x\n", __func__, props->mmu_config0); + dev_dbg(dev, "%s: mmu_config1: 0x%08x\n", __func__, props->mmu_config1); + dev_dbg(dev, "%s: mtx_ram_size: %u\n", __func__, props->mtx_ram_size); + dev_dbg(dev, "%s: pix max frame: 0x%08x\n", __func__, props->pixel_max_frame_cfg); + + for (pipe = 1; pipe <= num_pix_pipes; ++pipe) + dev_dbg(dev, "%s: pipe %u, 0x%08x, misc 0x%08x\n", + __func__, pipe, props->pixel_pipe_cfg[pipe - 1], + props->pixel_misc_cfg[pipe - 1]); + dev_dbg(dev, "%s: dbg fifo size: %u\n", __func__, props->dbg_fifo_size); +#endif + return 0; +} diff --git a/drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h b/drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h new file mode 100644 index 000000000000..6cc9aef45904 --- /dev/null +++ b/drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h @@ -0,0 +1,126 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* 
+ * VXD PVDEC Private header file
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Amit Makani
+ *
+ * Re-written for upstreaming
+ *	Sidraya Jayagond
+ *	Prashanth Kumar Amai
+ */
+
+#ifndef _VXD_PVDEC_PRIV_H
+#define _VXD_PVDEC_PRIV_H
+#include <linux/interrupt.h>
+
+#include "img_dec_common.h"
+#include "vxd_pvdec_regs.h"
+#include "vxd_dec.h"
+
+#ifdef ERROR_RECOVERY_SIMULATION
+/* Kernel object used for debugging. Declared in v4l2_int.c */
+extern struct kobject *vxd_dec_kobject;
+extern int disable_fw_irq_value;
+extern int g_module_irq;
+#endif
+
+struct vxd_boot_poll_params {
+	unsigned int msleep_cycles;
+};
+
+struct vxd_ena_params {
+	struct vxd_boot_poll_params boot_poll;
+
+	unsigned long fw_buf_size;
+	unsigned int fw_buf_virt_addr;
+	/*
+	 * VXD's MMU virtual address of the firmware
+	 * buffer.
+	 */
+	unsigned int ptd; /* Shifted physical address of PTD */
+
+	/* Required for firmware upload via registers. */
+	struct {
+		const unsigned char *buf; /* Firmware blob buffer */
+
+	} regs_data;
+
+	struct {
+		unsigned secure : 1;        /* Secure flow indicator. */
+		unsigned wait_dbg_fifo : 1; /*
+					     * Indicates that fw shall use
+					     * blocking mode when putting logs
+					     * into debug fifo
+					     */
+	};
+
+	/* Structure containing memory staller configuration */
+	struct {
+		unsigned int *data;	/* Configuration data array */
+		unsigned char size;	/* Configuration size in dwords */
+
+	} mem_staller;
+
+	unsigned int fwwdt_ms; /* Firmware software watchdog timeout value */
+
+	unsigned int crc; /* HW signatures to be enabled by firmware */
+	unsigned int rendec_addr; /* VXD's virtual address of a rendec buffer */
+	unsigned short rendec_size; /* Size of a rendec buffer in 4K pages */
+};
+
+int vxd_pvdec_init(const void *dev, void __iomem *reg_base);
+
+int vxd_pvdec_ena(const void *dev, void __iomem *reg_base,
+		  struct vxd_ena_params *ena_params, struct vxd_fw_hdr *hdr,
+		  unsigned int *freq_khz);
+
+int vxd_pvdec_dis(const void *dev, void __iomem *reg_base);
+
+int vxd_pvdec_mmu_flush(const void *dev, void __iomem *reg_base);
+
+int vxd_pvdec_send_msg(const void *dev, void __iomem *reg_base,
+		       unsigned int *msg, unsigned long msg_size, unsigned short msg_id,
+		       struct vxd_dev *ctx);
+
+int vxd_pvdec_pend_msg_info(const void *dev, void __iomem *reg_base,
+			    unsigned long *size, unsigned short *msg_id,
+			    unsigned char *not_last_msg);
+
+int vxd_pvdec_recv_msg(const void *dev, void __iomem *reg_base,
+		       unsigned int *buf, unsigned long buf_size, struct vxd_dev *ctx);
+
+int vxd_pvdec_check_fw_status(const void *dev, void __iomem *reg_base);
+
+unsigned long vxd_pvdec_peek_mtx_fifo(const void *dev,
+				      void __iomem *reg_base);
+
+unsigned long vxd_pvdec_read_mtx_fifo(const void *dev, void __iomem *reg_base,
+				      unsigned int *buf, unsigned long size);
+
+irqreturn_t vxd_pvdec_clear_int(void __iomem *reg_base, unsigned int *irq_status);
+
+int vxd_pvdec_check_irq(const void *dev, void __iomem *reg_base,
+			unsigned int irq_status);
+
+int vxd_pvdec_msg_fit(const void *dev, void __iomem *reg_base,
+		      unsigned long msg_size);
+
+void vxd_pvdec_get_state(const void *dev, void __iomem *reg_base,
+			 unsigned int num_pipes, struct vxd_hw_state *state);
+
+int vxd_pvdec_get_props(const void *dev, void __iomem *reg_base,
+			struct vxd_core_props *props);
+
+unsigned long vxd_pvdec_get_dbg_fifo_size(void __iomem *reg_base);
+
+int vxd_pvdec_dump_mtx_ram(const void *dev, void __iomem *reg_base,
+			   unsigned int addr, unsigned int count, unsigned int *buf);
+
+int vxd_pvdec_dump_mtx_status(const void *dev, void __iomem *reg_base,
+			      unsigned int *array, unsigned int array_size);
+
+#endif /* _VXD_PVDEC_PRIV_H */
diff --git a/drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h b/drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h
new file mode 100644
index 000000000000..2d8cf9ef8df7
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h
@@ -0,0 +1,779 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD PVDEC registers header file
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Angela Stegmaier
+ *
+ * Re-written for upstreaming
+ *	Sidraya Jayagond
+ *	Prashanth Kumar Amai
+ */
+
+#ifndef VXD_PVDEC_REGS_H
+#define VXD_PVDEC_REGS_H
+
+/* ************************* VXD-specific values *************************** */
+/* 0x10 for code, 0x18 for data. */
+#define PVDEC_MTX_CORE_MEM 0x18
+/* Iteration timeout counter for MTX I/O. */
+#define PVDEC_TIMEOUT_COUNTER 1000
+/* Partial frame notification timer divider. */
+#define PVDEC_PFNT_DIV 0
+/* Value returned by register reads when the HW enters an invalid state (FPGA) */
+#define PVDEC_INVALID_HW_STATE 0x000dead1
+
+/* Default core clock for pvdec */
+#define PVDEC_CLK_MHZ_DEFAULT 200
+
+/* Offsets of register groups within VXD. */
+#define PVDEC_PROC_OFFSET 0x0000
+/* 0x34c: Skip DMA registers when running against CSIM (virtual platform) */
+#define PVDEC_PROC_SIZE 0x34C /* 0x3FF */
+
+#define PVDEC_CORE_OFFSET 0x0400
+#define PVDEC_CORE_SIZE 0x3FF
+
+#define MTX_CORE_OFFSET PVDEC_PROC_OFFSET
+#define MTX_CORE_SIZE PVDEC_PROC_SIZE
+
+#define VIDEO_BUS4_MMU_OFFSET 0x1000
+#define VIDEO_BUS4_MMU_SIZE 0x1FF
+
+#define IMG_VIDEO_BUS4_MMU_OFFSET VIDEO_BUS4_MMU_OFFSET
+#define IMG_VIDEO_BUS4_MMU_SIZE VIDEO_BUS4_MMU_SIZE
+
+#define VLR_OFFSET 0x2000
+#define VLR_SIZE 0x1000
+
+/* PVDEC_ENTROPY defined in uapi/vxd_pvdec.h */
+
+#define PVDEC_PIXEL_OFFSET 0x4000
+#define PVDEC_PIXEL_SIZE 0x1FF
+
+/* PVDEC_VEC_BE defined in uapi/vxd_pvdec.h */
+
+/* MSVDX_VEC defined in uapi/vxd_pvdec.h */
+
+#define MSVDX_VDMC_OFFSET 0x6800
+#define MSVDX_VDMC_SIZE 0x7F
+
+#define DMAC_OFFSET 0x6A00
+#define DMAC_SIZE 0x1FF
+
+#define PVDEC_TEST_OFFSET 0xFF00
+#define PVDEC_TEST_SIZE 0xFF
+
+/* *********************** firmware specific values ************************* */
+
+/* layout of COMMS RAM */
+
+#define PVDEC_FW_COMMS_HDR_SIZE 0x38
+
+#define PVDEC_FW_STATUS_OFFSET 0x00
+#define PVDEC_FW_TASK_STATUS_OFFSET 0x04
+#define PVDEC_FW_ID_OFFSET 0x08
+#define PVDEC_FW_MTXPC_OFFSET 0x0c
+#define PVDEC_FW_MSG_COUNTER_OFFSET 0x10
+#define PVDEC_FW_SIGNATURE_OFFSET 0x14
+#define PVDEC_FW_TO_HOST_BUF_CONF_OFFSET 0x18
+#define PVDEC_FW_TO_HOST_RD_IDX_OFFSET 0x1c
+#define PVDEC_FW_TO_HOST_WR_IDX_OFFSET 0x20
+#define PVDEC_FW_TO_MTX_BUF_CONF_OFFSET 0x24
+#define PVDEC_FW_TO_MTX_RD_IDX_OFFSET 0x28
+#define PVDEC_FW_FLAGS_OFFSET 0x2c
+#define PVDEC_FW_TO_MTX_WR_IDX_OFFSET 0x30
+#define PVDEC_FW_STATE_BUF_CFG_OFFSET 0x34
+
+/* firmware status */
+
+#define PVDEC_FW_STATUS_PANIC 0x2
+#define PVDEC_FW_STATUS_ASSERT 0x3
+#define PVDEC_FW_STATUS_SO 0x8
+
+/* firmware flags */
+
+#define PVDEC_FWFLAG_BIG_TO_HOST_BUFFER 0x00000002
+#define PVDEC_FWFLAG_FORCE_FS_FLOW 0x00000004
+#define PVDEC_FWFLAG_DISABLE_WATCHDOGS 0x00000008
+#define PVDEC_FWFLAG_DISABLE_AUTONOMOUS_RESET 0x00000040
+#define PVDEC_FWFLAG_DISABLE_IDLE_GPIO 0x00002000
+#define PVDEC_FWFLAG_ENABLE_ERROR_CONCEALMENT 0x00100000
+#define
PVDEC_FWFLAG_DISABLE_GENC_FLUSHING 0x00800000 +#define PVDEC_FWFLAG_FAKE_COMPLETION 0x20000000 +#define PVDEC_FWFLAG_DISABLE_COREWDT_TIMERS 0x01000000 + +/* firmware message header */ + +#define PVDEC_FW_DEVA_GENMSG_OFFSET 0 + +#define PVDEC_FW_DEVA_GENMSG_MSG_ID_MASK 0xFFFF0000 +#define PVDEC_FW_DEVA_GENMSG_MSG_ID_SHIFT 16 + +#define PVDEC_FW_DEVA_GENMSG_MSG_TYPE_MASK 0xFF00 +#define PVDEC_FW_DEVA_GENMSG_MSG_TYPE_SHIFT 8 + +#define PVDEC_FW_DEVA_GENMSG_NOT_LAST_MSG_MASK 0x80 +#define PVDEC_FW_DEVA_GENMSG_NOT_LAST_MSG_SHIFT 7 + +#define PVDEC_FW_DEVA_GENMSG_MSG_SIZE_MASK 0x7F +#define PVDEC_FW_DEVA_GENMSG_MSG_SIZE_SHIFT 0 + +/* firmware init message */ + +#define PVDEC_FW_DEVA_INIT_MSG_WRDS 9 + +#define PVDEC_FW_DEVA_INIT_RENDEC_ADDR0_OFFSET 0xC + +#define PVDEC_FW_DEVA_INIT_RENDEC_SIZE0_OFFSET 0x10 +#define PVDEC_FW_DEVA_INIT_RENDEC_SIZE0_MASK 0xFFFF +#define PVDEC_FW_DEVA_INIT_RENDEC_SIZE0_SHIFT 0 + +#define PVDEC_FW_DEVA_INIT_HEVC_CFG_OFFSET 0x14 +#define PVDEC_FW_DEVA_INIT_HEVC_CFG_MAX_H_FOR_PIPE_WAIT_MASK 0xFFFF0000 +#define PVDEC_FW_DEVA_INIT_HEVC_CFG_MAX_H_FOR_PIPE_WAIT_SHIFT 16 +#define PVDEC_FW_DEVA_INIT_HEVC_CFG_MIN_H_FOR_DUAL_PIPE_MASK 0xFFFF +#define PVDEC_FW_DEVA_INIT_HEVC_CFG_MIN_H_FOR_DUAL_PIPE_SHIFT 0 + +#define PVDEC_FW_DEVA_INIT_SIG_SELECT_OFFSET 0x18 + +#define PVDEC_FW_DEVA_INIT_DBG_DELAYS_OFFSET 0x1C + +#define PVDEC_FW_DEVA_INIT_PFNT_DIV_OFFSET 0x20 +#define PVDEC_FW_DEVA_INIT_PFNT_DIV_MASK 0xFFFF0000 +#define PVDEC_FW_DEVA_INIT_PFNT_DIV_SHIFT 16 + +#define PVDEC_FW_DEVA_INIT_FWWDT_MS_OFFSET 0x20 +#define PVDEC_FW_DEVA_INIT_FWWDT_MS_MASK 0xFFFF +#define PVDEC_FW_DEVA_INIT_FWWDT_MS_SHIFT 0 + +/* firmware message types */ +#define PVDEC_FW_MSG_TYPE_PADDING 0 +#define PVDEC_FW_MSG_TYPE_INIT 0x80 + +/* miscellaneous */ + +#define PVDEC_FW_READY_SIG 0xa5a5a5a5 + +#define PVDEC_FW_COM_BUF_SIZE(cfg) ((cfg) & 0x0000ffff) +#define PVDEC_FW_COM_BUF_OFF(cfg) (((cfg) & 0xffff0000) >> 16) + +/* + * Timer divider calculation macro. 
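+ * E.g. PVDEC_CALC_TIMER_DIV(200) == 99 for a 200 MHz core clock, which
+ * yields the 2 MHz timer base mentioned below.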
+ * NOTE: The Timer divider is only 8bit field + * so we set it for 2MHz timer base to cover wider + * range of core frequencies on real platforms (freq > 255MHz) + */ +#define PVDEC_CALC_TIMER_DIV(val) (((val) - 1) / 2) + +#define MTX_CORE_STATUS_ELEMENTS 4 + +#define PVDEC_CORE_MEMSTALLER_ELEMENTS 7 + +/* ********************** PVDEC_CORE registers group ************************ */ + +/* register PVDEC_SOFT_RESET */ +#define PVDEC_CORE_PVDEC_SOFT_RST_OFFSET 0x0000 + +#define PVDEC_CORE_PVDEC_SOFT_RST_PVDEC_PIXEL_PROC_SOFT_RST_MASK 0xFF000000 +#define PVDEC_CORE_PVDEC_SOFT_RST_PVDEC_PIXEL_PROC_SOFT_RST_SHIFT 24 + +#define PVDEC_CORE_PVDEC_SOFT_RST_PVDEC_ENTROPY_SOFT_RST_MASK 0x00FF0000 +#define PVDEC_CORE_PVDEC_SOFT_RST_PVDEC_ENTROPY_SOFT_RST_SHIFT 16 + +#define PVDEC_CORE_PVDEC_SOFT_RST_PVDEC_MMU_SOFT_RST_MASK 0x00000002 +#define PVDEC_CORE_PVDEC_SOFT_RST_PVDEC_MMU_SOFT_RST_SHIFT 1 + +#define PVDEC_CORE_PVDEC_SOFT_RST_PVDEC_SOFT_RST_MASK 0x00000001 +#define PVDEC_CORE_PVDEC_SOFT_RST_PVDEC_SOFT_RST_SHIFT 0 + +/* register PVDEC_HOST_INTERRUPT_STATUS */ +#define PVDEC_CORE_PVDEC_INT_STAT_OFFSET 0x0010 + +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_SYS_WDT_MASK 0x10000000 +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_SYS_WDT_SHIFT 28 + +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_READ_TIMEOUT_PROC_IRQ_MASK 0x08000000 +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_READ_TIMEOUT_PROC_IRQ_SHIFT 27 + +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_COMMAND_TIMEOUT_PROC_IRQ_MASK 0x04000000 +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_COMMAND_TIMEOUT_PROC_IRQ_SHIFT 26 + +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_READ_TIMEOUT_HOST_IRQ_MASK 0x02000000 +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_READ_TIMEOUT_HOST_IRQ_SHIFT 25 + +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_COMMAND_TIMEOUT_HOST_IRQ_MASK 0x01000000 +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_COMMAND_TIMEOUT_HOST_IRQ_SHIFT 24 + +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_PROC_GPIO_IRQ_MASK 0x00200000 +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_PROC_GPIO_IRQ_SHIFT 21 + +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_PROC_IRQ_MASK 0x00100000 +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_PROC_IRQ_SHIFT 20 + +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_MMU_FAULT_IRQ_MASK 0x00010000 +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_MMU_FAULT_IRQ_SHIFT 16 + +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_PIXEL_PROCESSING_IRQ_MASK 0x0000FF00 +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_PIXEL_PROCESSING_IRQ_SHIFT 8 + +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_ENTROPY_PIPE_IRQ_MASK 0x000000FF +#define PVDEC_CORE_PVDEC_INT_STAT_HOST_ENTROPY_PIPE_IRQ_SHIFT 0 + +/* register PVDEC_INTERRUPT_CLEAR */ +#define PVDEC_CORE_PVDEC_INT_CLEAR_OFFSET 0x0014 + +#define PVDEC_CORE_PVDEC_INT_CLEAR_IRQ_CLEAR_MASK 0xFFFF0000 +#define PVDEC_CORE_PVDEC_INT_CLEAR_IRQ_CLEAR_SHIFT 16 + +/* register PVDEC_HOST_INTERRUPT_ENABLE */ +#define PVDEC_CORE_PVDEC_HOST_INT_ENA_OFFSET 0x0018 + +#define PVDEC_CORE_PVDEC_HOST_INT_ENA_HOST_IRQ_ENABLE_MASK 0xFFFF0000 +#define PVDEC_CORE_PVDEC_HOST_INT_ENA_HOST_IRQ_ENABLE_SHIFT 16 + +/* Register PVDEC_MAN_CLK_ENABLE */ +#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_OFFSET 0x0040 + +#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_PIXEL_PROC_MAN_CLK_ENA_MASK 0xFF000000 +#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_PIXEL_PROC_MAN_CLK_ENA_SHIFT 24 + +#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_ENTROPY_PIPE_MAN_CLK_ENA_MASK 0x00FF0000 +#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_ENTROPY_PIPE_MAN_CLK_ENA_SHIFT 16 + +#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_MEM_MAN_CLK_ENA_MASK 0x00000100 +#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_MEM_MAN_CLK_ENA_SHIFT 8 + +#define 
PVDEC_CORE_PVDEC_MAN_CLK_ENA_PVDEC_REG_MAN_CLK_ENA_MASK 0x00000010 +#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_PVDEC_REG_MAN_CLK_ENA_SHIFT 4 + +#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_PROC_MAN_CLK_ENA_MASK 0x00000002 +#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_PROC_MAN_CLK_ENA_SHIFT 1 + +#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_CORE_MAN_CLK_ENA_MASK 0x00000001 +#define PVDEC_CORE_PVDEC_MAN_CLK_ENA_CORE_MAN_CLK_ENA_SHIFT 0 + +/* register PVDEC_HOST_PIPE_SELECT */ +#define PVDEC_CORE_PVDEC_HOST_PIPE_SELECT_OFFSET 0x0060 + +#define PVDEC_CORE_PVDEC_HOST_PIPE_SELECT_PIPE_SEL_MASK 0x0000000F +#define PVDEC_CORE_PVDEC_HOST_PIPE_SELECT_PIPE_SEL_SHIFT 0 + +/* register PROC_DEBUG */ +#define PVDEC_CORE_PROC_DEBUG_OFFSET 0x0100 + +#define PVDEC_CORE_PROC_DEBUG_MTX_LAST_RAM_BANK_SIZE_MASK 0xFF000000 +#define PVDEC_CORE_PROC_DEBUG_MTX_LAST_RAM_BANK_SIZE_SHIFT 24 + +#define PVDEC_CORE_PROC_DEBUG_MTX_RAM_BANK_SIZE_MASK 0x000F0000 +#define PVDEC_CORE_PROC_DEBUG_MTX_RAM_BANK_SIZE_SHIFT 16 + +#define PVDEC_CORE_PROC_DEBUG_MTX_RAM_BANKS_MASK 0x00000F00 +#define PVDEC_CORE_PROC_DEBUG_MTX_RAM_BANKS_SHIFT 8 + +#define PVDEC_CORE_PROC_DEBUG_MTX_RAM_NEW_REPRESENTATION_MASK 0x00000080 +#define PVDEC_CORE_PROC_DEBUG_MTX_RAM_NEW_REPRESENTATION_SHIFT 7 + +#define PVDEC_CORE_PROC_DEBUG_PROC_DBG_GPIO_OUT_MASK 0x00000018 +#define PVDEC_CORE_PROC_DEBUG_PROC_DBG_GPIO_OUT_SHIFT 3 + +#define PVDEC_CORE_PROC_DEBUG_PROC_DBG_IS_SLAVE_MASK 0x00000004 +#define PVDEC_CORE_PROC_DEBUG_PROC_DBG_IS_SLAVE_SHIFT 2 + +#define PVDEC_CORE_PROC_DEBUG_PROC_DBG_GPIO_IN_MASK 0x00000003 +#define PVDEC_CORE_PROC_DEBUG_PROC_DBG_GPIO_IN_SHIFT 0 + +/* register PROC_DMAC_CONTROL */ +#define PVDEC_CORE_PROC_DMAC_CONTROL_OFFSET 0x0104 + +#define PVDEC_CORE_PROC_DMAC_CONTROL_BOOT_ON_DMA_CH0_MASK 0x80000000 +#define PVDEC_CORE_PROC_DMAC_CONTROL_BOOT_ON_DMA_CH0_SHIFT 31 + +/* register PROC_DEBUG_FIFO */ +#define PVDEC_CORE_PROC_DBG_FIFO_OFFSET 0x0108 + +#define PVDEC_CORE_PROC_DBG_FIFO_PROC_DBG_FIFO_MASK 0xFFFFFFFF +#define PVDEC_CORE_PROC_DBG_FIFO_PROC_DBG_FIFO_SHIFT 0 + +/* register PROC_DEBUG_FIFO_CTRL_0 */ +#define PVDEC_CORE_PROC_DBG_FIFO_CTRL0_OFFSET 0x010C + +#define PVDEC_CORE_PROC_DBG_FIFO_CTRL0_PROC_DBG_FIFO_COUNT_MASK 0xFFFF0000 +#define PVDEC_CORE_PROC_DBG_FIFO_CTRL0_PROC_DBG_FIFO_COUNT_SHIFT 16 + +#define PVDEC_CORE_PROC_DBG_FIFO_CTRL0_PROC_DBG_FIFO_SIZE_MASK 0x0000FFFF +#define PVDEC_CORE_PROC_DBG_FIFO_CTRL0_PROC_DBG_FIFO_SIZE_SHIFT 0 + +/* register PVDEC_CORE_ID */ +#define PVDEC_CORE_PVDEC_CORE_ID_OFFSET 0x0230 + +#define PVDEC_CORE_PVDEC_CORE_ID_GROUP_ID_MASK 0xFF000000 +#define PVDEC_CORE_PVDEC_CORE_ID_GROUP_ID_SHIFT 24 + +#define PVDEC_CORE_PVDEC_CORE_ID_CORE_ID_MASK 0x00FF0000 +#define PVDEC_CORE_PVDEC_CORE_ID_CORE_ID_SHIFT 16 + +#define PVDEC_CORE_PVDEC_CORE_ID_PVDEC_CORE_CONFIG_MASK 0x0000FFFF +#define PVDEC_CORE_PVDEC_CORE_ID_PVDEC_CORE_CONFIG_SHIFT 0 + +#define PVDEC_CORE_PVDEC_CORE_ID_ENT_PIPES_MASK 0x0000000F +#define PVDEC_CORE_PVDEC_CORE_ID_ENT_PIPES_SHIFT 0 + +#define PVDEC_CORE_PVDEC_CORE_ID_PIX_PIPES_MASK 0x000000F0 +#define PVDEC_CORE_PVDEC_CORE_ID_PIX_PIPES_SHIFT 4 + +/* register PVDEC_CORE_REV */ +#define PVDEC_CORE_PVDEC_CORE_REV_OFFSET 0x0240 + +#define PVDEC_CORE_PVDEC_CORE_REV_PVDEC_DESIGNER_MASK 0xFF000000 +#define PVDEC_CORE_PVDEC_CORE_REV_PVDEC_DESIGNER_SHIFT 24 + +#define PVDEC_CORE_PVDEC_CORE_REV_PVDEC_MAJOR_REV_MASK 0x00FF0000 +#define PVDEC_CORE_PVDEC_CORE_REV_PVDEC_MAJOR_REV_SHIFT 16 + +#define PVDEC_CORE_PVDEC_CORE_REV_PVDEC_MINOR_REV_MASK 0x0000FF00 +#define PVDEC_CORE_PVDEC_CORE_REV_PVDEC_MINOR_REV_SHIFT 8 + +#define 
PVDEC_CORE_PVDEC_CORE_REV_PVDEC_MAINT_REV_MASK 0x000000FF +#define PVDEC_CORE_PVDEC_CORE_REV_PVDEC_MAINT_REV_SHIFT 0 + +/* *********************** MTX_CORE registers group ************************* */ + +/* register MTX_ENABLE */ +#define MTX_CORE_MTX_ENABLE_OFFSET 0x0000 + +/* register MTX_SYSC_TXTIMER. Note: it's not defined in PVDEC TRM. */ +#define MTX_CORE_MTX_SYSC_TXTIMER_OFFSET 0x0010 + +/* register MTX_KICKI */ +#define MTX_CORE_MTX_KICKI_OFFSET 0x0088 + +#define MTX_CORE_MTX_KICKI_MTX_KICKI_MASK 0x0000FFFF +#define MTX_CORE_MTX_KICKI_MTX_KICKI_SHIFT 0 + +/* register MTX_FAULT0 */ +#define MTX_CORE_MTX_FAULT0_OFFSET 0x0090 + +/* register MTX_REGISTER_READ_WRITE_DATA */ +#define MTX_CORE_MTX_REG_READ_WRITE_DATA_OFFSET 0x00F8 + +/* register MTX_REGISTER_READ_WRITE_REQUEST */ +#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_OFFSET 0x00FC + +#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_MTX_DREADY_MASK 0x80000000 +#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_MTX_DREADY_SHIFT 31 + +#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_MTX_RNW_MASK 0x00010000 +#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_MTX_RNW_SHIFT 16 + +#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_MTX_RSPECIFIER_MASK 0x00000070 +#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_MTX_RSPECIFIER_SHIFT 4 + +#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_MTX_USPECIFIER_MASK 0x0000000F +#define MTX_CORE_MTX_REG_READ_WRITE_REQUEST_MTX_USPECIFIER_SHIFT 0 + +/* register MTX_RAM_ACCESS_DATA_EXCHANGE */ +#define MTX_CORE_MTX_RAM_ACCESS_DATA_EXCHANGE_OFFSET 0x0100 + +/* register MTX_RAM_ACCESS_DATA_TRANSFER */ +#define MTX_CORE_MTX_RAM_ACCESS_DATA_TRANSFER_OFFSET 0x0104 + +/* register MTX_RAM_ACCESS_CONTROL */ +#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_OFFSET 0x0108 + +#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_MTX_MCMID_MASK 0x0FF00000 +#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_MTX_MCMID_SHIFT 20 + +#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_MTX_MCM_ADDR_MASK 0x000FFFFC +#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_MTX_MCM_ADDR_SHIFT 2 + +#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_MTX_MCMAI_MASK 0x00000002 +#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_MTX_MCMAI_SHIFT 1 + +#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_MTX_MCMR_MASK 0x00000001 +#define MTX_CORE_MTX_RAM_ACCESS_CONTROL_MTX_MCMR_SHIFT 0 + +/* register MTX_RAM_ACCESS_STATUS */ +#define MTX_CORE_MTX_RAM_ACCESS_STATUS_OFFSET 0x010C + +#define MTX_CORE_MTX_RAM_ACCESS_STATUS_MTX_MTX_MCM_STAT_MASK 0x00000001 +#define MTX_CORE_MTX_RAM_ACCESS_STATUS_MTX_MTX_MCM_STAT_SHIFT 0 + +/* register MTX_SOFT_RESET */ +#define MTX_CORE_MTX_SOFT_RESET_OFFSET 0x0200 + +#define MTX_CORE_MTX_SOFT_RESET_MTX_RESET_MASK 0x00000001 +#define MTX_CORE_MTX_SOFT_RESET_MTX_RESET_SHIFT 0 + +/* register MTX_SYSC_TIMERDIV */ +#define MTX_CORE_MTX_SYSC_TIMERDIV_OFFSET 0x0208 + +#define MTX_CORE_MTX_SYSC_TIMERDIV_TIMER_EN_MASK 0x00010000 +#define MTX_CORE_MTX_SYSC_TIMERDIV_TIMER_EN_SHIFT 16 + +#define MTX_CORE_MTX_SYSC_TIMERDIV_TIMER_DIV_MASK 0x000000FF +#define MTX_CORE_MTX_SYSC_TIMERDIV_TIMER_DIV_SHIFT 0 + +/* register MTX_SYSC_CDMAA */ +#define MTX_CORE_MTX_SYSC_CDMAA_OFFSET 0x0344 + +#define MTX_CORE_MTX_SYSC_CDMAA_CDMAA_ADDRESS_MASK 0x03FFFFFC +#define MTX_CORE_MTX_SYSC_CDMAA_CDMAA_ADDRESS_SHIFT 2 + +/* register MTX_SYSC_CDMAC */ +#define MTX_CORE_MTX_SYSC_CDMAC_OFFSET 0x0340 + +#define MTX_CORE_MTX_SYSC_CDMAC_BURSTSIZE_MASK 0x07000000 +#define MTX_CORE_MTX_SYSC_CDMAC_BURSTSIZE_SHIFT 24 + +#define MTX_CORE_MTX_SYSC_CDMAC_RNW_MASK 0x00020000 +#define MTX_CORE_MTX_SYSC_CDMAC_RNW_SHIFT 17 + +#define MTX_CORE_MTX_SYSC_CDMAC_ENABLE_MASK 0x00010000 +#define 
MTX_CORE_MTX_SYSC_CDMAC_ENABLE_SHIFT 16 + +#define MTX_CORE_MTX_SYSC_CDMAC_LENGTH_MASK 0x0000FFFF +#define MTX_CORE_MTX_SYSC_CDMAC_LENGTH_SHIFT 0 + +/* register MTX_SYSC_CDMAT */ +#define MTX_CORE_MTX_SYSC_CDMAT_OFFSET 0x0350 + +/* ****************** IMG_VIDEO_BUS4_MMU registers group ******************** */ + +/* register MMU_CONTROL0_ */ +#define IMG_VIDEO_BUS4_MMU_MMU_CONTROL0_USE_TILE_STRIDE_PER_CTX_MASK 0x00010000 +#define IMG_VIDEO_BUS4_MMU_MMU_CONTROL0_USE_TILE_STRIDE_PER_CTX_SHIFT 16 + +#define IMG_VIDEO_BUS4_MMU_MMU_ADDRESS_CONTROL_MMU_ENA_EXT_ADDR_MASK 0x00000010 +#define IMG_VIDEO_BUS4_MMU_MMU_ADDRESS_CONTROL_MMU_ENA_EXT_ADDR_SHIFT 4 + +#define IMG_VIDEO_BUS4_MMU_MMU_ADDRESS_CONTROL_UPPER_ADDR_FIXED_MASK 0x00FF0000 +#define IMG_VIDEO_BUS4_MMU_MMU_ADDRESS_CONTROL_UPPER_ADDR_FIXED_SHIFT 16 + +#define IMG_VIDEO_BUS4_MMU_MMU_MEM_EXT_OUTSTANDING_READ_WORDS_MASK 0x0000FFFF +#define IMG_VIDEO_BUS4_MMU_MMU_MEM_EXT_OUTSTANDING_READ_WORDS_SHIFT 0 + +/* *************************** MMU-related values ************************** */ + +/* MMU page size */ + +enum { + VXD_MMU_SOFT_PAGE_SIZE_PAGE_64K = 0x4, + VXD_MMU_SOFT_PAGE_SIZE_PAGE_16K = 0x2, + VXD_MMU_SOFT_PAGE_SIZE_PAGE_4K = 0x0, + VXD_MMU_SOFT_PAGE_SIZE_FORCE32BITS = 0x7FFFFFFFU +}; + +/* MMU PTD entry flags */ +enum { + VXD_MMU_PTD_FLAG_NONE = 0x0, + VXD_MMU_PTD_FLAG_VALID = 0x1, + VXD_MMU_PTD_FLAG_WRITE_ONLY = 0x2, + VXD_MMU_PTD_FLAG_READ_ONLY = 0x4, + VXD_MMU_PTD_FLAG_CACHE_COHERENCY = 0x8, + VXD_MMU_PTD_FLAG_FORCE32BITS = 0x7FFFFFFFU +}; + +/* ********************* PVDEC_PIXEL registers group *********************** */ + +/* register PVDEC_PIXEL_PIXEL_CONTROL_0 */ +#define PVDEC_PIXEL_PIXEL_CONTROL_0_OFFSET 0x0004 + +#define PVDEC_PIXEL_PIXEL_CONTROL_0_DMAC_CH_SEL_FOR_MTX_MASK 0x0000000E +#define PVDEC_PIXEL_PIXEL_CONTROL_0_DMAC_CH_SEL_FOR_MTX_SHIFT 1 + +#define PVDEC_PIXEL_PIXEL_CONTROL_0_PROC_DMAC_CH0_SEL_MASK 0x00000001 +#define PVDEC_PIXEL_PIXEL_CONTROL_0_PROC_DMAC_CH0_SEL_SHIFT 0 + +/* register PVDEC_PIXEL_MAN_CLK_ENABLE */ +#define PVDEC_PIXEL_PIXEL_MAN_CLK_ENA_OFFSET 0x0020 + +#define PVDEC_PIXEL_PIXEL_MAN_CLK_ENA_PIXEL_REG_MAN_CLK_ENA_MASK 0x00020000 +#define PVDEC_PIXEL_PIXEL_MAN_CLK_ENA_PIXEL_REG_MAN_CLK_ENA_SHIFT 17 + +#define PVDEC_PIXEL_PIXEL_MAN_CLK_ENA_PIXEL_DMAC_MAN_CLK_ENA_MASK 0x00010000 +#define PVDEC_PIXEL_PIXEL_MAN_CLK_ENA_PIXEL_DMAC_MAN_CLK_ENA_SHIFT 16 + +/* register PIXEL_PIPE_CONFIG */ +#define PVDEC_PIXEL_PIXEL_PIPE_CONFIG_OFFSET 0x00C0 + +/* register PIXEL_MISC_CONFIG */ +#define PVDEC_PIXEL_PIXEL_MISC_CONFIG_OFFSET 0x00C4 + +/* register MAX_FRAME_CONFIG */ +#define PVDEC_PIXEL_MAX_FRAME_CONFIG_OFFSET 0x00C8 + +/* ********************* PVDEC_ENTROPY registers group ********************* */ + +/* Register PVDEC_ENTROPY_MAN_CLK_ENABLE */ +#define PVDEC_ENTROPY_ENTROPY_MAN_CLK_ENA_OFFSET 0x0020 + +/* Register PVDEC_ENTROPY_LAST_LAST_MB */ +#define PVDEC_ENTROPY_ENTROPY_LAST_MB_OFFSET 0x00BC + +/* ********************* PVDEC_VEC_BE registers group ********************** */ + +/* Register PVDEC_VEC_BE_VEC_BE_STATUS */ +#define PVDEC_VEC_BE_VEC_BE_STATUS_OFFSET 0x0018 + +/* ********************* MSVDX_VEC registers group ************************* */ + +/* Register MSVDX_VEC_VEC_ENTDEC_INFORMATION */ +#define MSVDX_VEC_VEC_ENTDEC_INFORMATION_OFFSET 0x00AC + +/* ********************* MSVDX_VDMC registers group ************************ */ + +/* Register MSVDX_VDMC_VDMC_MACROBLOCK_NUMBER */ +#define MSVDX_VDMC_VDMC_MACROBLOCK_NUMBER_OFFSET 0x0048 + +/* ************************** DMAC registers group 
************************* */ + +/* register DMAC_SETUP */ +#define DMAC_DMAC_SETUP_OFFSET 0x0000 +#define DMAC_DMAC_SETUP_STRIDE 32 +#define DMAC_DMAC_SETUP_NO_ENTRIES 6 + +/* register DMAC_COUNT */ +#define DMAC_DMAC_COUNT_OFFSET 0x0004 +#define DMAC_DMAC_COUNT_STRIDE 32 +#define DMAC_DMAC_COUNT_NO_ENTRIES 6 + +#define DMAC_DMAC_COUNT_LIST_IEN_MASK 0x80000000 +#define DMAC_DMAC_COUNT_LIST_IEN_SHIFT 31 + +#define DMAC_DMAC_COUNT_BSWAP_MASK 0x40000000 +#define DMAC_DMAC_COUNT_BSWAP_SHIFT 30 + +#define DMAC_DMAC_COUNT_TRANSFER_IEN_MASK 0x20000000 +#define DMAC_DMAC_COUNT_TRANSFER_IEN_SHIFT 29 + +#define DMAC_DMAC_COUNT_PW_MASK 0x18000000 +#define DMAC_DMAC_COUNT_PW_SHIFT 27 + +#define DMAC_DMAC_COUNT_DIR_MASK 0x04000000 +#define DMAC_DMAC_COUNT_DIR_SHIFT 26 + +#define DMAC_DMAC_COUNT_PI_MASK 0x03000000 +#define DMAC_DMAC_COUNT_PI_SHIFT 24 + +#define DMAC_DMAC_COUNT_LIST_FIN_CTL_MASK 0x00400000 +#define DMAC_DMAC_COUNT_LIST_FIN_CTL_SHIFT 22 + +#define DMAC_DMAC_COUNT_DREQ_MASK 0x00100000 +#define DMAC_DMAC_COUNT_DREQ_SHIFT 20 + +#define DMAC_DMAC_COUNT_SRST_MASK 0x00080000 +#define DMAC_DMAC_COUNT_SRST_SHIFT 19 + +#define DMAC_DMAC_COUNT_LIST_EN_MASK 0x00040000 +#define DMAC_DMAC_COUNT_LIST_EN_SHIFT 18 + +#define DMAC_DMAC_COUNT_ENABLE_2D_MODE_MASK 0x00020000 +#define DMAC_DMAC_COUNT_ENABLE_2D_MODE_SHIFT 17 + +#define DMAC_DMAC_COUNT_EN_MASK 0x00010000 +#define DMAC_DMAC_COUNT_EN_SHIFT 16 + +#define DMAC_DMAC_COUNT_CNT_MASK 0x0000FFFF +#define DMAC_DMAC_COUNT_CNT_SHIFT 0 + +/* register DMAC_PERIPH */ +#define DMAC_DMAC_PERIPH_OFFSET 0x0008 +#define DMAC_DMAC_PERIPH_STRIDE 32 +#define DMAC_DMAC_PERIPH_NO_ENTRIES 6 + +#define DMAC_DMAC_PERIPH_ACC_DEL_MASK 0xE0000000 +#define DMAC_DMAC_PERIPH_ACC_DEL_SHIFT 29 + +#define DMAC_DMAC_PERIPH_INCR_MASK 0x08000000 +#define DMAC_DMAC_PERIPH_INCR_SHIFT 27 + +#define DMAC_DMAC_PERIPH_BURST_MASK 0x07000000 +#define DMAC_DMAC_PERIPH_BURST_SHIFT 24 + +#define DMAC_DMAC_PERIPH_EXT_BURST_MASK 0x000F0000 +#define DMAC_DMAC_PERIPH_EXT_BURST_SHIFT 16 + +#define DMAC_DMAC_PERIPH_EXT_SA_MASK 0x0000000F +#define DMAC_DMAC_PERIPH_EXT_SA_SHIFT 0 + +/* register DMAC_IRQ_STAT */ +#define DMAC_DMAC_IRQ_STAT_OFFSET 0x000C +#define DMAC_DMAC_IRQ_STAT_STRIDE 32 +#define DMAC_DMAC_IRQ_STAT_NO_ENTRIES 6 + +/* register DMAC_PERIPHERAL_ADDR */ +#define DMAC_DMAC_PERIPH_ADDR_OFFSET 0x0014 +#define DMAC_DMAC_PERIPH_ADDR_STRIDE 32 +#define DMAC_DMAC_PERIPH_ADDR_NO_ENTRIES 6 + +#define DMAC_DMAC_PERIPH_ADDR_ADDR_MASK 0x007FFFFF +#define DMAC_DMAC_PERIPH_ADDR_ADDR_SHIFT 0 + +/* register DMAC_PER_HOLD */ +#define DMAC_DMAC_PER_HOLD_OFFSET 0x0018 +#define DMAC_DMAC_PER_HOLD_STRIDE 32 +#define DMAC_DMAC_PER_HOLD_NO_ENTRIES 6 + +#define DMAC_DMAC_PER_HOLD_PER_HOLD_MASK 0x0000001F +#define DMAC_DMAC_PER_HOLD_PER_HOLD_SHIFT 0 + +#define DMAC_DMAC_SOFT_RESET_OFFSET 0x00C0 + +/* ************************** DMAC-related values *************************** */ + +/* + * This type defines whether the peripheral address is static or + * auto-incremented. (see the TRM "Transfer Sequence Linked-list - INCR") + */ +enum { + DMAC_INCR_OFF = 0, /* No action, no increment. */ + DMAC_INCR_ON = 1, /* Generate address increment. */ + DMAC_INCR_FORCE32BITS = 0x7FFFFFFFU +}; + +/* Burst size settings (see the TRM "Transfer Sequence Linked-list - BURST"). 
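+ * As an illustrative sketch (not part of the original patch), a burst of four words would be packed into the DMAC_PERIPH register as: + * + *   periph |= (DMAC_BURST_4 << DMAC_DMAC_PERIPH_BURST_SHIFT) & + *             DMAC_DMAC_PERIPH_BURST_MASK;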
*/ +enum { + DMAC_BURST_0 = 0x0, /* burst size of 0 */ + DMAC_BURST_1 = 0x1, /* burst size of 1 */ + DMAC_BURST_2 = 0x2, /* burst size of 2 */ + DMAC_BURST_3 = 0x3, /* burst size of 3 */ + DMAC_BURST_4 = 0x4, /* burst size of 4 */ + DMAC_BURST_5 = 0x5, /* burst size of 5 */ + DMAC_BURST_6 = 0x6, /* burst size of 6 */ + DMAC_BURST_7 = 0x7, /* burst size of 7 */ + DMAC_BURST_8 = 0x8, /* burst size of 8 */ + DMAC_BURST_FORCE32BITS = 0x7FFFFFFFU +}; + +/* + * Extended burst size settings (see TRM "Transfer Sequence Linked-list - + * EXT_BURST"). + */ +enum { + DMAC_EXT_BURST_0 = 0x0, /* no extension */ + DMAC_EXT_BURST_1 = 0x1, /* extension of 8 */ + DMAC_EXT_BURST_2 = 0x2, /* extension of 16 */ + DMAC_EXT_BURST_3 = 0x3, /* extension of 24 */ + DMAC_EXT_BURST_4 = 0x4, /* extension of 32 */ + DMAC_EXT_BURST_5 = 0x5, /* extension of 40 */ + DMAC_EXT_BURST_6 = 0x6, /* extension of 48 */ + DMAC_EXT_BURST_7 = 0x7, /* extension of 56 */ + DMAC_EXT_BURST_8 = 0x8, /* extension of 64 */ + DMAC_EXT_BURST_9 = 0x9, /* extension of 72 */ + DMAC_EXT_BURST_10 = 0xa, /* extension of 80 */ + DMAC_EXT_BURST_11 = 0xb, /* extension of 88 */ + DMAC_EXT_BURST_12 = 0xc, /* extension of 96 */ + DMAC_EXT_BURST_13 = 0xd, /* extension of 104 */ + DMAC_EXT_BURST_14 = 0xe, /* extension of 112 */ + DMAC_EXT_BURST_15 = 0xf, /* extension of 120 */ + DMAC_EXT_BURST_FORCE32BITS = 0x7FFFFFFFU +}; + +/* Transfer direction. */ +enum { + DMAC_MEM_TO_VXD = 0x0, + DMAC_VXD_TO_MEM = 0x1, + DMAC_VXD_TO_FORCE32BITS = 0x7FFFFFFFU +}; + +/* How much to increment the peripheral address. */ +enum { + DMAC_PI_1 = 0x2, /* increment by 1 */ + DMAC_PI_2 = 0x1, /* increment by 2 */ + DMAC_PI_4 = 0x0, /* increment by 4 */ + DMAC_PI_FORCE32BITS = 0x7FFFFFFFU +}; + +/* Peripheral width settings (see TRM "Transfer Sequence Linked-list - PW"). */ +enum { + DMAC_PWIDTH_32_BIT = 0x0, /* Peripheral width 32-bit. */ + DMAC_PWIDTH_16_BIT = 0x1, /* Peripheral width 16-bit. */ + DMAC_PWIDTH_8_BIT = 0x2, /* Peripheral width 8-bit. 
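+ * The *_FORCE32BITS entry closing this enum (and the others in this file) is presumably there to pin the enum's underlying type to 32 bits across compilers, since these values are shared with the firmware.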
*/ + DMAC_PWIDTH_FORCE32BITS = 0x7FFFFFFFU +}; + +/* ******************************* macros ********************************** */ + +#ifdef PVDEC_SINGLETHREADED_IO +/* Write to the register */ +#define VXD_WR_REG_ABS(base, addr, val) \ + ({ spin_lock_irqsave(&pvdec_irq_lock, pvdec_irq_flags); \ + iowrite32((val), (addr) + (base)); \ + spin_unlock_irqrestore(&pvdec_irq_lock, (unsigned long)pvdec_irq_flags); }) + +/* Read the register */ +#define VXD_RD_REG_ABS(base, addr) \ + ({ unsigned int reg; \ + spin_lock_irqsave(&pvdec_irq_lock, pvdec_irq_flags); \ + reg = ioread32((addr) + (base)); \ + spin_unlock_irqrestore(&pvdec_irq_lock, (unsigned long)pvdec_irq_flags); \ + reg; }) +#else /* ndef PVDEC_SINGLETHREADED_IO */ + +/* Write to the register */ +#define VXD_WR_REG_ABS(base, addr, val) \ + (iowrite32((val), (addr) + (base))) + +/* Read the register */ +#define VXD_RD_REG_ABS(base, addr) \ + (ioread32((addr) + (base))) + +#endif + +/* Get offset of a register */ +#define VXD_GET_REG_OFF(group, reg) \ + (group ## _OFFSET + group ## _ ## reg ## _OFFSET) + +/* Get offset of a repeated register */ +#define VXD_GET_RPT_REG_OFF(group, reg, index) \ + (VXD_GET_REG_OFF(group, reg) + ((index) * group ## _ ## reg ## _STRIDE)) + +/* Extract field from a register */ +#define VXD_RD_REG_FIELD(val, group, reg, field) \ + (((val) & group ## _ ## reg ## _ ## field ## _MASK) >> \ + group ## _ ## reg ## _ ## field ## _SHIFT) + +/* Shift the provided value by the number of bits defined in the register field specification */ +#define VXD_ENC_REG_FIELD(group, reg, field, val) \ + ((unsigned int)(val) << (group ## _ ## reg ## _ ## field ## _SHIFT)) + +/* Update the field in a register */ +#define VXD_WR_REG_FIELD(reg_val, group, reg, field, val) \ + (((reg_val) & ~(group ## _ ## reg ## _ ## field ## _MASK)) | \ + (VXD_ENC_REG_FIELD(group, reg, field, val) & \ + (group ## _ ## reg ## _ ## field ## _MASK))) + +/* Write to a register */ +#define VXD_WR_REG(base, group, reg, val) \ + VXD_WR_REG_ABS(base, VXD_GET_REG_OFF(group, reg), val) + +/* Write to a repeated register */ +#define VXD_WR_RPT_REG(base, group, reg, val, index) \ + VXD_WR_REG_ABS(base, VXD_GET_RPT_REG_OFF(group, reg, index), val) + +/* Read a register */ +#define VXD_RD_REG(base, group, reg) \ + VXD_RD_REG_ABS(base, VXD_GET_REG_OFF(group, reg)) + +/* Read a repeated register */ +#define VXD_RD_RPT_REG(base, group, reg, index) \ + VXD_RD_REG_ABS(base, VXD_GET_RPT_REG_OFF(group, reg, index)) + +/* Insert word into the message buffer */ +#define VXD_WR_MSG_WRD(buf, msg_type, wrd, val) \ + (((unsigned int *)buf)[(msg_type ## _ ## wrd ## _OFFSET) / sizeof(unsigned int)] = \ + val) + +/* Get a word from the message buffer */ +#define VXD_RD_MSG_WRD(buf, msg_type, wrd) \ + (((unsigned int *)buf)[(msg_type ## _ ## wrd ## _OFFSET) / sizeof(unsigned int)]) + +/* Get offset for pipe register */ +#define VXD_GET_PIPE_OFF(num_pipes, pipe) \ + ((num_pipes) > 1 ?
((pipe) << 16) : 0) + +#endif /* VXD_PVDEC_REGS_H */ From patchwork Wed Aug 18 14:10:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sidraya Jayagond X-Patchwork-Id: 499241 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DD28BC432BE for ; Wed, 18 Aug 2021 14:12:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C2A4D610CB for ; Wed, 18 Aug 2021 14:12:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239148AbhHRONH (ORCPT ); Wed, 18 Aug 2021 10:13:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46490 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239152AbhHROM7 (ORCPT ); Wed, 18 Aug 2021 10:12:59 -0400 Received: from mail-pg1-x532.google.com (mail-pg1-x532.google.com [IPv6:2607:f8b0:4864:20::532]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8F432C0613A4 for ; Wed, 18 Aug 2021 07:12:13 -0700 (PDT) Received: by mail-pg1-x532.google.com with SMTP id y23so2363868pgi.7 for ; Wed, 18 Aug 2021 07:12:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pathpartnertech.com; s=google; h=mime-version:from:to:cc:subject:date:message-id:in-reply-to :references; bh=9WlfM7kZF/JezkcppnxVCfS/bkuX2OvnbHl3e2vQcM0=; b=Yx1RCDJ/s0Nr35sMLkuVoVvGjXvpPIfhtMn2wdwZ/I2lGmYPJN6UNDuEBnMvln8TWn EayCkvOHWCc7AENBttSasihFW9GNTSnW7UIkmJmOqBtEgTKQyAdmPJNGC/8Wkjkm6DKQ 1RPr1CSaLfpacT8WKdOyyNNPjiCn2dyf6jitfneoOvhzlBh8oKbX+MzFAlJC0DUXOVG5 EKH2sC76SydC+BI/AeBa0joNYT3EcXRVgMaWAOlP109/PEzMCkM/arebbZulMK61dUXy lDjscpNgByr4D+QCQc+hNd6mSnMLq2nT0jO+yDQzQ1OL5b0umJDJwqHU9innOTPhX8Tv ABTQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=mime-version:x-gm-message-state:from:to:cc:subject:date:message-id :in-reply-to:references; bh=9WlfM7kZF/JezkcppnxVCfS/bkuX2OvnbHl3e2vQcM0=; b=g95WpO/jyY5wG5BXoXXO/O4rGwGF255P3LYxMEBBynwRueqmIeDhkYH7hHexQP4+rH kxPSgu0cNsRnpIINOS15JKlnS4xdWIdjzUeyakBvOW9/eyUS19lpDDCZck9rHS1rhEwh uQ4iotVtFE1WY/HrE8kqKrgpjkLmoz1cFHPyAgDjo6wzts0pzlvxRDgsRhW+k0THUCyX RsaIfGUUmuerknaFv1r2x58jOf4BLb2Xyr200RuqgoJzKxDxSCrbQ93NIQ5AjoJGCZbV /3NTROkdlPPXbDSHWvG9cH4b1Z8Bnb/fA2rXcLb7d+yb9OV05QxHZmIyKV3fn43F7n5K /BtA== MIME-Version: 1.0 X-Gm-Message-State: AOAM530bDceWdQCrjy6918VmZ9mwaJC70/b7BCc6VBqv57qOHBonRVGg VN+/dErASnqUr66V0taCarCY2X0mxmYnmX4nFm3lPk1ZoJnNk6JbtAUt6cOeK9KfV7GtxclzSpH G1pRPxB3c+RCuJSCX X-Google-Smtp-Source: ABdhPJylu6nx/ruZSKzcF99+FnOeovG7f4T9R0tnbLGes0y/CjkVf2/eLTpD+E/3FrhI7klKtdEpcA== X-Received: by 2002:a63:1358:: with SMTP id 24mr4312089pgt.327.1629295932908; Wed, 18 Aug 2021 07:12:12 -0700 (PDT) Received: from localhost.localdomain ([49.207.214.181]) by smtp.gmail.com with ESMTPSA id e8sm8084343pgg.31.2021.08.18.07.12.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Aug 2021 07:12:12 -0700 (PDT) From: sidraya.bj@pathpartnertech.com To: gregkh@linuxfoundation.org, linux-staging@lists.linux.dev, 
linux-kernel@vger.kernel.org Cc: prashanth.ka@pathpartnertech.com, praneeth@ti.com, mchehab@kernel.org, linux-media@vger.kernel.org, praveen.ap@pathpartnertech.com, Sidraya Subject: [PATCH 06/30] v4l: vxd-dec: Add hardware control modules Date: Wed, 18 Aug 2021 19:40:13 +0530 Message-Id: <20210818141037.19990-7-sidraya.bj@pathpartnertech.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com> References: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com> Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Sidraya The TI Video Decoder uses IMG D5520 to provide video decoding for H.264 codec and this patch handles firmware messages transaction with firmware. It prepares the batch and fragment messages for firmware. Signed-off-by: Amit Makani Signed-off-by: Sidraya --- MAINTAINERS | 2 + .../staging/media/vxd/decoder/hw_control.c | 1211 +++++++++++++++++ .../staging/media/vxd/decoder/hw_control.h | 144 ++ 3 files changed, 1357 insertions(+) create mode 100644 drivers/staging/media/vxd/decoder/hw_control.c create mode 100644 drivers/staging/media/vxd/decoder/hw_control.h diff --git a/MAINTAINERS b/MAINTAINERS index 47067f907539..2327ea12caa6 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -19542,6 +19542,8 @@ F: drivers/staging/media/vxd/common/img_mem_man.h F: drivers/staging/media/vxd/common/img_mem_unified.c F: drivers/staging/media/vxd/common/imgmmu.c F: drivers/staging/media/vxd/common/imgmmu.h +F: drivers/staging/media/vxd/decoder/hw_control.c +F: drivers/staging/media/vxd/decoder/hw_control.h F: drivers/staging/media/vxd/decoder/img_dec_common.h F: drivers/staging/media/vxd/decoder/vxd_core.c F: drivers/staging/media/vxd/decoder/vxd_dec.c diff --git a/drivers/staging/media/vxd/decoder/hw_control.c b/drivers/staging/media/vxd/decoder/hw_control.c new file mode 100644 index 000000000000..049d9bbcd52c --- /dev/null +++ b/drivers/staging/media/vxd/decoder/hw_control.c @@ -0,0 +1,1211 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * VXD DEC Hardware control implementation + * + * Copyright (c) Imagination Technologies Ltd. 
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Amit Makani + * + * Re-written for upstreaming + * Sidraya Jayagond + * Prashanth Kumar Amai + */ + +#include +#include +#include +#include +#include + +#include "decoder.h" +#include "hw_control.h" +#include "img_msvdx_vdmc_regs.h" +#include "img_pvdec_core_regs.h" +#include "img_pvdec_pixel_regs.h" +#include "img_pvdec_test_regs.h" +#include "img_vdec_fw_msg.h" +#include "img_video_bus4_mmu_regs.h" +#include "img_msvdx_core_regs.h" +#include "reg_io2.h" +#include "vdecdd_defs.h" +#include "vxd_dec.h" +#include "vxd_ext.h" +#include "vxd_int.h" +#include "vxd_pvdec_priv.h" + +#define MSG_GROUP_MASK 0xf0 + +struct hwctrl_ctx { + unsigned int is_initialised; + unsigned int is_on_seq_replay; + unsigned int replay_tid; + unsigned int num_pipes; + struct vdecdd_dd_devconfig devconfig; + void *hndl_vxd; + void *dec_core; + void *comp_init_userdata; + struct vidio_ddbufinfo dev_ptd_bufinfo; + struct lst_t pend_pict_list; + struct hwctrl_msgstatus host_msg_status; + void *hmsg_task_event; + void *hmsg_task_kick; + void *hmsg_task; + unsigned int is_msg_task_active; + struct hwctrl_state state; + struct hwctrl_state prev_state; + unsigned int is_prev_hw_state_set; + unsigned int is_fatal_state; +}; + +struct vdeckm_context { + unsigned int core_num; + struct vxd_coreprops props; + unsigned short current_msgid; + unsigned char reader_active; + void *comms_ram_addr; + unsigned int state_offset; + unsigned int state_size; +}; + +/* + * Panic reason identifier. + */ +enum pvdec_panic_reason { + PANIC_REASON_OTHER = 0, + PANIC_REASON_WDT, + PANIC_REASON_READ_TIMEOUT, + PANIC_REASON_CMD_TIMEOUT, + PANIC_REASON_MMU_FAULT, + PANIC_REASON_MAX, + PANIC_REASON_FORCE32BITS = 0x7FFFFFFFU +}; + +/* + * Panic reason strings. + * NOTE: Should match the pvdec_panic_reason ids. + */ +static unsigned char *apanic_reason[PANIC_REASON_MAX] = { + [PANIC_REASON_OTHER] = "Other", + [PANIC_REASON_WDT] = "Watchdog Timeout", + [PANIC_REASON_READ_TIMEOUT] = "Read Timeout", + [PANIC_REASON_CMD_TIMEOUT] = "Command Timeout", + [PANIC_REASON_MMU_FAULT] = "MMU Page Fault" +}; + +/* + * Maximum length of the panic reason string.
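+ * The buffer is built with a "Reason(s): " prefix followed by one entry from apanic_reason[] per detected cause; 255 bytes is assumed to be enough for every combination.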
+ */ +#define PANIC_REASON_LEN (255) + +static struct vdeckm_context acore_ctx[VXD_MAX_CORES] = {0}; + +static int vdeckm_getregsoffsets(const void *hndl_vxd, + struct decoder_regsoffsets *regs_offsets) +{ + struct vdeckm_context *core_ctx = (struct vdeckm_context *)hndl_vxd; + + if (!core_ctx) + return IMG_ERROR_INVALID_PARAMETERS; + + regs_offsets->vdmc_cmd_offset = MSVDX_CMD_OFFSET; + regs_offsets->vec_offset = MSVDX_VEC_OFFSET; + regs_offsets->entropy_offset = PVDEC_ENTROPY_OFFSET; + regs_offsets->vec_be_regs_offset = PVDEC_VEC_BE_OFFSET; + regs_offsets->vdec_be_codec_regs_offset = PVDEC_VEC_BE_CODEC_OFFSET; + + return IMG_SUCCESS; +} + +static int vdeckm_send_message(const void *hndl_vxd, + struct hwctrl_to_kernel_msg *to_kernelmsg, + void *vxd_dec_ctx) +{ + struct vdeckm_context *core_ctx = (struct vdeckm_context *)hndl_vxd; + unsigned int count = 0; + unsigned int *msg; + + if (!core_ctx || !to_kernelmsg) + return IMG_ERROR_INVALID_PARAMETERS; + + msg = kzalloc(VXD_SIZE_MSG_BUFFER, GFP_KERNEL); + if (!msg) + return IMG_ERROR_OUT_OF_MEMORY; + + msg[count++] = to_kernelmsg->flags; + msg[count++] = to_kernelmsg->msg_size; + + if (!(to_kernelmsg->msg_hdr)) { + kfree(msg); + return IMG_ERROR_INVALID_PARAMETERS; + } + + memcpy(&msg[count], to_kernelmsg->msg_hdr, to_kernelmsg->msg_size); + + core_ctx->reader_active = 1; + + pr_debug("[HWCTRL] adding message to vxd queue\n"); + vxd_send_msg(vxd_dec_ctx, (struct vxd_fw_msg *)msg); + + kfree(msg); + + return 0; +} + +static void vdeckm_return_msg(const void *hndl_vxd, + struct hwctrl_to_kernel_msg *to_kernelmsg) +{ + if (to_kernelmsg) + kfree(to_kernelmsg->msg_hdr); +} + +static int vdeckm_handle_mtxtohost_msg(unsigned int *msg, struct lst_t *pend_pict_list, + enum vxd_msg_attr *msg_attr, + struct dec_decpict **decpict, + unsigned char msg_type, + unsigned int trans_id) +{ + struct dec_decpict *pdec_pict; + + switch (msg_type) { + case FW_DEVA_COMPLETED: + { + struct dec_pict_attrs *pict_attrs = NULL; + unsigned short error_flags = 0; + unsigned int no_bewdts = 0; + unsigned int mbs_dropped = 0; + unsigned int mbs_recovered = 0; + unsigned char flag = 0; + + pr_debug("Received message from firmware\n"); + error_flags = MEMIO_READ_FIELD(msg, FW_DEVA_COMPLETED_ERROR_FLAGS); + + no_bewdts = MEMIO_READ_FIELD(msg, FW_DEVA_COMPLETED_NUM_BEWDTS); + + mbs_dropped = MEMIO_READ_FIELD(msg, FW_DEVA_COMPLETED_NUM_MBSDROPPED); + + mbs_recovered = MEMIO_READ_FIELD(msg, FW_DEVA_COMPLETED_NUM_MBSRECOVERED); + + pdec_pict = lst_first(pend_pict_list); + while (pdec_pict) { + if (pdec_pict->transaction_id == trans_id) + break; + pdec_pict = lst_next(pdec_pict); + } + /* + * We must have a picture in the list that matches + * the transaction id + */ + if (!pdec_pict) + return IMG_ERROR_FATAL; + + if (!(pdec_pict->first_fld_fwmsg) || !(pdec_pict->second_fld_fwmsg)) + return IMG_ERROR_FATAL; + + flag = pdec_pict->first_fld_fwmsg->pict_attrs.first_fld_rcvd; + if (flag) { + pict_attrs = &pdec_pict->second_fld_fwmsg->pict_attrs; + } else { + pict_attrs = &pdec_pict->first_fld_fwmsg->pict_attrs; + flag = 1; + } + + pict_attrs->fe_err = (unsigned int)error_flags; + pict_attrs->no_be_wdt = no_bewdts; + pict_attrs->mbs_dropped = mbs_dropped; + pict_attrs->mbs_recovered = mbs_recovered; + /* + * We may have successfully replayed the picture, + * so reset the error flags + */ + pict_attrs->pict_attrs.dwrfired = 0; + pict_attrs->pict_attrs.mmufault = 0; + pict_attrs->pict_attrs.deverror = 0; + + *msg_attr = VXD_MSG_ATTR_DECODED; + *decpict = pdec_pict; + break; + } + + case
FW_DEVA_PANIC: + { + unsigned int panic_info = MEMIO_READ_FIELD(msg, FW_DEVA_PANIC_ERROR_INT); + unsigned char panic_reason[PANIC_REASON_LEN] = "Reason(s): "; + unsigned char is_panic_reason_identified = 0; + /* + * Create panic reason string. + */ + if (REGIO_READ_FIELD(panic_info, PVDEC_CORE, CR_PVDEC_HOST_INTERRUPT_STATUS, + CR_HOST_SYS_WDT)) { + strncat(panic_reason, apanic_reason[PANIC_REASON_WDT], + PANIC_REASON_LEN - 1); + strncat(panic_reason, ", ", PANIC_REASON_LEN - 1); + is_panic_reason_identified = 1; + } + if (REGIO_READ_FIELD(panic_info, PVDEC_CORE, CR_PVDEC_HOST_INTERRUPT_STATUS, + CR_HOST_READ_TIMEOUT_PROC_IRQ)) { + strncat(panic_reason, apanic_reason[PANIC_REASON_READ_TIMEOUT], + PANIC_REASON_LEN - 1); + strncat(panic_reason, ", ", PANIC_REASON_LEN - 1); + is_panic_reason_identified = 1; + } + if (REGIO_READ_FIELD(panic_info, PVDEC_CORE, CR_PVDEC_HOST_INTERRUPT_STATUS, + CR_HOST_COMMAND_TIMEOUT_PROC_IRQ)) { + strncat(panic_reason, apanic_reason[PANIC_REASON_CMD_TIMEOUT], + PANIC_REASON_LEN - 1); + strncat(panic_reason, ", ", PANIC_REASON_LEN - 1); + is_panic_reason_identified = 1; + } + if (!is_panic_reason_identified) { + strncat(panic_reason, apanic_reason[PANIC_REASON_OTHER], + PANIC_REASON_LEN - 1); + strncat(panic_reason, ", ", PANIC_REASON_LEN - 1); + } + panic_reason[strlen(panic_reason) - 2] = 0; + if (trans_id != 0) + pr_err("TID=0x%08X [FIRMWARE PANIC %s]\n", trans_id, panic_reason); + else + pr_err("TID=NULL [GENERAL FIRMWARE PANIC %s]\n", panic_reason); + + break; + } + + case FW_ASSERT: + { + unsigned int fwfile_namehash = MEMIO_READ_FIELD(msg, FW_ASSERT_FILE_NAME_HASH); + unsigned int fwfile_line = MEMIO_READ_FIELD(msg, FW_ASSERT_FILE_LINE); + + pr_err("ASSERT file name hash:0x%08X line number:%d\n", + fwfile_namehash, fwfile_line); + break; + } + + case FW_SO: + { + unsigned int task_name = MEMIO_READ_FIELD(msg, FW_SO_TASK_NAME); + unsigned char sztaskname[sizeof(unsigned int) + 1]; + + sztaskname[0] = task_name >> 24; + sztaskname[1] = (task_name >> 16) & 0xff; + sztaskname[2] = (task_name >> 8) & 0xff; + sztaskname[3] = task_name & 0xff; + if (sztaskname[3] != 0) + sztaskname[4] = 0; + pr_warn("STACK OVERFLOW for %s task\n", sztaskname); + break; + } + + case FW_VXD_EMPTY_COMPL: + /* + * Empty completion message sent as response to init, + * configure, etc. The architecture of the vxd.ko module + * requires the firmware to send a reply for every + * message submitted by the user space.
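+ * The empty reply still carries the MSG_ID from the generic message header, presumably so the kernel side can pair it with the originating request.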
+ */ + break; + + default: + break; + } + + return 0; +} + +static int vdeckm_handle_hosttomtx_msg(unsigned int *msg, struct lst_t *pend_pict_list, + enum vxd_msg_attr *msg_attr, + struct dec_decpict **decpict, + unsigned char msg_type, + unsigned int trans_id, + unsigned int msg_flags) +{ + struct dec_decpict *pdec_pict; + + pr_debug("Received message from HOST\n"); + + switch (msg_type) { + case FW_DEVA_PARSE: + { + struct dec_pict_attrs *pict_attrs = NULL; + unsigned char flag = 0; + + pdec_pict = lst_first(pend_pict_list); + while (pdec_pict) { + if (pdec_pict->transaction_id == trans_id) + break; + + pdec_pict = lst_next(pdec_pict); + } + + /* + * We must have a picture in the list that matches + * the transaction id + */ + if (!pdec_pict) { + pr_err("Firmware decoded message received\n"); + pr_err("no pending picture\n"); + return IMG_ERROR_FATAL; + } + + if (!(pdec_pict->first_fld_fwmsg) || !(pdec_pict->second_fld_fwmsg)) { + pr_err("invalid pending picture struct\n"); + return IMG_ERROR_FATAL; + } + + flag = pdec_pict->first_fld_fwmsg->pict_attrs.first_fld_rcvd; + if (flag) { + pict_attrs = &pdec_pict->second_fld_fwmsg->pict_attrs; + } else { + pict_attrs = &pdec_pict->first_fld_fwmsg->pict_attrs; + flag = 1; + } + + /* + * The below info is fetched from firmware state + * afterwards, so just set this to zero for now. + */ + pict_attrs->fe_err = 0; + pict_attrs->no_be_wdt = 0; + pict_attrs->mbs_dropped = 0; + pict_attrs->mbs_recovered = 0; + + vxd_get_pictattrs(msg_flags, &pict_attrs->pict_attrs); + vxd_get_msgerrattr(msg_flags, msg_attr); + + if (*msg_attr == VXD_MSG_ATTR_FATAL) + pr_err("[TID=0x%08X] [DECODE_FAILED]\n", trans_id); + if (*msg_attr == VXD_MSG_ATTR_CANCELED) + pr_err("[TID=0x%08X] [DECODE_CANCELED]\n", trans_id); + + *decpict = pdec_pict; + break; + } + + case FW_DEVA_PARSE_FRAGMENT: + /* + * Do nothing - Picture holds the list of fragments. + * So, in case of any error those would be replayed + * anyway. + */ + break; + default: + pr_warn("Unknown message received 0x%02x\n", msg_type); + break; + } + + return 0; +} + +static int vdeckm_process_msg(const void *hndl_vxd, unsigned int *msg, + struct lst_t *pend_pict_list, + unsigned int msg_flags, + enum vxd_msg_attr *msg_attr, + struct dec_decpict **decpict) +{ + struct vdeckm_context *core_ctx = (struct vdeckm_context *)hndl_vxd; + unsigned char msg_type; + unsigned char msg_group; + unsigned int trans_id = 0; + struct vdec_pict_hwcrc *pict_hwcrc = NULL; + struct dec_decpict *pdec_pict; + + if (!core_ctx || !msg || !msg_attr || !pend_pict_list || !decpict) + return IMG_ERROR_INVALID_PARAMETERS; + + *msg_attr = VXD_MSG_ATTR_NONE; + *decpict = NULL; + + trans_id = MEMIO_READ_FIELD(msg, FW_DEVA_GENMSG_TRANS_ID); + msg_type = MEMIO_READ_FIELD(msg, FW_DEVA_GENMSG_MSG_TYPE); + msg_group = msg_type & MSG_GROUP_MASK; + + switch (msg_group) { + case MSG_TYPE_START_PSR_MTXHOST_MSG: + vdeckm_handle_mtxtohost_msg(msg, pend_pict_list, msg_attr, + decpict, msg_type, trans_id); + break; + /* + * Picture decode has been returned as unprocessed. + * Locate the picture with corresponding TID and mark + * it as decoded with errors. 
+ */ + case MSG_TYPE_START_PSR_HOSTMTX_MSG: + vdeckm_handle_hosttomtx_msg(msg, pend_pict_list, msg_attr, + decpict, msg_type, trans_id, + msg_flags); + break; + + case FW_DEVA_SIGNATURES_HEVC: + case FW_DEVA_SIGNATURES_LEGACY: + { + unsigned int *signatures = msg + (FW_DEVA_SIGNATURES_SIGNATURES_OFFSET / + sizeof(unsigned int)); + unsigned char sigcount = MEMIO_READ_FIELD(msg, FW_DEVA_SIGNATURES_MSG_SIZE) - + ((FW_DEVA_SIGNATURES_SIZE / sizeof(unsigned int)) - 1); + unsigned int selected = MEMIO_READ_FIELD(msg, FW_DEVA_SIGNATURES_SIGNATURE_SELECT); + unsigned char i, j = 0; + + pdec_pict = lst_first(pend_pict_list); + while (pdec_pict) { + if (pdec_pict->transaction_id == trans_id) + break; + pdec_pict = lst_next(pdec_pict); + } + + /* We must have a picture in the list that matches the tid */ + VDEC_ASSERT(pdec_pict); + if (!pdec_pict) { + pr_err("Firmware signatures message received with no pending picture\n"); + return IMG_ERROR_FATAL; + } + + VDEC_ASSERT(pdec_pict->first_fld_fwmsg); + VDEC_ASSERT(pdec_pict->second_fld_fwmsg); + if (!pdec_pict->first_fld_fwmsg || !pdec_pict->second_fld_fwmsg) { + pr_err("Invalid pending picture struct\n"); + return IMG_ERROR_FATAL; + } + if (pdec_pict->first_fld_fwmsg->pict_hwcrc.first_fld_rcvd) { + pict_hwcrc = &pdec_pict->second_fld_fwmsg->pict_hwcrc; + } else { + pict_hwcrc = &pdec_pict->first_fld_fwmsg->pict_hwcrc; + if (selected & (PVDEC_SIGNATURE_GROUP_20 | PVDEC_SIGNATURE_GROUP_24)) + pdec_pict->first_fld_fwmsg->pict_hwcrc.first_fld_rcvd = TRUE; + } + + for (i = 0; i < 32; i++) { + unsigned int group = selected & (1 << i); + + switch (group) { + case PVDEC_SIGNATURE_GROUP_20: + pict_hwcrc->crc_vdmc_pix_recon = signatures[j++]; + break; + + case PVDEC_SIGNATURE_GROUP_24: + pict_hwcrc->vdeb_sysmem_wrdata = signatures[j++]; + break; + + default: + break; + } + } + + /* sanity check */ + sigcount -= j; + VDEC_ASSERT(sigcount == 0); + + /* + * suppress PVDEC_SIGNATURE_GROUP_1 and notify + * only about groups used for verification + */ +#ifdef DEBUG_DECODER_DRIVER + if (selected & (PVDEC_SIGNATURE_GROUP_20 | PVDEC_SIGNATURE_GROUP_24)) + pr_info("[TID=0x%08X] [SIGNATURES]\n", trans_id); +#endif + + *decpict = pdec_pict; + + break; + } + + default: { +#ifdef DEBUG_DECODER_DRIVER + unsigned short msg_size, i; + + pr_warn("Unknown message type received: 0x%x", msg_type); + + msg_size = MEMIO_READ_FIELD(msg, FW_DEVA_GENMSG_MSG_SIZE); + + for (i = 0; i < msg_size; i++) + pr_info("0x%04x: 0x%08x\n", i, msg[i]); +#endif + break; + } + } + + return 0; +} + +static void vdeckm_vlr_copy(void *dst, void *src, unsigned int size) +{ + unsigned int *pdst = (unsigned int *)dst; + unsigned int *psrc = (unsigned int *)src; + + size /= 4; + while (size--) + *pdst++ = *psrc++; +} + +static int vdeckm_get_core_state(const void *hndl_vxd, struct vxd_states *state) +{ + struct vdeckm_context *core_ctx = (struct vdeckm_context *)hndl_vxd; + struct vdecfw_pvdecfirmwarestate firmware_state; + unsigned char pipe = 0; + +#ifdef ERROR_RECOVERY_SIMULATION + /* + * if disable_fw_irq_value is not zero, return error. If processed further + * the kernel will crash because we have ignored the interrupt, but here + * we will try to access comms_ram_addr which will result in crash. + */ + if (disable_fw_irq_value != 0) + return IMG_ERROR_INVALID_PARAMETERS; +#endif + + if (!core_ctx || !state) + return IMG_ERROR_INVALID_PARAMETERS; + + /* + * If state is requested for the first time. + */ + if (core_ctx->state_size == 0) { + unsigned int regval; + /* + * get the state buffer info. 
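+ * The 32-bit word read below is assumed to pack the state buffer size and its offset within the comms RAM into its two 16-bit halves; the PVDEC_COM_RAM_BUF_GET_SIZE/OFFSET helpers unpack it.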
+ */ + regval = *((unsigned int *)core_ctx->comms_ram_addr + + (PVDEC_COM_RAM_STATE_BUF_SIZE_AND_OFFSET_OFFSET / sizeof(unsigned int))); + core_ctx->state_size = PVDEC_COM_RAM_BUF_GET_SIZE(regval, STATE); + core_ctx->state_offset = PVDEC_COM_RAM_BUF_GET_OFFSET(regval, STATE); + } + + /* + * If state buffer is available. + */ + if (core_ctx->state_size) { + /* + * Determine the latest transaction to have passed each + * checkpoint in the firmware. + * Read the firmware state from VEC Local RAM + */ + vdeckm_vlr_copy(&firmware_state, (unsigned char *)core_ctx->comms_ram_addr + + core_ctx->state_offset, core_ctx->state_size); + + for (pipe = 0; pipe < core_ctx->props.num_pixel_pipes; pipe++) { + /* + * Set pipe presence. + */ + state->fw_state.pipe_state[pipe].is_pipe_present = 1; + + /* + * For checkpoints copy message ids here. These will + * be translated into transaction ids later. + */ + memcpy(state->fw_state.pipe_state[pipe].acheck_point, + firmware_state.pipestate[pipe].check_point, + sizeof(state->fw_state.pipe_state[pipe].acheck_point)); + state->fw_state.pipe_state[pipe].firmware_action = + firmware_state.pipestate[pipe].firmware_action; + state->fw_state.pipe_state[pipe].cur_codec = + firmware_state.pipestate[pipe].curr_codec; + state->fw_state.pipe_state[pipe].fe_slices = + firmware_state.pipestate[pipe].fe_slices; + state->fw_state.pipe_state[pipe].be_slices = + firmware_state.pipestate[pipe].be_slices; + state->fw_state.pipe_state[pipe].fe_errored_slices = + firmware_state.pipestate[pipe].fe_errored_slices; + state->fw_state.pipe_state[pipe].be_errored_slices = + firmware_state.pipestate[pipe].be_errored_slices; + state->fw_state.pipe_state[pipe].be_mbs_dropped = + firmware_state.pipestate[pipe].be_mbs_dropped; + state->fw_state.pipe_state[pipe].be_mbs_recovered = + firmware_state.pipestate[pipe].be_mbs_recovered; + state->fw_state.pipe_state[pipe].fe_mb.x = + firmware_state.pipestate[pipe].last_fe_mb_xy & 0xFF; + state->fw_state.pipe_state[pipe].fe_mb.y = + (firmware_state.pipestate[pipe].last_fe_mb_xy >> 16) & 0xFF; + state->fw_state.pipe_state[pipe].be_mb.x = + REGIO_READ_FIELD(firmware_state.pipestate[pipe].last_be_mb_xy, + MSVDX_VDMC, + CR_VDMC_MACROBLOCK_NUMBER, + CR_VDMC_MACROBLOCK_X_OFFSET); + state->fw_state.pipe_state[pipe].be_mb.y = + REGIO_READ_FIELD(firmware_state.pipestate[pipe].last_be_mb_xy, + MSVDX_VDMC, + CR_VDMC_MACROBLOCK_NUMBER, + CR_VDMC_MACROBLOCK_Y_OFFSET); + } + } + + return 0; +} + +static int vdeckm_prepare_batch(struct vdeckm_context *core_ctx, + const struct hwctrl_batch_msgdata *batch_msgdata, + unsigned char **msg) +{ + unsigned char vdec_flags = 0; + unsigned short flags = 0; + unsigned char *pmsg = kzalloc(FW_DEVA_DECODE_SIZE, GFP_KERNEL); + struct vidio_ddbufinfo *pbatch_msg_bufinfo = batch_msgdata->batchmsg_bufinfo; + + if (!pmsg) + return IMG_ERROR_MALLOC_FAILED; + + if (batch_msgdata->size_delimited_mode) + vdec_flags |= FW_VDEC_NAL_SIZE_DELIM; + + flags |= FW_DEVA_RENDER_HOST_INT; + + /* + * Message type and stream ID + */ + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_GENMSG_MSG_TYPE, FW_DEVA_PARSE, unsigned char*); + + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_CTRL_ALLOC_ADDR, + (unsigned int)pbatch_msg_bufinfo->dev_virt, unsigned char*); + + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_BUFFER_SIZE, + batch_msgdata->ctrl_alloc_bytes / sizeof(unsigned int), unsigned char*); + + /* + * Operating mode and decode flags + */ + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_OPERATING_MODE, batch_msgdata->operating_mode, + unsigned char*); + + MEMIO_WRITE_FIELD(pmsg, 
FW_DEVA_DECODE_FLAGS, flags, unsigned char*); + + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_VDEC_FLAGS, vdec_flags, unsigned char*); + + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_GENC_ID, batch_msgdata->genc_id, unsigned char*); + + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_MB_LOAD, batch_msgdata->mb_load, unsigned char*); + + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_STREAMID, + GET_STREAM_ID(batch_msgdata->transaction_id), unsigned char*); + + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_EXT_STATE_BUFFER, + (unsigned int)batch_msgdata->pvdec_fwctx->dev_virt, unsigned char*); + + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_MSG_ID, ++core_ctx->current_msgid, + unsigned char*); + + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_TRANS_ID, batch_msgdata->transaction_id, + unsigned char*); + + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_TILE_CFG, batch_msgdata->tile_cfg, unsigned char*); + + /* + * size of message + */ + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_GENMSG_MSG_SIZE, + FW_DEVA_DECODE_SIZE / sizeof(unsigned int), unsigned char*); + + *msg = pmsg; + + return 0; +} + +static int vdeckm_prepare_fragment(struct vdeckm_context *core_ctx, + const struct hwctrl_fragment_msgdata + *fragment_msgdata, + unsigned char **msg) +{ + struct vidio_ddbufinfo *pbatch_msg_bufinfo = NULL; + unsigned char *pmsg = NULL; + + pbatch_msg_bufinfo = fragment_msgdata->batchmsg_bufinfo; + + if (!(fragment_msgdata->batchmsg_bufinfo)) { + pr_err("Batch message info missing!\n"); + return IMG_ERROR_INVALID_PARAMETERS; + } + + pmsg = kzalloc(FW_DEVA_DECODE_FRAGMENT_SIZE, GFP_KERNEL); + if (!pmsg) + return IMG_ERROR_MALLOC_FAILED; + /* + * message type and stream id + */ + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_GENMSG_MSG_TYPE, + FW_DEVA_PARSE_FRAGMENT, unsigned char*); + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_MSG_ID, ++core_ctx->current_msgid, unsigned char*); + + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_FRAGMENT_CTRL_ALLOC_ADDR, + (unsigned int)pbatch_msg_bufinfo->dev_virt + + fragment_msgdata->ctrl_alloc_offset, unsigned char*); + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_DECODE_FRAGMENT_BUFFER_SIZE, + fragment_msgdata->ctrl_alloc_bytes / sizeof(unsigned int), + unsigned char*); + + /* + * size of message + */ + MEMIO_WRITE_FIELD(pmsg, FW_DEVA_GENMSG_MSG_SIZE, + FW_DEVA_DECODE_FRAGMENT_SIZE / sizeof(unsigned int), unsigned char*); + + *msg = pmsg; + + return 0; +} + +static int vdeckm_get_message(const void *hndl_vxd, const enum hwctrl_msgid msgid, + const struct hwctrl_msgdata *msgdata, + struct hwctrl_to_kernel_msg *to_kernelmsg) +{ + unsigned int result = 0; + struct vdeckm_context *core_ctx = (struct vdeckm_context *)hndl_vxd; + + if (!core_ctx || !to_kernelmsg || !msgdata) + return IMG_ERROR_INVALID_PARAMETERS; + + switch (msgid) { + case HWCTRL_MSGID_BATCH: + result = vdeckm_prepare_batch(core_ctx, &msgdata->batch_msgdata, + &to_kernelmsg->msg_hdr); + break; + + case HWCTRL_MSGID_FRAGMENT: + result = vdeckm_prepare_fragment(core_ctx, &msgdata->fragment_msgdata, + &to_kernelmsg->msg_hdr); + vxd_set_msgflag(VXD_MSG_FLAG_DROP, &to_kernelmsg->flags); + break; + + default: + result = IMG_ERROR_GENERIC_FAILURE; + pr_err("got a message that is not supported by PVDEC"); + break; + } + + if (result == 0) { + /* Set the stream ID for the next message to be sent. 
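+ * Note that FW_DEVA_GENMSG_MSG_SIZE holds the message length in 32-bit words; it is converted to bytes below for the kernel-side copy.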
*/ + to_kernelmsg->km_str_id = msgdata->km_str_id; + to_kernelmsg->msg_size = MEMIO_READ_FIELD(to_kernelmsg->msg_hdr, + FW_DEVA_GENMSG_MSG_SIZE) * + sizeof(unsigned int); + } + + return result; +} + +static void hwctrl_dump_state(struct vxd_states *prev_state, + struct vxd_states *cur_state, + unsigned char pipe_minus1) +{ + pr_info("Back-End MbX [% 10d]", + prev_state->fw_state.pipe_state[pipe_minus1].be_mb.x); + pr_info("Back-End MbY [% 10d]", + prev_state->fw_state.pipe_state[pipe_minus1].be_mb.y); + pr_info("Front-End MbX [% 10d]", + prev_state->fw_state.pipe_state[pipe_minus1].fe_mb.x); + pr_info("Front-End MbY [% 10d]", + prev_state->fw_state.pipe_state[pipe_minus1].fe_mb.y); + pr_info("VDECFW_CHECKPOINT_BE_PICTURE_COMPLETE [0x%08X]", + cur_state->fw_state.pipe_state[pipe_minus1].acheck_point + [VDECFW_CHECKPOINT_BE_PICTURE_COMPLETE]); + pr_info("VDECFW_CHECKPOINT_BE_1SLICE_DONE [0x%08X]", + cur_state->fw_state.pipe_state[pipe_minus1].acheck_point + [VDECFW_CHECKPOINT_BE_1SLICE_DONE]); + pr_info("VDECFW_CHECKPOINT_BE_PICTURE_STARTED [0x%08X]", + cur_state->fw_state.pipe_state[pipe_minus1].acheck_point + [VDECFW_CHECKPOINT_BE_PICTURE_STARTED]); + pr_info("VDECFW_CHECKPOINT_FE_PICTURE_COMPLETE [0x%08X]", + cur_state->fw_state.pipe_state[pipe_minus1].acheck_point + [VDECFW_CHECKPOINT_FE_PICTURE_COMPLETE]); + pr_info("VDECFW_CHECKPOINT_FE_PARSE_DONE [0x%08X]", + cur_state->fw_state.pipe_state[pipe_minus1].acheck_point + [VDECFW_CHECKPOINT_FE_PARSE_DONE]); + pr_info("VDECFW_CHECKPOINT_FE_1SLICE_DONE [0x%08X]", + cur_state->fw_state.pipe_state[pipe_minus1].acheck_point + [VDECFW_CHECKPOINT_FE_1SLICE_DONE]); + pr_info("VDECFW_CHECKPOINT_ENTDEC_STARTED [0x%08X]", + cur_state->fw_state.pipe_state[pipe_minus1].acheck_point + [VDECFW_CHECKPOINT_ENTDEC_STARTED]); + pr_info("VDECFW_CHECKPOINT_FIRMWARE_SAVED [0x%08X]", + cur_state->fw_state.pipe_state[pipe_minus1].acheck_point + [VDECFW_CHECKPOINT_FIRMWARE_SAVED]); + pr_info("VDECFW_CHECKPOINT_PICMAN_COMPLETE [0x%08X]", + cur_state->fw_state.pipe_state[pipe_minus1].acheck_point + [VDECFW_CHECKPOINT_PICMAN_COMPLETE]); + pr_info("VDECFW_CHECKPOINT_FIRMWARE_READY [0x%08X]", + cur_state->fw_state.pipe_state[pipe_minus1].acheck_point + [VDECFW_CHECKPOINT_FIRMWARE_READY]); + pr_info("VDECFW_CHECKPOINT_PICTURE_STARTED [0x%08X]", + cur_state->fw_state.pipe_state[pipe_minus1].acheck_point + [VDECFW_CHECKPOINT_PICTURE_STARTED]); +} + +static unsigned int hwctrl_calculate_load(struct bspp_pict_hdr_info *pict_hdr_info) +{ + return (((pict_hdr_info->coded_frame_size.width + 15) / 16) + * ((pict_hdr_info->coded_frame_size.height + 15) / 16)); +} + +static int hwctrl_send_batch_message(struct hwctrl_ctx *hwctx, + struct dec_decpict *decpict, + void *vxd_dec_ctx) +{ + int result; + struct hwctrl_to_kernel_msg to_kernelmsg = {0}; + struct vidio_ddbufinfo *batchmsg_bufinfo = + decpict->batch_msginfo->ddbuf_info; + struct hwctrl_msgdata msg_data; + struct hwctrl_batch_msgdata *batch_msgdata = &msg_data.batch_msgdata; + + memset(&msg_data, 0, sizeof(msg_data)); + + msg_data.km_str_id = GET_STREAM_ID(decpict->transaction_id); + + batch_msgdata->batchmsg_bufinfo = batchmsg_bufinfo; + + batch_msgdata->transaction_id = decpict->transaction_id; + batch_msgdata->pvdec_fwctx = decpict->str_pvdec_fw_ctxbuf; + batch_msgdata->ctrl_alloc_bytes = decpict->ctrl_alloc_bytes; + batch_msgdata->operating_mode = decpict->operating_op; + batch_msgdata->genc_id = decpict->genc_id; + batch_msgdata->mb_load = hwctrl_calculate_load(decpict->pict_hdr_info); + 
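/* + * Size-delimited (NAL length-prefixed) mode is selected whenever the + * parser is not in start-code-only mode; this feeds the + * FW_VDEC_NAL_SIZE_DELIM flag set in vdeckm_prepare_batch() above. + */ + 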
batch_msgdata->size_delimited_mode = + (decpict->pict_hdr_info->parser_mode != VDECFW_SCP_ONLY) ? + (1) : (0); + + result = vdeckm_get_message(hwctx->hndl_vxd, HWCTRL_MSGID_BATCH, + &msg_data, &to_kernelmsg); + if (result != 0) { + pr_err("failed to get decode message\n"); + return result; + } + + pr_debug("[HWCTRL] send batch message\n"); + result = vdeckm_send_message(hwctx->hndl_vxd, &to_kernelmsg, + vxd_dec_ctx); + if (result != 0) + return result; + + vdeckm_return_msg(hwctx->hndl_vxd, &to_kernelmsg); + + return 0; +} + +int hwctrl_process_msg(void *hndl_hwctx, unsigned int msg_flags, unsigned int *msg, + struct dec_decpict **decpict) +{ + int result; + struct hwctrl_ctx *hwctx; + enum vxd_msg_attr msg_attr = VXD_MSG_ATTR_NONE; + struct dec_decpict *pdecpict = NULL; + unsigned int val_first = 0; + unsigned int val_sec = 0; + + if (!hndl_hwctx || !msg || !decpict) { + VDEC_ASSERT(0); + return IMG_ERROR_INVALID_PARAMETERS; + } + + hwctx = (struct hwctrl_ctx *)hndl_hwctx; + + *decpict = NULL; + + pr_debug("[HWCTRL] : process message\n"); + result = vdeckm_process_msg(hwctx->hndl_vxd, msg, &hwctx->pend_pict_list, msg_flags, + &msg_attr, &pdecpict); + + /* validate pointers before using them */ + if (!pdecpict || !pdecpict->first_fld_fwmsg || !pdecpict->second_fld_fwmsg) { + VDEC_ASSERT(0); + return -EIO; + } + + val_first = pdecpict->first_fld_fwmsg->pict_attrs.pict_attrs.deverror; + val_sec = pdecpict->second_fld_fwmsg->pict_attrs.pict_attrs.deverror; + + if (val_first || val_sec) + pr_err("device signaled critical error!!!\n"); + + if (msg_attr == VXD_MSG_ATTR_DECODED) { + pdecpict->state = DECODER_PICTURE_STATE_DECODED; + /* + * We have successfully decoded a picture, either normally or + * after a replay. + * Mark the HW as being in a good state. + */ + hwctx->is_fatal_state = 0; + } else if (msg_attr == VXD_MSG_ATTR_FATAL) { + struct hwctrl_state state; + unsigned char pipe_minus1 = 0; + + memset(&state, 0, sizeof(state)); + + result = hwctrl_get_core_status(hwctx, &state); + if (result == 0) { + hwctx->is_prev_hw_state_set = 1; + memcpy(&hwctx->prev_state, &state, sizeof(struct hwctrl_state)); + + for (pipe_minus1 = 0; pipe_minus1 < hwctx->num_pipes; + pipe_minus1++) { + hwctrl_dump_state(&state.core_state, &state.core_state, + pipe_minus1); + } + } + } + *decpict = pdecpict; + + return 0; +} + +int hwctrl_getcore_cached_status(void *hndl_hwctx, struct hwctrl_state *state) +{ + struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx; + + if (hwctx->is_prev_hw_state_set) + memcpy(state, &hwctx->prev_state, sizeof(struct hwctrl_state)); + else + return IMG_ERROR_UNEXPECTED_STATE; + + return 0; +} + +int hwctrl_get_core_status(void *hndl_hwctx, struct hwctrl_state *state) +{ + struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx; + unsigned int result = IMG_ERROR_GENERIC_FAILURE; + + if (!hwctx->is_fatal_state && state) { + struct vxd_states *pcorestate = NULL; + + pcorestate = &state->core_state; + + memset(pcorestate, 0, sizeof(*(pcorestate))); + + result = vdeckm_get_core_state(hwctx->hndl_vxd, pcorestate); + } + + return result; +} + +int hwctrl_is_on_seq_replay(void *hndl_hwctx) +{ + struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx; + + return hwctx->is_on_seq_replay; +} + +int hwctrl_picture_submitbatch(void *hndl_hwctx, struct dec_decpict *decpict, void *vxd_dec_ctx) +{ + struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx; + + if (hwctx->is_initialised) { + lst_add(&hwctx->pend_pict_list, decpict); + if (!hwctx->is_on_seq_replay) + return
hwctrl_send_batch_message(hwctx, decpict, vxd_dec_ctx); + } + + return 0; +} + +int hwctrl_getpicpend_pictlist(void *hndl_hwctx, unsigned int transaction_id, + struct dec_decpict **decpict) +{ + struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx; + struct dec_decpict *dec_pic; + + dec_pic = lst_first(&hwctx->pend_pict_list); + while (dec_pic) { + if (dec_pic->transaction_id == transaction_id) { + *decpict = dec_pic; + break; + } + dec_pic = lst_next(dec_pic); + } + + if (!dec_pic) + return IMG_ERROR_INVALID_ID; + + return 0; +} + +int hwctrl_peekheadpiclist(void *hndl_hwctx, struct dec_decpict **decpict) +{ + struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx; + + if (hwctx) + *decpict = lst_first(&hwctx->pend_pict_list); + + if (*decpict) + return 0; + + return IMG_ERROR_GENERIC_FAILURE; +} + +int hwctrl_getdecodedpicture(void *hndl_hwctx, struct dec_decpict **decpict) +{ + struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx; + + if (hwctx) { + struct dec_decpict *cur_decpict; + /* + * Ensure that this picture is in the list. + */ + cur_decpict = lst_first(&hwctx->pend_pict_list); + while (cur_decpict) { + if (cur_decpict->state == DECODER_PICTURE_STATE_DECODED) { + *decpict = cur_decpict; + return 0; + } + + cur_decpict = lst_next(cur_decpict); + } + } + + return IMG_ERROR_VALUE_OUT_OF_RANGE; +} + +void hwctrl_removefrom_piclist(void *hndl_hwctx, struct dec_decpict *decpict) +{ + struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx; + + if (hwctx) { + struct dec_decpict *cur_decpict; + /* + * Ensure that this picture is in the list. + */ + cur_decpict = lst_first(&hwctx->pend_pict_list); + while (cur_decpict) { + if (cur_decpict == decpict) { + lst_remove(&hwctx->pend_pict_list, decpict); + break; + } + + cur_decpict = lst_next(cur_decpict); + } + } +} + +int hwctrl_getregsoffset(void *hndl_hwctx, struct decoder_regsoffsets *regs_offsets) +{ + struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx; + + return vdeckm_getregsoffsets(hwctx->hndl_vxd, regs_offsets); +} + +static int pvdec_create(struct vxd_dev *vxd, struct vxd_coreprops *core_props, + void **hndl_vdeckm_context) +{ + struct vdeckm_context *corectx; + struct vxd_core_props hndl_core_props; + int result; + + if (!hndl_vdeckm_context || !core_props) + return IMG_ERROR_INVALID_PARAMETERS; + + /* + * Obtain core context. 
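+ * Only a single core is handled here: entry 0 of acore_ctx is always + * used, even though VXD_MAX_CORES entries are reserved.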
+ */ + corectx = &acore_ctx[0]; + + memset(corectx, 0, sizeof(*corectx)); + + corectx->core_num = 0; + + result = vxd_pvdec_get_props(vxd->dev, vxd->reg_base, &hndl_core_props); + if (result != 0) + return result; + + vxd_get_coreproperties(&hndl_core_props, &corectx->props); + + memcpy(core_props, &corectx->props, sizeof(*core_props)); + + *hndl_vdeckm_context = corectx; + + return 0; +} + +int hwctrl_deinitialise(void *hndl_hwctx) +{ + struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx; + + if (hwctx->is_initialised) { + kfree(hwctx); + hwctx = NULL; + } + + return 0; +} + +int hwctrl_initialise(void *dec_core, void *comp_int_userdata, + const struct vdecdd_dd_devconfig *dd_devconfig, + struct vxd_coreprops *core_props, void **hndl_hwctx) +{ + struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)*hndl_hwctx; + int result; + + if (!hwctx) { + hwctx = kzalloc(sizeof(*(hwctx)), GFP_KERNEL); + if (!hwctx) + return IMG_ERROR_OUT_OF_MEMORY; + + *hndl_hwctx = hwctx; + } + + if (!hwctx->is_initialised) { + hwctx->hndl_vxd = ((struct dec_core_ctx *)dec_core)->dec_ctx->dev_handle; + result = pvdec_create(hwctx->hndl_vxd, core_props, &hwctx->hndl_vxd); + if (result != 0) + goto error; + + lst_init(&hwctx->pend_pict_list); + + hwctx->devconfig = *dd_devconfig; + hwctx->num_pipes = core_props->num_pixel_pipes; + hwctx->comp_init_userdata = comp_int_userdata; + hwctx->dec_core = dec_core; + hwctx->is_initialised = 1; + hwctx->is_on_seq_replay = 0; + hwctx->is_fatal_state = 0; + } + + return 0; +error: + hwctrl_deinitialise(*hndl_hwctx); + + return result; +} + +static int hwctrl_send_fragment_message(struct hwctrl_ctx *hwctx, + struct dec_pict_fragment *pict_fragment, + struct dec_decpict *decpict, + void *vxd_dec_ctx) +{ + int result; + struct hwctrl_to_kernel_msg to_kernelmsg = {0}; + struct hwctrl_msgdata msg_data; + struct hwctrl_fragment_msgdata *pfragment_msgdata = + &msg_data.fragment_msgdata; + + msg_data.km_str_id = GET_STREAM_ID(decpict->transaction_id); + + pfragment_msgdata->ctrl_alloc_bytes = pict_fragment->ctrl_alloc_bytes; + + pfragment_msgdata->ctrl_alloc_offset = pict_fragment->ctrl_alloc_offset; + + pfragment_msgdata->batchmsg_bufinfo = decpict->batch_msginfo->ddbuf_info; + + result = vdeckm_get_message(hwctx->hndl_vxd, HWCTRL_MSGID_FRAGMENT, &msg_data, + &to_kernelmsg); + if (result != 0) { + pr_err("Failed to get decode message\n"); + return result; + } + + result = vdeckm_send_message(hwctx->hndl_vxd, &to_kernelmsg, vxd_dec_ctx); + if (result != 0) + return result; + + vdeckm_return_msg(hwctx->hndl_vxd, &to_kernelmsg); + + return 0; +} + +int hwctrl_picture_submit_fragment(void *hndl_hwctx, + struct dec_pict_fragment *pict_fragment, + struct dec_decpict *decpict, + void *vxd_dec_ctx) +{ + struct hwctrl_ctx *hwctx = (struct hwctrl_ctx *)hndl_hwctx; + unsigned int result = 0; + + if (hwctx->is_initialised) { + result = hwctrl_send_fragment_message(hwctx, pict_fragment, + decpict, vxd_dec_ctx); + if (result != 0) + pr_err("Failed to send fragment message to firmware !"); + } + + return result; +} diff --git a/drivers/staging/media/vxd/decoder/hw_control.h b/drivers/staging/media/vxd/decoder/hw_control.h new file mode 100644 index 000000000000..3f430969b998 --- /dev/null +++ b/drivers/staging/media/vxd/decoder/hw_control.h @@ -0,0 +1,144 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * VXD DEC Hardware control implementation + * + * Copyright (c) Imagination Technologies Ltd. 
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Amit Makani
+ *
+ * Re-written for upstreaming
+ *	Sidraya Jayagond
+ *	Prashanth Kumar Amai
+ */
+
+#ifndef _HW_CONTROL_H
+#define _HW_CONTROL_H
+
+#include "bspp.h"
+#include "decoder.h"
+#include "fw_interface.h"
+#include "img_dec_common.h"
+#include "img_errors.h"
+#include "lst.h"
+#include "mem_io.h"
+#include "vdecdd_defs.h"
+#include "vdecfw_shared.h"
+#include "vid_buf.h"
+#include "vxd_ext.h"
+#include "vxd_props.h"
+
+/* Size of additional buffers needed for each HEVC picture */
+#ifdef HAS_HEVC
+
+/* Empirically defined */
+#define MEM_TO_REG_BUF_SIZE 0x2000
+
+/*
+ * Max. no. of slices found in stream db: approx. 2200,
+ * set MAX_SLICES to 2368 to get buffer size page aligned
+ */
+#define MAX_SLICES 2368
+#define SLICE_PARAMS_SIZE 64
+#define SLICE_PARAMS_BUF_SIZE (MAX_SLICES * SLICE_PARAMS_SIZE)
+
+/*
+ * Size of buffer for "above params" structure, sufficient for stream of width 8192
+ * 192 * (8192/64) == 0x6000, see "above_param_size" in TRM
+ */
+#define ABOVE_PARAMS_BUF_SIZE 0x6000
+#endif
+
+enum hwctrl_msgid {
+	HWCTRL_MSGID_BATCH = 0,
+	HWCTRL_MSGID_FRAGMENT = 1,
+	CORE_MSGID_MAX,
+	CORE_MSGID_FORCE32BITS = 0x7FFFFFFFU
+};
+
+struct hwctrl_to_kernel_msg {
+	unsigned int msg_size;
+	unsigned int km_str_id;
+	unsigned int flags;
+	unsigned char *msg_hdr;
+};
+
+struct hwctrl_batch_msgdata {
+	struct vidio_ddbufinfo *batchmsg_bufinfo;
+	struct vidio_ddbufinfo *pvdec_fwctx;
+	unsigned int ctrl_alloc_bytes;
+	unsigned int operating_mode;
+	unsigned int transaction_id;
+	unsigned int tile_cfg;
+	unsigned int genc_id;
+	unsigned int mb_load;
+	unsigned int size_delimited_mode;
+};
+
+struct hwctrl_fragment_msgdata {
+	struct vidio_ddbufinfo *batchmsg_bufinfo;
+	unsigned int ctrl_alloc_offset;
+	unsigned int ctrl_alloc_bytes;
+};
+
+struct hwctrl_msgdata {
+	unsigned int km_str_id;
+	struct hwctrl_batch_msgdata batch_msgdata;
+	struct hwctrl_fragment_msgdata fragment_msgdata;
+};
+
+/*
+ * This structure contains MSVDX Message information.
+ */
+struct hwctrl_msgstatus {
+	unsigned char control_fence_id[VDECFW_MSGID_CONTROL_TYPES];
+	unsigned char decode_fence_id[VDECFW_MSGID_DECODE_TYPES];
+	unsigned char completion_fence_id[VDECFW_MSGID_COMPLETION_TYPES];
+};
+
+/*
+ * This structure contains the HWCTRL core state.
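+ * It bundles the cached core register state with the firmware and host
+ * message fence status.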
+ */ +struct hwctrl_state { + struct vxd_states core_state; + struct hwctrl_msgstatus fwmsg_status; + struct hwctrl_msgstatus hostmsg_status; +}; + +int hwctrl_picture_submit_fragment(void *hndl_hwctx, + struct dec_pict_fragment *pict_fragment, + struct dec_decpict *decpict, + void *vxd_dec_ctx); + +int hwctrl_process_msg(void *hndl_hwct, unsigned int msg_flags, unsigned int *msg, + struct dec_decpict **decpict); + +int hwctrl_getcore_cached_status(void *hndl_hwctx, struct hwctrl_state *state); + +int hwctrl_get_core_status(void *hndl_hwctx, struct hwctrl_state *state); + +int hwctrl_is_on_seq_replay(void *hndl_hwctx); + +int hwctrl_picture_submitbatch(void *hndl_hwctx, struct dec_decpict *decpict, + void *vxd_dec_ctx); + +int hwctrl_getpicpend_pictlist(void *hndl_hwctx, unsigned int transaction_id, + struct dec_decpict **decpict); + +int hwctrl_peekheadpiclist(void *hndl_hwctx, struct dec_decpict **decpict); + +int hwctrl_getdecodedpicture(void *hndl_hwctx, struct dec_decpict **decpict); + +void hwctrl_removefrom_piclist(void *hndl_hwctx, struct dec_decpict *decpict); + +int hwctrl_getregsoffset(void *hndl_hwctx, + struct decoder_regsoffsets *regs_offsets); + +int hwctrl_initialise(void *dec_core, void *comp_int_userdata, + const struct vdecdd_dd_devconfig *dd_devconfig, + struct vxd_coreprops *core_props, void **hndl_hwctx); + +int hwctrl_deinitialise(void *hndl_hwctx); + +#endif /* _HW_CONTROL_H */ From patchwork Wed Aug 18 14:10:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sidraya Jayagond X-Patchwork-Id: 499240 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 00942C4320A for ; Wed, 18 Aug 2021 14:12:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E1331610A7 for ; Wed, 18 Aug 2021 14:12:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238961AbhHRONI (ORCPT ); Wed, 18 Aug 2021 10:13:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46496 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239177AbhHROM7 (ORCPT ); Wed, 18 Aug 2021 10:12:59 -0400 Received: from mail-pf1-x42b.google.com (mail-pf1-x42b.google.com [IPv6:2607:f8b0:4864:20::42b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B6B41C06129D for ; Wed, 18 Aug 2021 07:12:16 -0700 (PDT) Received: by mail-pf1-x42b.google.com with SMTP id j187so2258312pfg.4 for ; Wed, 18 Aug 2021 07:12:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pathpartnertech.com; s=google; h=mime-version:from:to:cc:subject:date:message-id:in-reply-to :references; bh=m++M81VE6OdBHGBRaES4X5rl8RRew/F1aidWQUBA3Dg=; b=XgUwth7g1HxRLL5bt6/ntSyf8x0/qNqatr1NwLtR5RO4qkBAag3GhX7VWYokMNmsLI d4DiI5PTnfwdhDK1mjHttmel4foqT/jj7H/M8rIsqL6pLl70PBX2kKwO0DYeLS4XO8qB ABQGon+dN9G1J+0aB2/Xyp9kG2VBOHPP1zwnYRBnQ8/lR66MoLKvSwv2VxebMttsw1w7 zxQPWKqTtU+4yowHeaC4+Bx+c3jXMVFVzsAOEJnFlX/neEcRHt5ZPKyzCqhicDv0pgHF 
FgHLAcosXUUgDrEy1MJasp53n0YSZQ8v+7PrHUw5aCAmB4JkwX067izvq7gyquD9ieM+
 63mw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
 :in-reply-to:references;
 bh=m++M81VE6OdBHGBRaES4X5rl8RRew/F1aidWQUBA3Dg=;
 b=EgQm/vUqGjkVI7VkIeon5s1VmBAhFiVTmlZTM5KIwJC5VI6/w+wqp8PbSo4hBe5t+t
 3EyOf6zj83pOgg2CnMZpIXl7kDFZRsH5VGtw3BvnfEL3+FfsKYtvwHMq6sMkdwwJREoa
 6NEtrjVtfj/0s4/vqjhPhrasu2XF7kS0xKnpnlAumehwAvHldw/haRSGJL7VUzQUiNc6
 9B6Imo0M7HN5hLbOcaDiqUu1wGpsH753UCkB5gCJH5E0bYhuRDTBet51OeRAflCSKc57
 FGLS2ynk1nlqza66A3TKkHSIih1N3M+4izuFMLR+0vrSe/8PK9QZZri63+sZzEQpGCmj
 DzWw==
MIME-Version: 1.0
X-Gm-Message-State: AOAM531Wv0hCT08rIuaEqXgNhyiJ+/tn3BvVc13iS+XddifHDR/0Zn9B
 214GQE4yxP1QRh/tweqCval/S9OZdA5Lw3bVApCCSBH+qCtVrdFbTuCjdkwGaL44P3oD4xVfRP9
 kdycGEFjjFas4LVaZ
X-Google-Smtp-Source: ABdhPJzVod3IDoBobK2yD1TioSpwBm4BGtTVLyNIdws93umdsJPE5YKQacPp712ygVgHeZqd7UV9IA==
X-Received: by 2002:a63:510a:: with SMTP id f10mr9154852pgb.249.1629295935997;
 Wed, 18 Aug 2021 07:12:15 -0700 (PDT)
Received: from localhost.localdomain ([49.207.214.181]) by smtp.gmail.com with
 ESMTPSA id e8sm8084343pgg.31.2021.08.18.07.12.13 (version=TLS1_3
 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 18 Aug 2021 07:12:15 -0700 (PDT)
From: sidraya.bj@pathpartnertech.com
To: gregkh@linuxfoundation.org, linux-staging@lists.linux.dev,
 linux-kernel@vger.kernel.org
Cc: prashanth.ka@pathpartnertech.com, praneeth@ti.com, mchehab@kernel.org,
 linux-media@vger.kernel.org, praveen.ap@pathpartnertech.com, Sidraya
Subject: [PATCH 07/30] v4l: vxd-dec: Add vxd core module
Date: Wed, 18 Aug 2021 19:40:14 +0530
Message-Id: <20210818141037.19990-8-sidraya.bj@pathpartnertech.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
References: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
Precedence: bulk
List-ID: X-Mailing-List: linux-media@vger.kernel.org

From: Sidraya

This patch prepares the picture commands for the firmware; it includes
the reconstructed and alternate picture commands.

Signed-off-by: Amit Makani
Signed-off-by: Sidraya
---
 MAINTAINERS | 2 +
 drivers/staging/media/vxd/decoder/vxd_int.c | 1137 +++++++++++++++++++
 drivers/staging/media/vxd/decoder/vxd_int.h | 128 +++
 3 files changed, 1267 insertions(+)
 create mode 100644 drivers/staging/media/vxd/decoder/vxd_int.c
 create mode 100644 drivers/staging/media/vxd/decoder/vxd_int.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 2327ea12caa6..7b21ebfc61d4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19548,6 +19548,8 @@ F: drivers/staging/media/vxd/decoder/img_dec_common.h
 F: drivers/staging/media/vxd/decoder/vxd_core.c
 F: drivers/staging/media/vxd/decoder/vxd_dec.c
 F: drivers/staging/media/vxd/decoder/vxd_dec.h
+F: drivers/staging/media/vxd/decoder/vxd_int.c
+F: drivers/staging/media/vxd/decoder/vxd_int.h
 F: drivers/staging/media/vxd/decoder/vxd_pvdec.c
 F: drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h
 F: drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h
diff --git a/drivers/staging/media/vxd/decoder/vxd_int.c b/drivers/staging/media/vxd/decoder/vxd_int.c
new file mode 100644
index 000000000000..c75aef6deed1
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vxd_int.c
@@ -0,0 +1,1137 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * VXD DEC Common low level core interface component
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Amit Makani
+ *
+ * Re-written for upstreaming
+ *	Sidraya Jayagond
+ *	Prashanth Kumar Amai
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include "bspp.h"
+#include "fw_interface.h"
+#include "h264fw_data.h"
+#include "img_errors.h"
+#include "img_dec_common.h"
+#include "img_pvdec_core_regs.h"
+#include "img_pvdec_pixel_regs.h"
+#include "img_pvdec_test_regs.h"
+#include "img_vdec_fw_msg.h"
+#include "img_video_bus4_mmu_regs.h"
+#include "img_msvdx_core_regs.h"
+#include "img_msvdx_cmds.h"
+#include "reg_io2.h"
+#include "scaler_setup.h"
+#include "vdecdd_defs.h"
+#include "vdecdd_utils.h"
+#include "vdecfw_shared.h"
+#include "vdec_defs.h"
+#include "vxd_ext.h"
+#include "vxd_int.h"
+#include "vxd_props.h"
+
+#define MSVDX_CACHE_REF_OFFSET_V100 (72L)
+#define MSVDX_CACHE_ROW_OFFSET_V100 (4L)
+
+#define MSVDX_CACHE_REF_OFFSET_V550 (144L)
+#define MSVDX_CACHE_ROW_OFFSET_V550 (8L)
+
+#define GET_BITS(v, lb, n) (((v) >> (lb)) & ((1 << (n)) - 1))
+#define IS_PVDEC_PIPELINE(std) ((std) == VDEC_STD_HEVC ? 1 : 0)
+
+static int amsvdx_codecmode[VDEC_STD_MAX] = {
+	/* Invalid */
+	-1,
+	/* MPEG2 */
+	3,
+	/* MPEG4 */
+	4,
+	/* H263 */
+	4,
+	/* H264 */
+	1,
+	/* VC1 */
+	2,
+	/* AVS */
+	5,
+	/* RealVideo (8) */
+	8,
+	/* JPEG */
+	0,
+	/* On2 VP6 */
+	10,
+	/* On2 VP8 */
+	11,
+	/* Invalid */
+#ifdef HAS_VP9
+	/* On2 VP9 */
+	13,
+#endif
+	/* Sorenson */
+	4,
+	/* HEVC */
+	12,
+};
+
+struct msvdx_scaler_coeff_cmds {
+	unsigned int acmd_horizluma_coeff[VDECFW_NUM_SCALE_COEFFS];
+	unsigned int acmd_vertluma_coeff[VDECFW_NUM_SCALE_COEFFS];
+	unsigned int acmd_horizchroma_coeff[VDECFW_NUM_SCALE_COEFFS];
+	unsigned int acmd_vertchroma_coeff[VDECFW_NUM_SCALE_COEFFS];
+};
+
+static struct vxd_vidstd_props astd_props[] = {
+	{ VDEC_STD_MPEG2, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0, 8, 8,
+	  PIXEL_FORMAT_420 },
+	{ VDEC_STD_MPEG4, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0, 8, 8,
+	  PIXEL_FORMAT_420 },
+	{ VDEC_STD_H263, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0, 8, 8,
+	  PIXEL_FORMAT_420 },
+	{ VDEC_STD_H264, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0x10000, 8,
+	  8, PIXEL_FORMAT_420 },
+	{ VDEC_STD_VC1, CORE_REVISION(7, 0, 0), 80, 16, 4096, 4096, 0, 8, 8,
+	  PIXEL_FORMAT_420 },
+	{ VDEC_STD_AVS, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0, 8, 8,
+	  PIXEL_FORMAT_420 },
+	{ VDEC_STD_REAL, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0, 8, 8,
+	  PIXEL_FORMAT_420 },
+	{ VDEC_STD_JPEG, CORE_REVISION(7, 0, 0), 64, 16, 32768, 32768, 0, 8, 8,
+	  PIXEL_FORMAT_444 },
+	{ VDEC_STD_VP6, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0, 8, 8,
+	  PIXEL_FORMAT_420 },
+	{ VDEC_STD_VP8, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0, 8, 8,
+	  PIXEL_FORMAT_420 },
+	{ VDEC_STD_SORENSON, CORE_REVISION(7, 0, 0), 64, 16, 4096, 4096, 0, 8,
+	  8, PIXEL_FORMAT_420 },
+	{ VDEC_STD_HEVC, CORE_REVISION(7, 0, 0), 64, 16, 8192, 8192, 0, 8, 8,
+	  PIXEL_FORMAT_420 },
+};
+
+enum vdec_msvdx_async_mode {
+	VDEC_MSVDX_ASYNC_NORMAL,
+	VDEC_MSVDX_ASYNC_VDMC,
+	VDEC_MSVDX_ASYNC_VDEB,
+	VDEC_MSVDX_ASYNC_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/* MSVDX row strides for video buffers. */
+static const unsigned int amsvdx_64byte_row_stride[] = {
+	384, 768, 1280, 1920, 512, 1024, 2048, 4096
+};
+
+/* MSVDX row strides for jpeg buffers. */
+static const unsigned int amsvdx_jpeg_row_stride[] = {
+	256, 384, 512, 768, 1024, 1536, 2048, 3072, 4096, 6144, 8192, 12288, 16384, 24576, 32768
+};
+
+/* VXD Core major revision.
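+ * Cached when the core properties are read out and then consumed by the
+ * revision-check macros (as are the minor and maintenance revisions below).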
*/ +static unsigned int maj_rev; +/* VXD Core minor revision. */ +static unsigned int min_rev; +/* VXD Core maintenance revision. */ +static unsigned int maint_rev; + +static int get_stride_code(enum vdec_vid_std vidstd, unsigned int row_stride) +{ + unsigned int i; + + if (vidstd == VDEC_STD_JPEG) { + for (i = 0; i < (sizeof(amsvdx_jpeg_row_stride) / + sizeof(amsvdx_jpeg_row_stride[0])); i++) { + if (amsvdx_jpeg_row_stride[i] == row_stride) + return i; + } + } else { + for (i = 0; i < (sizeof(amsvdx_64byte_row_stride) / + sizeof(amsvdx_64byte_row_stride[0])); i++) { + if (amsvdx_64byte_row_stride[i] == row_stride) + return i; + } + } + + return -1; +} + +/* Obtains the hardware defined video profile. */ +static unsigned int vxd_getprofile(enum vdec_vid_std vidstd, unsigned int std_profile) +{ + unsigned int profile = 0; + + switch (vidstd) { + case VDEC_STD_H264: + switch (std_profile) { + case H264_PROFILE_BASELINE: + profile = 0; + break; + + /* + * Extended may be attempted as Baseline or + * Main depending on the constraint_set_flags + */ + case H264_PROFILE_EXTENDED: + case H264_PROFILE_MAIN: + profile = 1; + break; + + case H264_PROFILE_HIGH: + case H264_PROFILE_HIGH444: + case H264_PROFILE_HIGH422: + case H264_PROFILE_HIGH10: + case H264_PROFILE_CAVLC444: + case H264_PROFILE_MVC_HIGH: + case H264_PROFILE_MVC_STEREO: + profile = 2; + break; + default: + profile = 2; + break; + } + break; + + default: + profile = 0; + break; + } + + return profile; +} + +static int vxd_getcoreproperties(struct vxd_coreprops *coreprops, + unsigned int corerev, + unsigned int pvdec_coreid, unsigned int mmu_config0, + unsigned int mmu_config1, unsigned int *pixel_pipecfg, + unsigned int *pixel_misccfg, unsigned int max_framecfg) +{ + unsigned int group_id; + unsigned int core_id; + unsigned int core_config; + unsigned int extended_address_range; + unsigned char group_size = 0; + unsigned char pipe_minus1 = 0; + unsigned int max_h264_hw_chromaformat = 0; + unsigned int max_hevc_hw_chromaformat = 0; + unsigned int max_bitdepth_luma = 0; + unsigned int i; + + struct pvdec_core_rev core_rev; + + if (!coreprops || !pixel_pipecfg || !pixel_misccfg) + return IMG_ERROR_INVALID_PARAMETERS; + + /* PVDEC Core Revision Information */ + core_rev.maj_rev = REGIO_READ_FIELD(corerev, PVDEC_CORE, CR_PVDEC_CORE_REV, + CR_PVDEC_MAJOR_REV); + core_rev.min_rev = REGIO_READ_FIELD(corerev, PVDEC_CORE, CR_PVDEC_CORE_REV, + CR_PVDEC_MINOR_REV); + core_rev.maint_rev = REGIO_READ_FIELD(corerev, PVDEC_CORE, CR_PVDEC_CORE_REV, + CR_PVDEC_MAINT_REV); + + /* core id */ + group_id = REGIO_READ_FIELD(pvdec_coreid, PVDEC_CORE, CR_PVDEC_CORE_ID, CR_GROUP_ID); + core_id = REGIO_READ_FIELD(pvdec_coreid, PVDEC_CORE, CR_PVDEC_CORE_ID, CR_CORE_ID); + + /* Ensure that the core is IMG Video Decoder (PVDEC). */ + if (group_id != 3 || core_id != 3) + return IMG_ERROR_DEVICE_NOT_FOUND; + + core_config = REGIO_READ_FIELD(pvdec_coreid, PVDEC_CORE, + CR_PVDEC_CORE_ID, CR_PVDEC_CORE_CONFIG); + + memset(coreprops, 0, sizeof(*(coreprops))); + + /* Construct core version name. */ + snprintf(coreprops->aversion, VER_STR_LEN, "%d.%d.%d", + core_rev.maj_rev, core_rev.min_rev, core_rev.maint_rev); + + coreprops->mmu_support_stride_per_context = + REGIO_READ_FIELD(mmu_config1, IMG_VIDEO_BUS4_MMU, + MMU_CONFIG1, + SUPPORT_STRIDE_PER_CONTEXT) == 1 ? 1 : 0; + + coreprops->mmu_support_secure = REGIO_READ_FIELD(mmu_config1, IMG_VIDEO_BUS4_MMU, + MMU_CONFIG1, SUPPORT_SECURE) == 1 ? 
1 : 0; + + extended_address_range = REGIO_READ_FIELD(mmu_config0, IMG_VIDEO_BUS4_MMU, + MMU_CONFIG0, EXTENDED_ADDR_RANGE); + + switch (extended_address_range) { + case 0: + coreprops->mmu_type = MMU_TYPE_32BIT; + break; + case 4: + coreprops->mmu_type = MMU_TYPE_36BIT; + break; + case 8: + coreprops->mmu_type = MMU_TYPE_40BIT; + break; + default: + return IMG_ERROR_NOT_SUPPORTED; + } + + group_size += REGIO_READ_FIELD(mmu_config0, IMG_VIDEO_BUS4_MMU, + MMU_CONFIG0, GROUP_OVERRIDE_SIZE); + + coreprops->num_entropy_pipes = core_config & 0xF; + coreprops->num_pixel_pipes = core_config >> 4 & 0xF; +#ifdef DEBUG_DECODER_DRIVER + pr_info("PVDEC revision %08x detected, id %08x.\n", corerev, core_id); + pr_info("Found %d entropy pipe(s), %d pixel pipe(s), %d group size", + coreprops->num_entropy_pipes, coreprops->num_pixel_pipes, + group_size); +#endif + + /* Set global rev info variables used by macros */ + maj_rev = core_rev.maj_rev; + min_rev = core_rev.min_rev; + maint_rev = core_rev.maint_rev; + + /* Default settings */ + for (i = 0; i < ARRAY_SIZE(astd_props); i++) { + struct vxd_vidstd_props *pvidstd_props = + &coreprops->vidstd_props[astd_props[i].vidstd]; + /* + * Update video standard properties if the core is beyond + * specified version and the properties are for newer cores + * than the previous. + */ + if (FROM_REV(MAJOR_REVISION((int)astd_props[i].core_rev), + MINOR_REVISION((int)astd_props[i].core_rev), + MAINT_REVISION((int)astd_props[i].core_rev), int) && + astd_props[i].core_rev >= pvidstd_props->core_rev) { + *pvidstd_props = astd_props[i]; + + if (pvidstd_props->vidstd != VDEC_STD_JPEG && + (FROM_REV(8, 0, 0, int)) && (pvidstd_props->vidstd == + VDEC_STD_HEVC ? 1 : 0)) { + /* + * override default values with values + * specified in HW (register does not + * exist in previous cores) + */ + pvidstd_props->max_width = + 2 << REGIO_READ_FIELD(max_framecfg, + PVDEC_PIXEL, + CR_MAX_FRAME_CONFIG, + CR_PVDEC_HOR_MSB); + + pvidstd_props->max_height = + 2 << REGIO_READ_FIELD(max_framecfg, + PVDEC_PIXEL, + CR_MAX_FRAME_CONFIG, + CR_PVDEC_VER_MSB); + } else if (pvidstd_props->vidstd != VDEC_STD_JPEG && + (FROM_REV(8, 0, 0, int))) { + pvidstd_props->max_width = + 2 << REGIO_READ_FIELD(max_framecfg, + PVDEC_PIXEL, + CR_MAX_FRAME_CONFIG, + CR_MSVDX_HOR_MSB); + + pvidstd_props->max_height = + 2 << REGIO_READ_FIELD(max_framecfg, + PVDEC_PIXEL, + CR_MAX_FRAME_CONFIG, + CR_MSVDX_VER_MSB); + } + } + } + + /* Populate the core properties. */ + if (GET_BITS(core_config, 11, 1)) + coreprops->hd_support = 1; + + for (pipe_minus1 = 0; pipe_minus1 < coreprops->num_pixel_pipes; + pipe_minus1++) { + unsigned int current_bitdepth = + GET_BITS(pixel_misccfg[pipe_minus1], 4, 3) + 8; + unsigned int current_h264_hw_chromaformat = + GET_BITS(pixel_misccfg[pipe_minus1], 0, 2); + unsigned int current_hevc_hw_chromaformat = + GET_BITS(pixel_misccfg[pipe_minus1], 2, 2); +#ifdef DEBUG_DECODER_DRIVER + pr_info("cur_bitdepth: %d cur_h264_hw_chromaformat: %d", + current_bitdepth, current_h264_hw_chromaformat); + pr_info("cur_hevc_hw_chromaformat: %d pipe_minus1: %d\n", + current_hevc_hw_chromaformat, pipe_minus1); +#endif + + if (GET_BITS(pixel_misccfg[pipe_minus1], 8, 1)) + coreprops->rotation_support[pipe_minus1] = 1; + + if (GET_BITS(pixel_misccfg[pipe_minus1], 9, 1)) + coreprops->scaling_support[pipe_minus1] = 1; + + coreprops->num_streams[pipe_minus1] = + GET_BITS(pixel_misccfg[pipe_minus1], 12, 2) + 1; + + /* Video standards. */ + coreprops->mpeg2[pipe_minus1] = + GET_BITS(pixel_pipecfg[pipe_minus1], 0, 1) ? 
1 : 0; + coreprops->mpeg4[pipe_minus1] = + GET_BITS(pixel_pipecfg[pipe_minus1], 1, 1) ? 1 : 0; + coreprops->h264[pipe_minus1] = + GET_BITS(pixel_pipecfg[pipe_minus1], 2, 1) ? 1 : 0; + coreprops->vc1[pipe_minus1] = + GET_BITS(pixel_pipecfg[pipe_minus1], 3, 1) ? 1 : 0; + coreprops->jpeg[pipe_minus1] = + GET_BITS(pixel_pipecfg[pipe_minus1], 5, 1) ? 1 : 0; + coreprops->avs[pipe_minus1] = + GET_BITS(pixel_pipecfg[pipe_minus1], 7, 1) ? 1 : 0; + coreprops->real[pipe_minus1] = + GET_BITS(pixel_pipecfg[pipe_minus1], 8, 1) ? 1 : 0; + coreprops->vp6[pipe_minus1] = + GET_BITS(pixel_pipecfg[pipe_minus1], 9, 1) ? 1 : 0; + coreprops->vp8[pipe_minus1] = + GET_BITS(pixel_pipecfg[pipe_minus1], 10, 1) ? 1 : 0; + coreprops->hevc[pipe_minus1] = + GET_BITS(pixel_pipecfg[pipe_minus1], 22, 1) ? 1 : 0; + + max_bitdepth_luma = (max_bitdepth_luma > current_bitdepth ? + max_bitdepth_luma : current_bitdepth); + max_h264_hw_chromaformat = (max_h264_hw_chromaformat > + current_h264_hw_chromaformat ? max_h264_hw_chromaformat + : current_h264_hw_chromaformat); + max_hevc_hw_chromaformat = (max_hevc_hw_chromaformat > + current_hevc_hw_chromaformat ? max_hevc_hw_chromaformat + : current_hevc_hw_chromaformat); + } + + /* Override default bit-depth with value signalled explicitly by core. */ + coreprops->vidstd_props[0].max_luma_bitdepth = max_bitdepth_luma; + coreprops->vidstd_props[0].max_chroma_bitdepth = + coreprops->vidstd_props[0].max_luma_bitdepth; + + for (i = 1; i < VDEC_STD_MAX; i++) { + coreprops->vidstd_props[i].max_luma_bitdepth = + coreprops->vidstd_props[0].max_luma_bitdepth; + coreprops->vidstd_props[i].max_chroma_bitdepth = + coreprops->vidstd_props[0].max_chroma_bitdepth; + } + + switch (max_h264_hw_chromaformat) { + case 1: + coreprops->vidstd_props[VDEC_STD_H264].max_chroma_format = + PIXEL_FORMAT_420; + break; + + case 2: + coreprops->vidstd_props[VDEC_STD_H264].max_chroma_format = + PIXEL_FORMAT_422; + break; + + case 3: + coreprops->vidstd_props[VDEC_STD_H264].max_chroma_format = + PIXEL_FORMAT_444; + break; + + default: + break; + } + + switch (max_hevc_hw_chromaformat) { + case 1: + coreprops->vidstd_props[VDEC_STD_HEVC].max_chroma_format = + PIXEL_FORMAT_420; + break; + + case 2: + coreprops->vidstd_props[VDEC_STD_HEVC].max_chroma_format = + PIXEL_FORMAT_422; + break; + + case 3: + coreprops->vidstd_props[VDEC_STD_HEVC].max_chroma_format = + PIXEL_FORMAT_444; + break; + + default: + break; + } + + return 0; +} + +static unsigned char vxd_is_supported_byatleast_onepipe(const unsigned char *features, + unsigned int num_pipes) +{ + unsigned int i; + + VDEC_ASSERT(features); + VDEC_ASSERT(num_pipes <= VDEC_MAX_PIXEL_PIPES); + + for (i = 0; i < num_pipes; i++) { + if (features[i]) + return 1; + } + + return 0; +} + +void vxd_set_reconpictcmds(const struct vdecdd_str_unit *str_unit, + const struct vdec_str_configdata *str_configdata, + const struct vdec_str_opconfig *output_config, + const struct vxd_coreprops *coreprops, + const struct vxd_buffers *buffers, + unsigned int *pict_cmds) +{ + struct pixel_pixinfo *pixel_info; + unsigned int row_stride_code; + unsigned char benable_auxline_buf = 1; + + unsigned int coded_height; + unsigned int coded_width; + unsigned int disp_height; + unsigned int disp_width; + unsigned int profile; + unsigned char plane; + unsigned int y_stride; + unsigned int uv_stride; + unsigned int v_stride; + unsigned int cache_ref_offset; + unsigned int cache_row_offset; + + if (str_configdata->vid_std == VDEC_STD_JPEG) { + disp_height = 0; + disp_width = 0; + coded_height = 0; + 
coded_width = 0;
+	} else {
+		coded_height = ALIGN(str_unit->pict_hdr_info->coded_frame_size.height,
+				     (str_unit->pict_hdr_info->field) ?
+				     2 * VDEC_MB_DIMENSION : VDEC_MB_DIMENSION);
+		/* Hardware field is coded size - 1 */
+		coded_height -= 1;
+
+		coded_width = ALIGN(str_unit->pict_hdr_info->coded_frame_size.width,
+				    VDEC_MB_DIMENSION);
+		/* Hardware field is coded size - 1 */
+		coded_width -= 1;
+
+		disp_height = str_unit->pict_hdr_info->disp_info.enc_disp_region.height +
+			str_unit->pict_hdr_info->disp_info.enc_disp_region.left_offset - 1;
+		disp_width = str_unit->pict_hdr_info->disp_info.enc_disp_region.width +
+			str_unit->pict_hdr_info->disp_info.enc_disp_region.top_offset - 1;
+	}
+	/*
+	 * Display picture size (DISPLAY_PICTURE)
+	 * The display to be written is not the actual video size to be
+	 * displayed but a number that has to differ from the coded pixel size
+	 * by less than one macroblock (coded_size-display_size <= 0x0F).
+	 * Because H264 can have a different display size, we need to check
+	 * and write the coded_size again in the display_size register if this
+	 * condition is not fulfilled.
+	 */
+	if (str_configdata->vid_std != VDEC_STD_VC1 && ((coded_height - disp_height) > 0x0F)) {
+		REGIO_WRITE_FIELD_LITE(pict_cmds[VDECFW_CMD_DISPLAY_PICTURE],
+				       MSVDX_CMDS, DISPLAY_PICTURE_SIZE,
+				       DISPLAY_PICTURE_HEIGHT,
+				       coded_height, unsigned int);
+	} else {
+		REGIO_WRITE_FIELD_LITE(pict_cmds[VDECFW_CMD_DISPLAY_PICTURE],
+				       MSVDX_CMDS, DISPLAY_PICTURE_SIZE,
+				       DISPLAY_PICTURE_HEIGHT,
+				       disp_height, unsigned int);
+	}
+
+	if (((coded_width - disp_width) > 0x0F)) {
+		REGIO_WRITE_FIELD_LITE(pict_cmds[VDECFW_CMD_DISPLAY_PICTURE],
+				       MSVDX_CMDS, DISPLAY_PICTURE_SIZE,
+				       DISPLAY_PICTURE_WIDTH,
+				       coded_width, unsigned int);
+	} else {
+		REGIO_WRITE_FIELD_LITE(pict_cmds[VDECFW_CMD_DISPLAY_PICTURE],
+				       MSVDX_CMDS, DISPLAY_PICTURE_SIZE,
+				       DISPLAY_PICTURE_WIDTH,
+				       disp_width, unsigned int);
+	}
+
+	REGIO_WRITE_FIELD_LITE(pict_cmds[VDECFW_CMD_CODED_PICTURE],
+			       MSVDX_CMDS, CODED_PICTURE_SIZE,
+			       CODED_PICTURE_HEIGHT,
+			       coded_height, unsigned int);
+	REGIO_WRITE_FIELD_LITE(pict_cmds[VDECFW_CMD_CODED_PICTURE],
+			       MSVDX_CMDS, CODED_PICTURE_SIZE,
+			       CODED_PICTURE_WIDTH,
+			       coded_width, unsigned int);
+
+	/*
+	 * For standards where dpb_diff != 1 and chroma format != 420
+	 * cache_ref_offset has to be calculated in the F/W.
+	 */
+	if (str_configdata->vid_std != VDEC_STD_HEVC && str_configdata->vid_std != VDEC_STD_H264) {
+		unsigned int log2_size, cache_size, luma_size;
+		unsigned char is_hevc_supported, is_hevc444_supported = 0;
+
+		is_hevc_supported =
+			vxd_is_supported_byatleast_onepipe(coreprops->hevc,
+							   coreprops->num_pixel_pipes);
+
+		if (is_hevc_supported) {
+			is_hevc444_supported =
+				coreprops->vidstd_props[VDEC_STD_HEVC].max_chroma_format ==
+				PIXEL_FORMAT_444 ? 1 : 0;
+		}
+
+		log2_size = 9 + (is_hevc_supported ? 1 : 0) + (is_hevc444_supported ?
1 : 0); + cache_size = 3 << log2_size; + luma_size = (cache_size * 2) / 3; + cache_ref_offset = (luma_size * 15) / 32; + cache_ref_offset = (cache_ref_offset + 7) & (~7); + cache_row_offset = 0x0C; + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_MC_CACHE_CONFIGURATION], + MSVDX_CMDS, MC_CACHE_CONFIGURATION, + CONFIG_REF_CHROMA_ADJUST, 1, + unsigned int, unsigned int); + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_MC_CACHE_CONFIGURATION], + MSVDX_CMDS, MC_CACHE_CONFIGURATION, + CONFIG_REF_OFFSET, cache_ref_offset, + unsigned int, unsigned int); + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_MC_CACHE_CONFIGURATION], + MSVDX_CMDS, MC_CACHE_CONFIGURATION, + CONFIG_ROW_OFFSET, cache_row_offset, + unsigned int, unsigned int); + } + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE], + MSVDX_CMDS, OPERATING_MODE, CODEC_MODE, + amsvdx_codecmode[str_configdata->vid_std], + unsigned int, unsigned int); + + profile = str_unit->seq_hdr_info->com_sequ_hdr_info.codec_profile; + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE], + MSVDX_CMDS, OPERATING_MODE, CODEC_PROFILE, + vxd_getprofile(str_configdata->vid_std, profile), + unsigned int, unsigned int); + + plane = str_unit->seq_hdr_info->com_sequ_hdr_info.separate_chroma_planes; + pixel_info = &str_unit->seq_hdr_info->com_sequ_hdr_info.pixel_info; + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE], + MSVDX_CMDS, OPERATING_MODE, CHROMA_FORMAT, plane ? + 0 : pixel_info->chroma_fmt, unsigned int, int); + + if (str_configdata->vid_std != VDEC_STD_JPEG) { + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_EXT_OP_MODE], + MSVDX_CMDS, EXT_OP_MODE, CHROMA_FORMAT_IDC, plane ? + 0 : pixel_get_hw_chroma_format_idc + (pixel_info->chroma_fmt_idc), + unsigned int, int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_EXT_OP_MODE], + MSVDX_CMDS, EXT_OP_MODE, MEMORY_PACKING, + output_config->pixel_info.mem_pkg == + PIXEL_BIT10_MP ? 1 : 0, unsigned int, int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_EXT_OP_MODE], + MSVDX_CMDS, EXT_OP_MODE, BIT_DEPTH_LUMA_MINUS8, + pixel_info->bitdepth_y - 8, + unsigned int, unsigned int); + + if (pixel_info->chroma_fmt_idc == PIXEL_FORMAT_MONO) { + /* + * For monochrome streams use the same bit depth for + * chroma and luma. + */ + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_EXT_OP_MODE], + MSVDX_CMDS, EXT_OP_MODE, + BIT_DEPTH_CHROMA_MINUS8, + pixel_info->bitdepth_y - 8, + unsigned int, unsigned int); + } else { + /* + * For normal streams use the appropriate bit depth for chroma. + */ + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_EXT_OP_MODE], MSVDX_CMDS, + EXT_OP_MODE, BIT_DEPTH_CHROMA_MINUS8, + pixel_info->bitdepth_c - 8, + unsigned int, unsigned int); + } + } else { + pict_cmds[VDECFW_CMD_EXT_OP_MODE] = 0; + } + + if (str_configdata->vid_std != VDEC_STD_JPEG) { + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE], MSVDX_CMDS, + OPERATING_MODE, CHROMA_INTERLEAVED, + PIXEL_GET_HW_CHROMA_INTERLEAVED + (output_config->pixel_info.chroma_interleave), + unsigned int, int); + } + + if (str_configdata->vid_std == VDEC_STD_JPEG) { + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE], + MSVDX_CMDS, OPERATING_MODE, ASYNC_MODE, + VDEC_MSVDX_ASYNC_VDMC, + unsigned int, unsigned int); + } + + if (str_configdata->vid_std == VDEC_STD_H264) { + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE], MSVDX_CMDS, + OPERATING_MODE, ASYNC_MODE, + str_unit->pict_hdr_info->discontinuous_mbs ? 
+ VDEC_MSVDX_ASYNC_VDMC : VDEC_MSVDX_ASYNC_NORMAL, + unsigned int, int); + } + + y_stride = buffers->recon_pict->rend_info.plane_info[VDEC_PLANE_VIDEO_Y].stride; + uv_stride = buffers->recon_pict->rend_info.plane_info[VDEC_PLANE_VIDEO_UV].stride; + v_stride = buffers->recon_pict->rend_info.plane_info[VDEC_PLANE_VIDEO_V].stride; + + if (((y_stride % (VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT)) == 0) && + ((uv_stride % (VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT)) == 0) && + ((v_stride % (VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT)) == 0)) { + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE], + MSVDX_CMDS, OPERATING_MODE, + USE_EXT_ROW_STRIDE, 1, unsigned int, int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_EXTENDED_ROW_STRIDE], + MSVDX_CMDS, EXTENDED_ROW_STRIDE, + EXT_ROW_STRIDE, y_stride >> 6, unsigned int, unsigned int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_CHROMA_ROW_STRIDE], + MSVDX_CMDS, CHROMA_ROW_STRIDE, + CHROMA_ROW_STRIDE, uv_stride >> 6, unsigned int, unsigned int); + } else { + row_stride_code = get_stride_code(str_configdata->vid_std, y_stride); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE], + MSVDX_CMDS, OPERATING_MODE, ROW_STRIDE, + row_stride_code & 0x7, unsigned int, unsigned int); + + if (str_configdata->vid_std == VDEC_STD_JPEG) { + /* + * Use the unused chroma interleaved flag + * to hold MSB of row stride code + */ + IMG_ASSERT(row_stride_code < 16); + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_OPERATING_MODE], + MSVDX_CMDS, OPERATING_MODE, + CHROMA_INTERLEAVED, + row_stride_code >> 3, unsigned int, unsigned int); + } else { + IMG_ASSERT(row_stride_code < 8); + } + } + pict_cmds[VDECFW_CMD_LUMA_RECONSTRUCTED_PICTURE_BASE_ADDRESS] = + (unsigned int)GET_HOST_ADDR(&buffers->recon_pict->pict_buf->ddbuf_info) + + buffers->recon_pict->rend_info.plane_info[0].offset; + + pict_cmds[VDECFW_CMD_CHROMA_RECONSTRUCTED_PICTURE_BASE_ADDRESS] = + (unsigned int)GET_HOST_ADDR(&buffers->recon_pict->pict_buf->ddbuf_info) + + buffers->recon_pict->rend_info.plane_info[1].offset; + + pict_cmds[VDECFW_CMD_CHROMA2_RECONSTRUCTED_PICTURE_BASE_ADDRESS] = + (unsigned int)GET_HOST_ADDR(&buffers->recon_pict->pict_buf->ddbuf_info) + + buffers->recon_pict->rend_info.plane_info[2].offset; + + pict_cmds[VDECFW_CMD_LUMA_ERROR_PICTURE_BASE_ADDRESS] = 0; + pict_cmds[VDECFW_CMD_CHROMA_ERROR_PICTURE_BASE_ADDRESS] = 0; + +#ifdef ERROR_CONCEALMENT + /* update error concealment frame info if available */ + if (buffers->err_pict_bufinfo) { + pict_cmds[VDECFW_CMD_LUMA_ERROR_PICTURE_BASE_ADDRESS] = + (unsigned int)GET_HOST_ADDR(buffers->err_pict_bufinfo) + + buffers->recon_pict->rend_info.plane_info[0].offset; + + pict_cmds[VDECFW_CMD_CHROMA_ERROR_PICTURE_BASE_ADDRESS] = + (unsigned int)GET_HOST_ADDR(buffers->err_pict_bufinfo) + + buffers->recon_pict->rend_info.plane_info[1].offset; + } +#endif + + pict_cmds[VDECFW_CMD_INTRA_BUFFER_BASE_ADDRESS] = + (unsigned int)GET_HOST_ADDR(buffers->intra_bufinfo); + pict_cmds[VDECFW_CMD_INTRA_BUFFER_PLANE_SIZE] = + buffers->intra_bufsize_per_pipe / 3; + pict_cmds[VDECFW_CMD_INTRA_BUFFER_SIZE_PER_PIPE] = + buffers->intra_bufsize_per_pipe; + pict_cmds[VDECFW_CMD_AUX_LINE_BUFFER_BASE_ADDRESS] = + (unsigned int)GET_HOST_ADDR(buffers->auxline_bufinfo); + pict_cmds[VDECFW_CMD_AUX_LINE_BUFFER_SIZE_PER_PIPE] = + buffers->auxline_bufsize_per_pipe; + + /* + * for pvdec we need to set this registers even if we don't + * use alternative output + */ + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, + 
ALT_BIT_DEPTH_CHROMA_MINUS8, + output_config->pixel_info.bitdepth_c - 8, unsigned int, unsigned int); + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, + ALT_BIT_DEPTH_LUMA_MINUS8, + output_config->pixel_info.bitdepth_y - 8, unsigned int, unsigned int); + + /* + * this is causing corruption in RV40 and VC1 streams with + * scaling/rotation enabled on Coral, so setting to 0 + */ + benable_auxline_buf = benable_auxline_buf && + (str_configdata->vid_std != VDEC_STD_REAL) && + (str_configdata->vid_std != VDEC_STD_VC1); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_PICTURE_ROTATION], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_PICTURE_ROTATION, + USE_AUX_LINE_BUF, benable_auxline_buf ? 1 : 0, unsigned int, int); +} + +void vxd_set_altpictcmds(const struct vdecdd_str_unit *str_unit, + const struct vdec_str_configdata *str_configdata, + const struct vdec_str_opconfig *output_config, + const struct vxd_coreprops *coreprops, + const struct vxd_buffers *buffers, + unsigned int *pict_cmds) +{ + unsigned int row_stride_code; + unsigned int y_stride; + unsigned int uv_stride; + unsigned int v_stride; + + y_stride = buffers->alt_pict->rend_info.plane_info[VDEC_PLANE_VIDEO_Y].stride; + uv_stride = buffers->alt_pict->rend_info.plane_info[VDEC_PLANE_VIDEO_UV].stride; + v_stride = buffers->alt_pict->rend_info.plane_info[VDEC_PLANE_VIDEO_V].stride; + + if (((y_stride % (VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT)) == 0) && + ((uv_stride % (VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT)) == 0) && + ((v_stride % (VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT)) == 0)) { + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_PICTURE_ROTATION], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_PICTURE_ROTATION, + USE_EXT_ROT_ROW_STRIDE, 1, unsigned int, int); + + /* 64-byte (min) aligned luma stride value. */ + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_PICTURE_ROTATION], + MSVDX_CMDS, + ALTERNATIVE_OUTPUT_PICTURE_ROTATION, + EXT_ROT_ROW_STRIDE, y_stride >> 6, + unsigned int, unsigned int); + + /* 64-byte (min) aligned chroma stride value. */ + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_CHROMA_ROW_STRIDE], + MSVDX_CMDS, CHROMA_ROW_STRIDE, + ALT_CHROMA_ROW_STRIDE, uv_stride >> 6, + unsigned int, unsigned int); + } else { + /* + * Obtain the code for buffer stride + * (must be less than 8, i.e. not JPEG strides) + */ + row_stride_code = + get_stride_code(str_configdata->vid_std, y_stride); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_PICTURE_ROTATION], + MSVDX_CMDS, + ALTERNATIVE_OUTPUT_PICTURE_ROTATION, + ROTATION_ROW_STRIDE, row_stride_code & 0x7, + unsigned int, unsigned int); + } + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_PICTURE_ROTATION], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_PICTURE_ROTATION, + SCALE_INPUT_SIZE_SEL, + ((output_config->pixel_info.chroma_fmt_idc != + str_unit->seq_hdr_info->com_sequ_hdr_info.pixel_info.chroma_fmt_idc)) ? + 1 : 0, unsigned int, int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_PICTURE_ROTATION], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_PICTURE_ROTATION, + PACKED_422_OUTPUT, + (output_config->pixel_info.chroma_fmt_idc == + PIXEL_FORMAT_422 && + output_config->pixel_info.num_planes == 1) ? 1 : 0, + unsigned int, int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, + ALT_OUTPUT_FORMAT, + str_unit->seq_hdr_info->com_sequ_hdr_info.separate_chroma_planes ? 
+ 0 : pixel_get_hw_chroma_format_idc + (output_config->pixel_info.chroma_fmt_idc), + unsigned int, int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, + ALT_BIT_DEPTH_CHROMA_MINUS8, + output_config->pixel_info.bitdepth_c - 8, + unsigned int, unsigned int); + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, + ALT_BIT_DEPTH_LUMA_MINUS8, + output_config->pixel_info.bitdepth_y - 8, + unsigned int, unsigned int); + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, + ALT_MEMORY_PACKING, + (output_config->pixel_info.mem_pkg == + PIXEL_BIT10_MP) ? 1 : 0, unsigned int, int); + + pict_cmds[VDECFW_CMD_LUMA_ALTERNATIVE_PICTURE_BASE_ADDRESS] = + (unsigned int)GET_HOST_ADDR(&buffers->alt_pict->pict_buf->ddbuf_info) + + buffers->alt_pict->rend_info.plane_info[0].offset; + + pict_cmds[VDECFW_CMD_CHROMA_ALTERNATIVE_PICTURE_BASE_ADDRESS] = + (unsigned int)GET_HOST_ADDR(&buffers->alt_pict->pict_buf->ddbuf_info) + + buffers->alt_pict->rend_info.plane_info[1].offset; + + pict_cmds[VDECFW_CMD_CHROMA2_ALTERNATIVE_PICTURE_BASE_ADDRESS] = + (unsigned int)GET_HOST_ADDR(&buffers->alt_pict->pict_buf->ddbuf_info) + + buffers->alt_pict->rend_info.plane_info[2].offset; +} + +int vxd_getscalercmds(const struct scaler_config *scaler_config, + const struct scaler_pitch *pitch, + const struct scaler_filter *filter, + const struct pixel_pixinfo *out_loop_pixel_info, + struct scaler_params *params, + unsigned int *pict_cmds) +{ + const struct vxd_coreprops *coreprops = scaler_config->coreprops; + /* + * Indirectly detect decoder core type (if HEVC is supported, it has + * to be PVDEC core) and decide if to force luma re-sampling. + */ + unsigned char bforce_luma_resampling = coreprops->hevc[0]; + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, + ALT_OUTPUT_FORMAT, + scaler_config->bseparate_chroma_planes ? 0 : + pixel_get_hw_chroma_format_idc(out_loop_pixel_info->chroma_fmt_idc), + unsigned int, int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, + SCALE_CHROMA_RESAMP_ONLY, bforce_luma_resampling ? 
0 : + (pitch->horiz_luma == FIXED(1, HIGHP)) && + (pitch->vert_luma == FIXED(1, HIGHP)), unsigned int, int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, ALT_MEMORY_PACKING, + pixel_get_hw_memory_packing(out_loop_pixel_info->mem_pkg), + unsigned int, int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, + ALT_BIT_DEPTH_LUMA_MINUS8, + out_loop_pixel_info->bitdepth_y - 8, + unsigned int, unsigned int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, + ALT_BIT_DEPTH_CHROMA_MINUS8, + out_loop_pixel_info->bitdepth_c - 8, + unsigned int, unsigned int); + + /* Scale luma bifilter is always 0 for now */ + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, + SCALE_LUMA_BIFILTER_HORIZ, + 0, unsigned int, int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, + SCALE_LUMA_BIFILTER_VERT, + 0, unsigned int, int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, + SCALE_CHROMA_BIFILTER_HORIZ, + filter->bhoriz_bilinear ? 1 : 0, + unsigned int, int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_ALTERNATIVE_OUTPUT_CONTROL], + MSVDX_CMDS, ALTERNATIVE_OUTPUT_CONTROL, + SCALE_CHROMA_BIFILTER_VERT, + filter->bvert_bilinear ? 1 : 0, unsigned int, int); + + /* for cores 7.x.x and more, precision 3.13 */ + params->fixed_point_shift = 13; + + /* Calculate the fixed-point versions for use by the hardware. */ + params->vert_pitch = (int)((pitch->vert_luma + + (1 << (HIGHP - params->fixed_point_shift - 1))) >> + (HIGHP - params->fixed_point_shift)); + params->vert_startpos = params->vert_pitch >> 1; + params->vert_pitch_chroma = (int)((pitch->vert_chroma + + (1 << (HIGHP - params->fixed_point_shift - 1))) >> + (HIGHP - params->fixed_point_shift)); + params->vert_startpos_chroma = params->vert_pitch_chroma >> 1; + params->horz_pitch = (int)(pitch->horiz_luma >> + (HIGHP - params->fixed_point_shift)); + params->horz_startpos = params->horz_pitch >> 1; + params->horz_pitch_chroma = (int)(pitch->horiz_chroma >> + (HIGHP - params->fixed_point_shift)); + params->horz_startpos_chroma = params->horz_pitch_chroma >> 1; + +#ifdef HAS_HEVC + if (scaler_config->vidstd == VDEC_STD_HEVC) { + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALED_DISPLAY_SIZE], + MSVDX_CMDS, PVDEC_SCALED_DISPLAY_SIZE, + PVDEC_SCALE_DISPLAY_WIDTH, + scaler_config->recon_width - 1, + unsigned int, unsigned int); + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALED_DISPLAY_SIZE], + MSVDX_CMDS, PVDEC_SCALED_DISPLAY_SIZE, + PVDEC_SCALE_DISPLAY_HEIGHT, + scaler_config->recon_height - 1, + unsigned int, unsigned int); + } else { + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALED_DISPLAY_SIZE], + MSVDX_CMDS, SCALED_DISPLAY_SIZE, + SCALE_DISPLAY_WIDTH, + scaler_config->recon_width - 1, + unsigned int, unsigned int); + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALED_DISPLAY_SIZE], + MSVDX_CMDS, SCALED_DISPLAY_SIZE, + SCALE_DISPLAY_HEIGHT, + scaler_config->recon_height - 1, + unsigned int, unsigned int); + } +#else + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALED_DISPLAY_SIZE], + MSVDX_CMDS, SCALED_DISPLAY_SIZE, + SCALE_DISPLAY_WIDTH, + scaler_config->recon_width - 1, + unsigned int, unsigned int); + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALED_DISPLAY_SIZE], + MSVDX_CMDS, SCALED_DISPLAY_SIZE, 
SCALE_DISPLAY_HEIGHT, + scaler_config->recon_height - 1, + unsigned int, unsigned int); +#endif + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALE_OUTPUT_SIZE], + MSVDX_CMDS, SCALE_OUTPUT_SIZE, + SCALE_OUTPUT_WIDTH_MIN1, + scaler_config->scale_width - 1, + unsigned int, unsigned int); + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALE_OUTPUT_SIZE], + MSVDX_CMDS, SCALE_OUTPUT_SIZE, + SCALE_OUTPUT_HEIGHT_MIN1, + scaler_config->scale_height - 1, + unsigned int, unsigned int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_HORIZONTAL_SCALE_CONTROL], + MSVDX_CMDS, HORIZONTAL_SCALE_CONTROL, + HORIZONTAL_SCALE_PITCH, params->horz_pitch, + unsigned int, unsigned int); + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_HORIZONTAL_SCALE_CONTROL], + MSVDX_CMDS, HORIZONTAL_SCALE_CONTROL, + HORIZONTAL_INITIAL_POS, params->horz_startpos, + unsigned int, unsigned int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALE_HORIZONTAL_CHROMA], + MSVDX_CMDS, SCALE_HORIZONTAL_CHROMA, + CHROMA_HORIZONTAL_PITCH, params->horz_pitch_chroma, + unsigned int, unsigned int); + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALE_HORIZONTAL_CHROMA], + MSVDX_CMDS, SCALE_HORIZONTAL_CHROMA, + CHROMA_HORIZONTAL_INITIAL, + params->horz_startpos_chroma, + unsigned int, unsigned int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_VERTICAL_SCALE_CONTROL], + MSVDX_CMDS, VERTICAL_SCALE_CONTROL, + VERTICAL_SCALE_PITCH, params->vert_pitch, + unsigned int, unsigned int); + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_VERTICAL_SCALE_CONTROL], + MSVDX_CMDS, VERTICAL_SCALE_CONTROL, + VERTICAL_INITIAL_POS, params->vert_startpos, + unsigned int, unsigned int); + + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALE_VERTICAL_CHROMA], + MSVDX_CMDS, SCALE_VERTICAL_CHROMA, + CHROMA_VERTICAL_PITCH, params->vert_pitch_chroma, + unsigned int, unsigned int); + REGIO_WRITE_FIELD(pict_cmds[VDECFW_CMD_SCALE_VERTICAL_CHROMA], + MSVDX_CMDS, SCALE_VERTICAL_CHROMA, + CHROMA_VERTICAL_INITIAL, + params->vert_startpos_chroma, + unsigned int, unsigned int); + return 0; +} + +unsigned int vxd_get_codedpicsize(unsigned short width_min1, unsigned short height_min1) +{ + unsigned int reg = 0; + + REGIO_WRITE_FIELD_LITE(reg, MSVDX_CMDS, CODED_PICTURE_SIZE, + CODED_PICTURE_WIDTH, width_min1, + unsigned short); + REGIO_WRITE_FIELD_LITE(reg, MSVDX_CMDS, CODED_PICTURE_SIZE, + CODED_PICTURE_HEIGHT, height_min1, + unsigned short); + + return reg; +} + +unsigned char vxd_get_codedmode(enum vdec_vid_std vidstd) +{ + return (unsigned char)amsvdx_codecmode[vidstd]; +} + +void vxd_get_coreproperties(void *hndl_coreproperties, + struct vxd_coreprops *vxd_coreprops) +{ + struct vxd_core_props *props = + (struct vxd_core_props *)hndl_coreproperties; + + vxd_getcoreproperties(vxd_coreprops, props->core_rev, + props->pvdec_core_id, + props->mmu_config0, + props->mmu_config1, + props->pixel_pipe_cfg, + props->pixel_misc_cfg, + props->pixel_max_frame_cfg); +} + +int vxd_get_pictattrs(unsigned int flags, struct vxd_pict_attrs *pict_attrs) +{ + if (flags & (VXD_FW_MSG_FLAG_DWR | VXD_FW_MSG_FLAG_FATAL)) + pict_attrs->dwrfired = 1; + if (flags & VXD_FW_MSG_FLAG_MMU_FAULT) + pict_attrs->mmufault = 1; + if (flags & VXD_FW_MSG_FLAG_DEV_ERR) + pict_attrs->deverror = 1; + + return 0; +} + +int vxd_get_msgerrattr(unsigned int flags, enum vxd_msg_attr *msg_attr) +{ + if ((flags & ~VXD_FW_MSG_FLAG_CANCELED)) + *msg_attr = VXD_MSG_ATTR_FATAL; + else if ((flags & VXD_FW_MSG_FLAG_CANCELED)) + *msg_attr = VXD_MSG_ATTR_CANCELED; + else + *msg_attr = VXD_MSG_ATTR_NONE; + + return 0; +} + +int vxd_set_msgflag(enum vxd_msg_flag input_flag, 
unsigned int *flags)
+{
+	switch (input_flag) {
+	case VXD_MSG_FLAG_DROP:
+		*flags |= VXD_FW_MSG_FLAG_DROP;
+		break;
+	case VXD_MSG_FLAG_EXCL:
+		*flags |= VXD_FW_MSG_FLAG_EXCL;
+		break;
+	default:
+		return IMG_ERROR_FATAL;
+	}
+
+	return 0;
+}
diff --git a/drivers/staging/media/vxd/decoder/vxd_int.h b/drivers/staging/media/vxd/decoder/vxd_int.h
new file mode 100644
index 000000000000..a294e0d6044f
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vxd_int.h
@@ -0,0 +1,128 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD DEC Common low level core interface component
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Amit Makani
+ *
+ * Re-written for upstreaming
+ *	Sidraya Jayagond
+ *	Prashanth Kumar Amai
+ */
+#ifndef _VXD_INT_H
+#define _VXD_INT_H
+
+#include "fw_interface.h"
+#include "scaler_setup.h"
+#include "vdecdd_defs.h"
+#include "vdecfw_shared.h"
+#include "vdec_defs.h"
+#include "vxd_ext.h"
+#include "vxd_props.h"
+
+/*
+ * Size of buffer used for batching messages
+ */
+#define BATCH_MSG_BUFFER_SIZE (8 * 4096)
+
+#define INTRA_BUF_SIZE (1024 * 32)
+#define AUX_LINE_BUFFER_SIZE (512 * 1024)
+
+#define MAX_PICTURE_WIDTH (4096)
+#define MAX_PICTURE_HEIGHT (4096)
+
+/*
+ * this macro returns the host address of a device buffer.
+ */
+#define GET_HOST_ADDR(buf) ((buf)->dev_virt)
+
+#define GET_HOST_ADDR_OFFSET(buf, offset) (((buf)->dev_virt) + (offset))
+
+/*
+ * The extended stride alignment for VXD.
+ */
+#define VDEC_VXD_EXT_STRIDE_ALIGNMENT_DEFAULT (64)
+
+struct vxd_buffers {
+	struct vdecdd_ddpict_buf *recon_pict;
+	struct vdecdd_ddpict_buf *alt_pict;
+	struct vidio_ddbufinfo *intra_bufinfo;
+	struct vidio_ddbufinfo *auxline_bufinfo;
+	struct vidio_ddbufinfo *err_pict_bufinfo;
+	unsigned int intra_bufsize_per_pipe;
+	unsigned int auxline_bufsize_per_pipe;
+	struct vidio_ddbufinfo *msb_bufinfo;
+	unsigned char btwopass;
+};
+
+struct pvdec_core_rev {
+	unsigned int maj_rev;
+	unsigned int min_rev;
+	unsigned int maint_rev;
+	unsigned int int_rev;
+};
+
+/*
+ * this has all that it needs to translate a Stream Unit for a picture
+ * into a transaction.
+ */
+void vxd_set_altpictcmds(const struct vdecdd_str_unit *str_unit,
+			 const struct vdec_str_configdata *str_configdata,
+			 const struct vdec_str_opconfig *output_config,
+			 const struct vxd_coreprops *coreprops,
+			 const struct vxd_buffers *buffers,
+			 unsigned int *pict_cmds);
+
+/*
+ * this has all that it needs to translate a Stream Unit for
+ * a picture into a transaction.
+ */
+void vxd_set_reconpictcmds(const struct vdecdd_str_unit *str_unit,
+			   const struct vdec_str_configdata *str_configdata,
+			   const struct vdec_str_opconfig *output_config,
+			   const struct vxd_coreprops *coreprops,
+			   const struct vxd_buffers *buffers,
+			   unsigned int *pict_cmds);
+
+int vxd_getscalercmds(const struct scaler_config *scaler_config,
+		      const struct scaler_pitch *pitch,
+		      const struct scaler_filter *filter,
+		      const struct pixel_pixinfo *out_loop_pixel_info,
+		      struct scaler_params *params,
+		      unsigned int *pict_cmds);
+
+/*
+ * this creates the value of the MSVDX_CMDS_CODED_PICTURE_SIZE register.
+ */
+unsigned int vxd_get_codedpicsize(unsigned short width_min1, unsigned short height_min1);
+
+/*
+ * returns the HW codec mode based on the video standard.
+ */
+unsigned char vxd_get_codedmode(enum vdec_vid_std vidstd);
+
+/*
+ * translates core properties to the form of the struct vxd_coreprops.
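+ * The opaque handle is expected to wrap a struct vxd_core_props
+ * populated from the core registers at device discovery.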
+ */ +void vxd_get_coreproperties(void *hndl_coreproperties, + struct vxd_coreprops *vxd_coreprops); + +/* + * translates picture attributes to the form of the VXD_sPictAttrs struct. + */ +int vxd_get_pictattrs(unsigned int flags, struct vxd_pict_attrs *pict_attrs); + +/* + * translates message attributes to the form of the VXD_eMsgAttr struct. + */ +int vxd_get_msgerrattr(unsigned int flags, enum vxd_msg_attr *msg_attr); + +/* + * sets a message flag. + */ +int vxd_set_msgflag(enum vxd_msg_flag input_flag, unsigned int *flags); + +#endif /* _VXD_INT_H */ From patchwork Wed Aug 18 14:10:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sidraya Jayagond X-Patchwork-Id: 499239 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A0F4DC43214 for ; Wed, 18 Aug 2021 14:12:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8E137610D2 for ; Wed, 18 Aug 2021 14:12:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239379AbhHRONM (ORCPT ); Wed, 18 Aug 2021 10:13:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46488 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239214AbhHRONB (ORCPT ); Wed, 18 Aug 2021 10:13:01 -0400 Received: from mail-pg1-x52d.google.com (mail-pg1-x52d.google.com [IPv6:2607:f8b0:4864:20::52d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D031AC061764 for ; Wed, 18 Aug 2021 07:12:25 -0700 (PDT) Received: by mail-pg1-x52d.google.com with SMTP id y23so2364566pgi.7 for ; Wed, 18 Aug 2021 07:12:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pathpartnertech.com; s=google; h=mime-version:from:to:cc:subject:date:message-id:in-reply-to :references; bh=/sNuf5t+quOhrjta/GUadJ1tFtVyNFM0mWdDejN1J44=; b=eRLXUiUBh+S4fS1YM0U0Lc7mekPMZwHvhsEaYhaqcj+uNF9uOAwnS0ucp3aXNsB4tF z24r/oR0+3yXCC9Y2iPsAk2cQ4tui5ZcM1FzyqJg8+a7gNkbj8Pb1ZJv14nVkzYmVDAI bytX8sRH27oW8MbrVcbMy0HvU+10x+UZ+r+q4yOFseacYdZCU/miBv+xn1FKba5OOJj5 3P+bLRDxHjSouKzTAnzSs6Nc1ycmV5fz8nIj7dlgawF1EcxU8ifiTIx9RzXdbdns44wp Gk1A0VIsetiCAqj4bZuU68aEg5jmb2jUx8wANgcrjmJidp8Z8OiHg5oBsRSXdys99tGQ zltQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=mime-version:x-gm-message-state:from:to:cc:subject:date:message-id :in-reply-to:references; bh=/sNuf5t+quOhrjta/GUadJ1tFtVyNFM0mWdDejN1J44=; b=GvvqexeYW0vCaxuy9dl1x7OWOKEoVuCLN1RIEmX2j8BOxMOWmxkc2PesbHq3DYLODz 8RJ7b9nHDcQfAdDC57DZ+S9bR51ncXF1qCkWlcpy/nHNXB+vzK5UUUG1q1DaNjYJT494 FWu1oMqvpYalCxRutj1ce5g4PSBrc/+PfxxRXO+0tsTzlskdljS8Y6+r/N3+Dp25Nvy6 VRAZpK1QG1TScAM1oxqL3dGPLyncHl3Gpu1ddlKXDbL4mqaP2IEeKt14ai0C84AhfoGZ 2EISwSNZDVk3cifuLRF/2KYxDHGpesWe44/qkfoMLz/LYOUM6fUIyLe2ttUlRHOJACAo fI1w== MIME-Version: 1.0 X-Gm-Message-State: AOAM533bcZw7JQUlCfdf11olJY8498TwHfGcqgzRvgSPUxxvOYCvdfCn HCd092hVRYjQdcXwdJAsPsDCpRIkw9q47zXb70WZzbxd8zXmPWXdno48QNBbCkuhdeTbBLBCYVB GEwdKDuHzrO2W1Crg X-Google-Smtp-Source: 
ABdhPJyaEg2WvaDP4t7BkCZjE37ZG8NCeH4TMmkzpEQqCLcYzflg6Pl9EpzdwyEnIPiCQiyroNgsmw==
X-Received: by 2002:a63:b47:: with SMTP id a7mr9171224pgl.181.1629295945220;
 Wed, 18 Aug 2021 07:12:25 -0700 (PDT)
Received: from localhost.localdomain ([49.207.214.181]) by smtp.gmail.com with
 ESMTPSA id e8sm8084343pgg.31.2021.08.18.07.12.22 (version=TLS1_3
 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 18 Aug 2021 07:12:24 -0700 (PDT)
From: sidraya.bj@pathpartnertech.com
To: gregkh@linuxfoundation.org, linux-staging@lists.linux.dev,
 linux-kernel@vger.kernel.org
Cc: prashanth.ka@pathpartnertech.com, praneeth@ti.com, mchehab@kernel.org,
 linux-media@vger.kernel.org, praveen.ap@pathpartnertech.com, Sidraya
Subject: [PATCH 10/30] v4l: vxd-dec: Add utility modules
Date: Wed, 18 Aug 2021 19:40:17 +0530
Message-Id: <20210818141037.19990-11-sidraya.bj@pathpartnertech.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
References: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
Precedence: bulk
List-ID: X-Mailing-List: linux-media@vger.kernel.org

From: Sidraya

Contains utility modules for doubly linked queues, singly linked lists
and a work queue.

Signed-off-by: Lakshmi Sankar
Signed-off-by: Sidraya
---
 MAINTAINERS | 6 +
 drivers/staging/media/vxd/common/dq.c | 248 ++++++++++++++++++
 drivers/staging/media/vxd/common/dq.h | 36 +++
 drivers/staging/media/vxd/common/lst.c | 119 +++++++++
 drivers/staging/media/vxd/common/lst.h | 37 +++
 drivers/staging/media/vxd/common/work_queue.c | 188 +++++++++++++
 drivers/staging/media/vxd/common/work_queue.h | 66 +++++
 7 files changed, 700 insertions(+)
 create mode 100644 drivers/staging/media/vxd/common/dq.c
 create mode 100644 drivers/staging/media/vxd/common/dq.h
 create mode 100644 drivers/staging/media/vxd/common/lst.c
 create mode 100644 drivers/staging/media/vxd/common/lst.h
 create mode 100644 drivers/staging/media/vxd/common/work_queue.c
 create mode 100644 drivers/staging/media/vxd/common/work_queue.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 0468aaac3b7d..2668eeb89a34 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19537,6 +19537,8 @@ M: Sidraya Jayagond
 L: linux-media@vger.kernel.org
 S: Maintained
 F: Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
+F: drivers/staging/media/vxd/common/dq.c
+F: drivers/staging/media/vxd/common/dq.h
 F: drivers/staging/media/vxd/common/idgen_api.c
 F: drivers/staging/media/vxd/common/idgen_api.h
 F: drivers/staging/media/vxd/common/img_mem_man.c
@@ -19544,6 +19546,10 @@ F: drivers/staging/media/vxd/common/img_mem_man.h
 F: drivers/staging/media/vxd/common/img_mem_unified.c
 F: drivers/staging/media/vxd/common/imgmmu.c
 F: drivers/staging/media/vxd/common/imgmmu.h
+F: drivers/staging/media/vxd/common/lst.c
+F: drivers/staging/media/vxd/common/lst.h
+F: drivers/staging/media/vxd/common/work_queue.c
+F: drivers/staging/media/vxd/common/work_queue.h
 F: drivers/staging/media/vxd/decoder/hw_control.c
 F: drivers/staging/media/vxd/decoder/hw_control.h
 F: drivers/staging/media/vxd/decoder/img_dec_common.h
diff --git a/drivers/staging/media/vxd/common/dq.c b/drivers/staging/media/vxd/common/dq.c
new file mode 100644
index 000000000000..890be5ed00e7
--- /dev/null
+++ b/drivers/staging/media/vxd/common/dq.c
@@ -0,0 +1,248 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Utility module for doubly linked queues.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Lakshmi Sankar
+ *
+ * Re-written for upstreaming
+ *	Sidraya Jayagond
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include "dq.h"
+#include "img_errors.h"
+
+void dq_init(struct dq_linkage_t *queue)
+{
+	queue->fwd = (struct dq_linkage_t *)queue;
+	queue->back = (struct dq_linkage_t *)queue;
+}
+
+void dq_addhead(struct dq_linkage_t *queue, void *item)
+{
+	IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->back);
+	IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->fwd);
+
+	if (!((struct dq_linkage_t *)queue)->back ||
+	    !((struct dq_linkage_t *)queue)->fwd)
+		return;
+
+	((struct dq_linkage_t *)item)->back = (struct dq_linkage_t *)queue;
+	((struct dq_linkage_t *)item)->fwd =
+					((struct dq_linkage_t *)queue)->fwd;
+	((struct dq_linkage_t *)queue)->fwd->back = (struct dq_linkage_t *)item;
+	((struct dq_linkage_t *)queue)->fwd = (struct dq_linkage_t *)item;
+}
+
+void dq_addtail(struct dq_linkage_t *queue, void *item)
+{
+	IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->back);
+	IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->fwd);
+
+	if (!((struct dq_linkage_t *)queue)->back ||
+	    !((struct dq_linkage_t *)queue)->fwd)
+		return;
+
+	((struct dq_linkage_t *)item)->fwd = (struct dq_linkage_t *)queue;
+	((struct dq_linkage_t *)item)->back =
+					((struct dq_linkage_t *)queue)->back;
+	((struct dq_linkage_t *)queue)->back->fwd = (struct dq_linkage_t *)item;
+	((struct dq_linkage_t *)queue)->back = (struct dq_linkage_t *)item;
+}
+
+int dq_empty(struct dq_linkage_t *queue)
+{
+	IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->back);
+	IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->fwd);
+
+	if (!((struct dq_linkage_t *)queue)->back ||
+	    !((struct dq_linkage_t *)queue)->fwd)
+		return 1;
+
+	return ((queue)->fwd == (struct dq_linkage_t *)(queue));
+}
+
+void *dq_first(struct dq_linkage_t *queue)
+{
+	struct dq_linkage_t *temp = queue->fwd;
+
+	IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->back);
+	IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->fwd);
+
+	if (!((struct dq_linkage_t *)queue)->back ||
+	    !((struct dq_linkage_t *)queue)->fwd)
+		return NULL;
+
+	return temp == (struct dq_linkage_t *)queue ? NULL : temp;
+}
+
+void *dq_last(struct dq_linkage_t *queue)
+{
+	struct dq_linkage_t *temp = queue->back;
+
+	IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->back);
+	IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->fwd);
+
+	if (!((struct dq_linkage_t *)queue)->back ||
+	    !((struct dq_linkage_t *)queue)->fwd)
+		return NULL;
+
+	return temp == (struct dq_linkage_t *)queue ?
NULL : temp; +} + +void *dq_next(void *item) +{ + IMG_DBG_ASSERT(((struct dq_linkage_t *)item)->back); + IMG_DBG_ASSERT(((struct dq_linkage_t *)item)->fwd); + + if (!((struct dq_linkage_t *)item)->back || + !((struct dq_linkage_t *)item)->fwd) + return NULL; + + return ((struct dq_linkage_t *)item)->fwd; +} + +void *dq_previous(void *item) +{ + IMG_DBG_ASSERT(((struct dq_linkage_t *)item)->back); + IMG_DBG_ASSERT(((struct dq_linkage_t *)item)->fwd); + + if (!((struct dq_linkage_t *)item)->back || + !((struct dq_linkage_t *)item)->fwd) + return NULL; + + return ((struct dq_linkage_t *)item)->back; +} + +void dq_remove(void *item) +{ + IMG_DBG_ASSERT(((struct dq_linkage_t *)item)->back); + IMG_DBG_ASSERT(((struct dq_linkage_t *)item)->fwd); + + if (!((struct dq_linkage_t *)item)->back || + !((struct dq_linkage_t *)item)->fwd) + return; + + ((struct dq_linkage_t *)item)->fwd->back = + ((struct dq_linkage_t *)item)->back; + ((struct dq_linkage_t *)item)->back->fwd = + ((struct dq_linkage_t *)item)->fwd; + + /* make item linkages safe for "orphan" removes */ + ((struct dq_linkage_t *)item)->fwd = item; + ((struct dq_linkage_t *)item)->back = item; +} + +void *dq_removehead(struct dq_linkage_t *queue) +{ + struct dq_linkage_t *temp; + + IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->back); + IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->fwd); + + if (!((struct dq_linkage_t *)queue)->back || + !((struct dq_linkage_t *)queue)->fwd) + return NULL; + + if ((queue)->fwd == (struct dq_linkage_t *)(queue)) + return NULL; + + temp = ((struct dq_linkage_t *)queue)->fwd; + temp->fwd->back = temp->back; + temp->back->fwd = temp->fwd; + + /* make item linkages safe for "orphan" removes */ + temp->fwd = temp; + temp->back = temp; + return temp; +} + +void *dq_removetail(struct dq_linkage_t *queue) +{ + struct dq_linkage_t *temp; + + IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->back); + IMG_DBG_ASSERT(((struct dq_linkage_t *)queue)->fwd); + + if (!((struct dq_linkage_t *)queue)->back || + !((struct dq_linkage_t *)queue)->fwd) + return NULL; + + if ((queue)->fwd == (struct dq_linkage_t *)(queue)) + return NULL; + + temp = ((struct dq_linkage_t *)queue)->back; + temp->fwd->back = temp->back; + temp->back->fwd = temp->fwd; + + /* make item linkages safe for "orphan" removes */ + temp->fwd = temp; + temp->back = temp; + + return temp; +} + +void dq_addbefore(void *successor, void *item) +{ + IMG_DBG_ASSERT(((struct dq_linkage_t *)successor)->back); + IMG_DBG_ASSERT(((struct dq_linkage_t *)successor)->fwd); + + if (!((struct dq_linkage_t *)successor)->back || + !((struct dq_linkage_t *)successor)->fwd) + return; + + ((struct dq_linkage_t *)item)->fwd = (struct dq_linkage_t *)successor; + ((struct dq_linkage_t *)item)->back = + ((struct dq_linkage_t *)successor)->back; + ((struct dq_linkage_t *)item)->back->fwd = (struct dq_linkage_t *)item; + ((struct dq_linkage_t *)successor)->back = (struct dq_linkage_t *)item; +} + +void dq_addafter(void *predecessor, void *item) +{ + IMG_DBG_ASSERT(((struct dq_linkage_t *)predecessor)->back); + IMG_DBG_ASSERT(((struct dq_linkage_t *)predecessor)->fwd); + + if (!((struct dq_linkage_t *)predecessor)->back || + !((struct dq_linkage_t *)predecessor)->fwd) + return; + + ((struct dq_linkage_t *)item)->fwd = + ((struct dq_linkage_t *)predecessor)->fwd; + ((struct dq_linkage_t *)item)->back = + (struct dq_linkage_t *)predecessor; + ((struct dq_linkage_t *)item)->fwd->back = (struct dq_linkage_t *)item; + ((struct dq_linkage_t *)predecessor)->fwd = (struct dq_linkage_t *)item; +} + 
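+/*
+ * Note on "orphan" items: dq_remove(), dq_removehead() and dq_removetail()
+ * leave the removed item's linkage pointing at itself, so a second
+ * dq_remove() on an already-removed item is a harmless no-op rather than
+ * list corruption. That is why the helpers above reset fwd/back after
+ * unlinking.
+ */
+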
+void dq_move(struct dq_linkage_t *from, struct dq_linkage_t *to) +{ + IMG_DBG_ASSERT(((struct dq_linkage_t *)from)->back); + IMG_DBG_ASSERT(((struct dq_linkage_t *)from)->fwd); + IMG_DBG_ASSERT(((struct dq_linkage_t *)to)->back); + IMG_DBG_ASSERT(((struct dq_linkage_t *)to)->fwd); + + if (!((struct dq_linkage_t *)from)->back || + !((struct dq_linkage_t *)from)->fwd || + !((struct dq_linkage_t *)to)->back || + !((struct dq_linkage_t *)to)->fwd) + return; + + if ((from)->fwd == (struct dq_linkage_t *)(from)) { + dq_init(to); + } else { + *to = *from; + to->fwd->back = (struct dq_linkage_t *)to; + to->back->fwd = (struct dq_linkage_t *)to; + dq_init(from); + } +} diff --git a/drivers/staging/media/vxd/common/dq.h b/drivers/staging/media/vxd/common/dq.h new file mode 100644 index 000000000000..4663a92aaf7a --- /dev/null +++ b/drivers/staging/media/vxd/common/dq.h @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Utility module for doubly linked queues. + * + * Copyright (c) Imagination Technologies Ltd. + * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Lakshmi Sankar + */ +#ifndef DQ_H +#define DQ_H + +/* dq structure */ +struct dq_linkage_t { + struct dq_linkage_t *fwd; + struct dq_linkage_t *back; +}; + +/* Function Prototypes */ +void dq_addafter(void *predecessor, void *item); +void dq_addbefore(void *successor, void *item); +void dq_addhead(struct dq_linkage_t *queue, void *item); +void dq_addtail(struct dq_linkage_t *queue, void *item); +int dq_empty(struct dq_linkage_t *queue); +void *dq_first(struct dq_linkage_t *queue); +void *dq_last(struct dq_linkage_t *queue); +void dq_init(struct dq_linkage_t *queue); +void dq_move(struct dq_linkage_t *from, struct dq_linkage_t *to); +void *dq_next(void *item); +void *dq_previous(void *item); +void dq_remove(void *item); +void *dq_removehead(struct dq_linkage_t *queue); +void *dq_removetail(struct dq_linkage_t *queue); + +#endif /* #define DQ_H */ diff --git a/drivers/staging/media/vxd/common/lst.c b/drivers/staging/media/vxd/common/lst.c new file mode 100644 index 000000000000..bb047ab6d598 --- /dev/null +++ b/drivers/staging/media/vxd/common/lst.c @@ -0,0 +1,119 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * List processing primitives. + * + * Copyright (c) Imagination Technologies Ltd. 
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Author: + * Lakshmi Sankar + */ + +#include "lst.h" + +#ifndef NULL +#define NULL ((void *)0) +#endif + +void lst_add(struct lst_t *list, void *item) +{ + if (!list->first) { + list->first = item; + list->last = item; + } else { + *list->last = item; + list->last = item; + } + *((void **)item) = NULL; +} + +void lst_addhead(struct lst_t *list, void *item) +{ + if (!list->first) { + list->first = item; + list->last = item; + *((void **)item) = NULL; + } else { + *((void **)item) = list->first; + list->first = item; + } +} + +int lst_empty(struct lst_t *list) +{ + if (!list->first) + return 1; + else + return 0; +} + +void *lst_first(struct lst_t *list) +{ + return list->first; +} + +void lst_init(struct lst_t *list) +{ + list->first = NULL; + list->last = NULL; +} + +void *lst_last(struct lst_t *list) +{ + return list->last; +} + +void *lst_next(void *item) +{ + return *((void **)item); +} + +void *lst_removehead(struct lst_t *list) +{ + void **temp = list->first; + + if (temp) { + list->first = *temp; + if (!list->first) + list->last = NULL; + } + return temp; +} + +void *lst_remove(struct lst_t *list, void *item) +{ + void **p; + void **q; + + p = (void **)list; + q = *p; + while (q) { + if (q == item) { + *p = *q; + if (list->last == q) + list->last = p; + return item; + } + p = q; + q = *p; + } + + return NULL; +} + +int lst_check(struct lst_t *list, void *item) +{ + void **p; + void **q; + + p = (void **)list; + q = *p; + while (q) { + if (q == item) + return 1; + p = q; + q = *p; + } + + return 0; +} diff --git a/drivers/staging/media/vxd/common/lst.h b/drivers/staging/media/vxd/common/lst.h new file mode 100644 index 000000000000..ccf6eed19019 --- /dev/null +++ b/drivers/staging/media/vxd/common/lst.h @@ -0,0 +1,37 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * List processing primitives. + * + * Copyright (c) Imagination Technologies Ltd. + * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Author: + * Lakshmi Sankar + */ +#ifndef __LIST_H__ +#define __LIST_H__ + +#include + +struct lst_t { + void **first; + void **last; +}; + +void lst_add(struct lst_t *list, void *item); +void lst_addhead(struct lst_t *list, void *item); + +/** + * lst_empty- Is list empty? + * @list: pointer to list + */ +int lst_empty(struct lst_t *list); +void *lst_first(struct lst_t *list); +void lst_init(struct lst_t *list); +void *lst_last(struct lst_t *list); +void *lst_next(void *item); +void *lst_remove(struct lst_t *list, void *item); +void *lst_removehead(struct lst_t *list); +int lst_check(struct lst_t *list, void *item); + +#endif /* __LIST_H__ */ diff --git a/drivers/staging/media/vxd/common/work_queue.c b/drivers/staging/media/vxd/common/work_queue.c new file mode 100644 index 000000000000..6bd91a7fdbf4 --- /dev/null +++ b/drivers/staging/media/vxd/common/work_queue.c @@ -0,0 +1,188 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Work Queue Handling for Linux + * + * Copyright (c) Imagination Technologies Ltd. 
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Lakshmi Sankar
+ *
+ * Re-written for upstream
+ *	Prashanth Kumar Amai
+ */
+
+#include
+#include
+#include
+
+#include "work_queue.h"
+
+/* Mutex protecting the work/delayed-work bookkeeping lists. */
+DEFINE_MUTEX(mutex);
+
+struct node {
+	void **key;
+	struct node *next;
+};
+
+struct node *work_head;
+struct node *delayed_work_head;
+
+void init_work(void **work_args, void *work_fn, uint8_t hwa_id)
+{
+	struct work_struct **work = (struct work_struct **)work_args;
+	//create a link
+	struct node *link = kmalloc(sizeof(*link), GFP_KERNEL);
+
+	if (!link) {
+		*work = NULL;
+		return;
+	}
+
+	/* allocate the work item itself, not just a pointer to it */
+	*work = kzalloc(sizeof(**work), GFP_KERNEL);
+	if (!(*work)) {
+		pr_err("Memory allocation failed for work_queue\n");
+		kfree(link);
+		return;
+	}
+	INIT_WORK(*work, work_fn);
+
+	link->key = (void **)work;
+	mutex_lock(&mutex);
+	//point it to old first node
+	link->next = work_head;
+
+	//point first to new first node
+	work_head = link;
+	mutex_unlock(&mutex);
+}
+
+void init_delayed_work(void **work_args, void *work_fn, uint8_t hwa_id)
+{
+	struct delayed_work **work = (struct delayed_work **)work_args;
+	//create a link
+	struct node *link = kmalloc(sizeof(*link), GFP_KERNEL);
+
+	if (!link) {
+		*work = NULL;
+		return;
+	}
+
+	/* allocate the delayed work item itself, not just a pointer to it */
+	*work = kzalloc(sizeof(**work), GFP_KERNEL);
+	if (!(*work)) {
+		pr_err("Memory allocation failed for delayed_work_queue\n");
+		kfree(link);
+		return;
+	}
+	INIT_DELAYED_WORK(*work, work_fn);
+
+	link->key = (void **)work;
+	mutex_lock(&mutex);
+	//point it to old first node
+	link->next = delayed_work_head;
+
+	//point first to new first node
+	delayed_work_head = link;
+	mutex_unlock(&mutex);
+}
+
+/**
+ * get_work_buff - look up the bookkeeping entry for a work item
+ * @key: the work_struct pointer to look up
+ * @flag: if non-zero, also unlink the matching node
+ */
+void *get_work_buff(void *key, signed char flag)
+{
+	struct node *data = NULL;
+	void *work_new = NULL;
+	struct node *temp = NULL;
+	struct node *previous = NULL;
+	struct work_struct **work = NULL;
+
+	//start from the first link
+	mutex_lock(&mutex);
+	temp = work_head;
+
+	//if list is empty
+	if (!work_head) {
+		mutex_unlock(&mutex);
+		return NULL;
+	}
+
+	work = ((struct work_struct **)(temp->key));
+	//navigate through list
+	while (*work != key) {
+		//if it is last node
+		if (!temp->next) {
+			mutex_unlock(&mutex);
+			return NULL;
+		}
+		//store reference to current link
+		previous = temp;
+		//move to next link
+		temp = temp->next;
+		work = ((struct work_struct **)(temp->key));
+	}
+
+	if (flag) {
+		//found a match, update the link
+		if (temp == work_head) {
+			//change first to point to next link
+			work_head = work_head->next;
+		} else {
+			//bypass the current link
+			previous->next = temp->next;
+		}
+	}
+
+	mutex_unlock(&mutex);
+	data = temp;
+	if (data) {
+		work_new = data->key;
+		if (flag)
+			kfree(data);
+	}
+	return work_new;
+}
+
+void *get_delayed_work_buff(void *key, signed char flag)
+{
+	struct node *data = NULL;
+	void *dwork_new = NULL;
+	struct node *temp = NULL;
+	struct delayed_work **dwork = NULL;
+
+	if (flag) {
+		/* This condition is true when the kernel module is removed */
+		return delayed_work_head;
+	}
+	//start from the first link
+	mutex_lock(&mutex);
+	temp = delayed_work_head;
+
+	//if list is empty
+	if (!delayed_work_head) {
+		mutex_unlock(&mutex);
+		return NULL;
+	}
+
+	dwork = ((struct delayed_work **)(temp->key));
+	//navigate through list
+	while (&(*dwork)->work != key) {
+		//if it is last node
+		if (!temp->next) {
+			mutex_unlock(&mutex);
+			return NULL;
+		}
+		//move to next link
+		temp = temp->next;
+		dwork = ((struct delayed_work **)(temp->key));
+	}
+
+	mutex_unlock(&mutex);
+	data = temp;
+	if (data)
+		dwork_new = data->key;
+
+	/* flag != 0 was handled above, so the node is never freed here */
+	return dwork_new;
+}
diff --git a/drivers/staging/media/vxd/common/work_queue.h b/drivers/staging/media/vxd/common/work_queue.h
new file mode 100644
index 000000000000..44ed423334e2
--- /dev/null
+++ b/drivers/staging/media/vxd/common/work_queue.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Work Queue Related Definitions
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Lakshmi Sankar
+ *
+ * Re-written for upstream
+ *	Prashanth Kumar Amai
+ */
+
+#ifndef WORKQUEUE_H_
+#define WORKQUEUE_H_
+
+#include
+
+enum {
+	HWA_DECODER = 0,
+	HWA_ENCODER = 1,
+	HWA_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/**
+ * init_work - initialize a work item and record it in the linked list
+ * @work_args: pointer to the caller's work_struct pointer; *work_args is
+ *             allocated and initialized here
+ * @work_fn: work handler function pointer
+ * @hwa_id: hardware accelerator ID (HWA_DECODER or HWA_ENCODER)
+ *
+ * Allocates and initializes a work_struct, sets the handler function
+ * (passed by the user) and saves the given pointer (work_args) in the
+ * bookkeeping linked list.
+ */
+void init_work(void **work_args, void *work_fn, uint8_t hwa_id);
+
+/**
+ * init_delayed_work - initialize a delayed work item and record it in the
+ *                     linked list
+ * @work_args: pointer to the caller's delayed_work pointer; *work_args is
+ *             allocated and initialized here
+ * @work_fn: work handler function pointer
+ * @hwa_id: hardware accelerator ID (HWA_DECODER or HWA_ENCODER)
+ */
+void init_delayed_work(void **work_args, void *work_fn, uint8_t hwa_id);
+
+/**
+ * get_delayed_work_buff - return the base address for a given work pointer
+ * @key: the work_struct pointer embedded in the delayed work item
+ * @flag: if TRUE, also delete the node from the linked list
+ *
+ * Return: base address of the matching bookkeeping entry.
+ */
+void *get_delayed_work_buff(void *key, signed char flag);
+
+/**
+ * get_work_buff - return the base address for a given work pointer
+ * @key: the given work_struct pointer
+ * @flag: if TRUE, also delete the node from the linked list
+ *
+ * Return: base address of the matching bookkeeping entry.
+ */
+void *get_work_buff(void *key, signed char flag);
+
+#endif /* WORKQUEUE_H_ */
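A driver is expected to use these helpers roughly as follows. This is a
minimal sketch against the API above; the handler and variable names are
illustrative and error handling is elided. Note that get_work_buff()
returns the address of the caller's pointer variable (the "base address"
the header refers to), not the work_struct itself:

  static struct work_struct *decode_work;

  /* Illustrative handler name; not part of this patch. */
  static void decode_done(struct work_struct *work)
  {
          /* Look up our anchor without unlinking it (flag = 0). */
          struct work_struct **anchor = get_work_buff(work, 0);

          if (anchor && *anchor == work)
                  pr_info("decode work completed\n");
  }

  static void decode_example(void)
  {
          /* Allocates *decode_work, INIT_WORKs it and records the anchor. */
          init_work((void **)&decode_work, decode_done, HWA_DECODER);
          if (decode_work)
                  schedule_work(decode_work);
  }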
From patchwork Wed Aug 18 14:10:18 2021
From: sidraya.bj@pathpartnertech.com
To: gregkh@linuxfoundation.org, linux-staging@lists.linux.dev,
 linux-kernel@vger.kernel.org
Cc: prashanth.ka@pathpartnertech.com, praneeth@ti.com, mchehab@kernel.org,
 linux-media@vger.kernel.org, praveen.ap@pathpartnertech.com, Sidraya
Subject: [PATCH 11/30] v4l: vxd-dec: Add TALMMU module
Date: Wed, 18 Aug 2021 19:40:18 +0530
Message-Id: <20210818141037.19990-12-sidraya.bj@pathpartnertech.com>
In-Reply-To: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
References: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
X-Mailing-List: linux-media@vger.kernel.org

From: Sidraya

It contains the implementation of address allocation management APIs,
list processing primitives, generic resource allocation, self-scaling
hash tables and an object pool memory allocator, which are needed for
the TALMMU functionality.

Signed-off-by: Lakshmi Sankar
Signed-off-by: Sidraya
---
 MAINTAINERS                                   |  10 +
 drivers/staging/media/vxd/common/addr_alloc.c | 499 +++++++++
 drivers/staging/media/vxd/common/addr_alloc.h | 238 +++++
 drivers/staging/media/vxd/common/hash.c       | 481 +++++++++
 drivers/staging/media/vxd/common/hash.h       |  86 ++
 drivers/staging/media/vxd/common/pool.c       | 228 ++++
 drivers/staging/media/vxd/common/pool.h       |  66 ++
 drivers/staging/media/vxd/common/ra.c         | 972 ++++++++++++++++++
 drivers/staging/media/vxd/common/ra.h         | 200 ++++
 drivers/staging/media/vxd/common/talmmu_api.c | 753 ++++++++++++++
 drivers/staging/media/vxd/common/talmmu_api.h | 246 +++++
 11 files changed, 3779 insertions(+)
 create mode 100644 drivers/staging/media/vxd/common/addr_alloc.c
 create mode 100644 drivers/staging/media/vxd/common/addr_alloc.h
 create mode 100644 drivers/staging/media/vxd/common/hash.c
 create mode 100644 drivers/staging/media/vxd/common/hash.h
 create mode 100644 drivers/staging/media/vxd/common/pool.c
 create mode 100644 drivers/staging/media/vxd/common/pool.h
 create mode 100644 drivers/staging/media/vxd/common/ra.c
 create mode 100644 drivers/staging/media/vxd/common/ra.h
 create mode 100644 drivers/staging/media/vxd/common/talmmu_api.c
 create mode 100644 drivers/staging/media/vxd/common/talmmu_api.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 2668eeb89a34..2b0d0708d852 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19537,8 +19537,12 @@ M: Sidraya Jayagond
 L: linux-media@vger.kernel.org
 S: Maintained
 F: Documentation/devicetree/bindings/media/img,d5520-vxd.yaml
+F: drivers/staging/media/vxd/common/addr_alloc.c
+F: drivers/staging/media/vxd/common/addr_alloc.h
 F: drivers/staging/media/vxd/common/dq.c
 F: drivers/staging/media/vxd/common/dq.h
+F: drivers/staging/media/vxd/common/hash.c
+F: drivers/staging/media/vxd/common/hash.h
 F: drivers/staging/media/vxd/common/idgen_api.c
 F: drivers/staging/media/vxd/common/idgen_api.h
 F: drivers/staging/media/vxd/common/img_mem_man.c
@@ -19548,6 +19552,12 @@ F: drivers/staging/media/vxd/common/img_mem_man.h
 F: drivers/staging/media/vxd/common/img_mem_unified.c
 F: drivers/staging/media/vxd/common/imgmmu.c
 F: drivers/staging/media/vxd/common/imgmmu.h
 F: drivers/staging/media/vxd/common/lst.c
 F: drivers/staging/media/vxd/common/lst.h
+F: drivers/staging/media/vxd/common/pool.c
+F: drivers/staging/media/vxd/common/pool.h
+F: drivers/staging/media/vxd/common/ra.c
+F: drivers/staging/media/vxd/common/ra.h
+F: drivers/staging/media/vxd/common/talmmu_api.c
+F: drivers/staging/media/vxd/common/talmmu_api.h
 F: drivers/staging/media/vxd/common/work_queue.c
 F: drivers/staging/media/vxd/common/work_queue.h
 F: drivers/staging/media/vxd/decoder/hw_control.c
diff --git a/drivers/staging/media/vxd/common/addr_alloc.c b/drivers/staging/media/vxd/common/addr_alloc.c
new file mode
100644 index 000000000000..393d309b2c0c --- /dev/null +++ b/drivers/staging/media/vxd/common/addr_alloc.c @@ -0,0 +1,499 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Address allocation APIs - used to manage address allocation + * with a number of predefined regions. + * + * Copyright (c) Imagination Technologies Ltd. + * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Lakshmi Sankar + * + * Re-written for upstream + * Sidraya Jayagond + */ + +#include +#include +#include +#include +#include +#include +#include + +#include "addr_alloc.h" +#include "hash.h" +#include "img_errors.h" + +/* Global context. */ +static struct addr_context global_ctx = {0}; +/* Sub-system initialized. */ +static int global_initialized; +/* Count of contexts. */ +static unsigned int num_ctx; +/* Global mutex */ +static struct mutex *global_lock; + +/** + * addr_initialise - addr_initialise + */ + +int addr_initialise(void) +{ + unsigned int result = IMG_ERROR_ALREADY_INITIALISED; + + /* If we are not initialized */ + if (!global_initialized) + result = addr_cx_initialise(&global_ctx); + return result; +} + +int addr_cx_initialise(struct addr_context * const context) +{ + unsigned int result = IMG_ERROR_FATAL; + + if (!context) + return IMG_ERROR_INVALID_PARAMETERS; + + if (!global_initialized) { + /* Initialise context */ + memset(context, 0x00, sizeof(struct addr_context)); + + /* If no mutex associated with this resource */ + if (!global_lock) { + /* Create one */ + + global_lock = kzalloc(sizeof(*global_lock), GFP_KERNEL); + if (!global_lock) + return -ENOMEM; + + mutex_init(global_lock); + } + + mutex_lock_nested(global_lock, SUBCLASS_ADDR_ALLOC); + + /* Initialise the hash functions. */ + result = vid_hash_initialise(); + if (result != IMG_SUCCESS) { + mutex_unlock(global_lock); + return IMG_ERROR_UNEXPECTED_STATE; + } + + /* Initialise the arena functions */ + result = vid_ra_initialise(); + if (result != IMG_SUCCESS) { + mutex_unlock(global_lock); + result = vid_hash_finalise(); + return IMG_ERROR_UNEXPECTED_STATE; + } + + /* We are now initialized */ + global_initialized = TRUE; + result = IMG_SUCCESS; + } else { + mutex_lock_nested(global_lock, SUBCLASS_ADDR_ALLOC); + } + + num_ctx++; + mutex_unlock(global_lock); + + return result; +} + +int addr_deinitialise(void) +{ + return addr_cx_deinitialise(&global_ctx); +} + +int addr_cx_deinitialise(struct addr_context * const context) +{ + struct addr_region *tmp_region = NULL; + unsigned int result = IMG_ERROR_FATAL; + + if (!context) + return IMG_ERROR_INVALID_PARAMETERS; + + if (global_initialized) { + mutex_lock_nested(global_lock, SUBCLASS_ADDR_ALLOC); + + tmp_region = context->regions; + + /* Delete all arena structure */ + if (context->default_region) + result = vid_ra_delete(context->default_region->arena); + + while (tmp_region) { + result = vid_ra_delete(tmp_region->arena); + tmp_region = tmp_region->nxt_region; + } + + if (num_ctx != 0) + num_ctx--; + + result = IMG_SUCCESS; + if (num_ctx == 0) { + /* Free off resources */ + result = vid_hash_finalise(); + result = vid_ra_deinit(); + global_initialized = FALSE; + + mutex_unlock(global_lock); + mutex_destroy(global_lock); + kfree(global_lock); + global_lock = NULL; + } else { + mutex_unlock(global_lock); + } + } + + return result; +} + +int addr_define_mem_region(struct addr_region * const region) +{ + return addr_cx_define_mem_region(&global_ctx, region); +} + +int addr_cx_define_mem_region(struct addr_context * const context, + struct addr_region * const 
region) +{ + struct addr_region *tmp_region = NULL; + unsigned int result = IMG_SUCCESS; + + if (!context || !region) + return IMG_ERROR_INVALID_PARAMETERS; + + mutex_lock_nested(global_lock, SUBCLASS_ADDR_ALLOC); + + tmp_region = context->regions; + + /* Ensure the link to the next is NULL */ + region->nxt_region = NULL; + + /* If this is the default memory region */ + if (!region->name) { + /* Should not previously have been defined */ + if (context->default_region) { + mutex_unlock(global_lock); + return IMG_ERROR_UNEXPECTED_STATE; + } + + context->default_region = region; + context->no_regions++; + + /* + * Create an arena for memory allocation + * name of resource arena for debug + * start of resource + * size of resource + * allocation quantum + * import allocator + * import deallocator + * import handle + */ + result = vid_ra_create("memory", + region->base_addr, + region->size, + 1, + NULL, + NULL, + NULL, + ®ion->arena); + + if (result != IMG_SUCCESS) { + mutex_unlock(global_lock); + return IMG_ERROR_UNEXPECTED_STATE; + } + } else { + /* + * Run down the list of existing named regions + * to check if there is a region with this name + */ + while (tmp_region && + (strcmp(region->name, tmp_region->name) != 0) && + tmp_region->nxt_region) { + tmp_region = tmp_region->nxt_region; + } + + /* If we have items in the list */ + if (tmp_region) { + /* + * Check we didn't stop because the name + * clashes with one already defined. + */ + + if (strcmp(region->name, tmp_region->name) == 0 || + tmp_region->nxt_region) { + mutex_unlock(global_lock); + return IMG_ERROR_UNEXPECTED_STATE; + } + + /* Add to end of list */ + tmp_region->nxt_region = region; + } else { + /* Add to head of list */ + context->regions = region; + } + + context->no_regions++; + + /* + * Create an arena for memory allocation + * name of resource arena for debug + * start of resource + * size of resource + * allocation quantum + * import allocator + * import deallocator + * import handle + */ + result = vid_ra_create(region->name, + region->base_addr, + region->size, + 1, + NULL, + NULL, + NULL, + ®ion->arena); + + if (result != IMG_SUCCESS) { + mutex_unlock(global_lock); + return IMG_ERROR_UNEXPECTED_STATE; + } + } + + mutex_unlock(global_lock); + + /* Check the arean was created OK */ + if (!region->arena) + return IMG_ERROR_UNEXPECTED_STATE; + + return result; +} + +int addr_malloc(const unsigned char * const name, + unsigned long long size, + unsigned long long * const base_adr) +{ + return addr_cx_malloc(&global_ctx, name, size, base_adr); +} + +int addr_cx_malloc(struct addr_context * const context, + const unsigned char * const name, + unsigned long long size, + unsigned long long * const base_adr) +{ + unsigned int result = IMG_ERROR_FATAL; + struct addr_region *tmp_region = NULL; + + if (!context || !base_adr || !name) + return IMG_ERROR_INVALID_PARAMETERS; + + *(base_adr) = (unsigned long long)-1LL; + + mutex_lock_nested(global_lock, SUBCLASS_ADDR_ALLOC); + + tmp_region = context->regions; + + /* + * Run down the list of existing named + * regions to locate this + */ + while (tmp_region && (strcmp(name, tmp_region->name) != 0) && (tmp_region->nxt_region)) + tmp_region = tmp_region->nxt_region; + + /* If there was no match. 
*/ + if (!tmp_region || (strcmp(name, tmp_region->name) != 0)) { + /* Use the default */ + if (!context->default_region) { + mutex_unlock(global_lock); + return IMG_ERROR_UNEXPECTED_STATE; + } + + tmp_region = context->default_region; + } + + if (!tmp_region) { + mutex_unlock(global_lock); + return IMG_ERROR_UNEXPECTED_STATE; + } + + /* Allocate size + guard band */ + result = vid_ra_alloc(tmp_region->arena, + size + tmp_region->guard_band, + NULL, + NULL, + SEQUENTIAL_ALLOCATION, + 1, + base_adr); + if (result != IMG_SUCCESS) { + mutex_unlock(global_lock); + return IMG_ERROR_OUT_OF_MEMORY; + } + + mutex_unlock(global_lock); + + return result; +} + +int addr_cx_malloc_res(struct addr_context * const context, + const unsigned char * const name, + unsigned long long size, + unsigned long long * const base_adr) +{ + unsigned int result = IMG_ERROR_FATAL; + struct addr_region *tmp_region = NULL; + + if (!context || !base_adr || !name) + return IMG_ERROR_INVALID_PARAMETERS; + + mutex_lock_nested(global_lock, SUBCLASS_ADDR_ALLOC); + + tmp_region = context->regions; + /* If the allocation is for the default region */ + /* + * Run down the list of existing named + * regions to locate this + */ + while (tmp_region && (strcmp(name, tmp_region->name) != 0) && (tmp_region->nxt_region)) + tmp_region = tmp_region->nxt_region; + + /* If there was no match. */ + if (!tmp_region || (strcmp(name, tmp_region->name) != 0)) { + /* Use the default */ + if (!context->default_region) { + mutex_unlock(global_lock); + return IMG_ERROR_UNEXPECTED_STATE; + } + tmp_region = context->default_region; + } + if (!tmp_region) { + mutex_unlock(global_lock); + return IMG_ERROR_UNEXPECTED_STATE; + } + /* Allocate size + guard band */ + result = vid_ra_alloc(tmp_region->arena, size + tmp_region->guard_band, + NULL, NULL, SEQUENTIAL_ALLOCATION, 1, base_adr); + if (result != IMG_SUCCESS) { + mutex_unlock(global_lock); + return IMG_ERROR_OUT_OF_MEMORY; + } + mutex_unlock(global_lock); + + return result; +} + +int addr_cx_malloc_align_res(struct addr_context * const context, + const unsigned char * const name, + unsigned long long size, + unsigned long long alignment, + unsigned long long * const base_adr) +{ + unsigned int result; + struct addr_region *tmp_region = NULL; + + if (!context || !base_adr || !name) + return IMG_ERROR_INVALID_PARAMETERS; + + mutex_lock_nested(global_lock, SUBCLASS_ADDR_ALLOC); + + tmp_region = context->regions; + + /* + * Run down the list of existing named + * regions to locate this + */ + while (tmp_region && + (strcmp(name, tmp_region->name) != 0) && + (tmp_region->nxt_region)) { + tmp_region = tmp_region->nxt_region; + } + /* If there was no match. 
+	 */
+	if (!tmp_region ||
+	    (strcmp(name, tmp_region->name) != 0)) {
+		/* Use the default */
+		if (!context->default_region) {
+			mutex_unlock(global_lock);
+			return IMG_ERROR_UNEXPECTED_STATE;
+		}
+
+		tmp_region = context->default_region;
+	}
+
+	if (!tmp_region) {
+		mutex_unlock(global_lock);
+		return IMG_ERROR_UNEXPECTED_STATE;
+	}
+	/* Allocate size + guard band */
+	result = vid_ra_alloc(tmp_region->arena,
+			      size + tmp_region->guard_band,
+			      NULL,
+			      NULL,
+			      SEQUENTIAL_ALLOCATION,
+			      alignment,
+			      base_adr);
+	if (result != IMG_SUCCESS) {
+		mutex_unlock(global_lock);
+		return IMG_ERROR_OUT_OF_MEMORY;
+	}
+
+	mutex_unlock(global_lock);
+
+	return result;
+}
+
+int addr_free(const unsigned char * const name, unsigned long long addr)
+{
+	return addr_cx_free(&global_ctx, name, addr);
+}
+
+int addr_cx_free(struct addr_context * const context,
+		 const unsigned char * const name,
+		 unsigned long long addr)
+{
+	struct addr_region *tmp_region;
+	unsigned int result;
+
+	if (!context)
+		return IMG_ERROR_INVALID_PARAMETERS;
+
+	tmp_region = context->regions;
+
+	mutex_lock_nested(global_lock, SUBCLASS_ADDR_ALLOC);
+
+	/* If the allocation is for the default region */
+	if (!name) {
+		if (!context->default_region) {
+			result = IMG_ERROR_INVALID_PARAMETERS;
+			goto error;
+		}
+		tmp_region = context->default_region;
+	} else {
+		/*
+		 * Run down the list of existing named
+		 * regions to locate this
+		 */
+		while (tmp_region &&
+		       (strcmp(name, tmp_region->name) != 0) &&
+		       tmp_region->nxt_region) {
+			tmp_region = tmp_region->nxt_region;
+		}
+
+		/* If there was no match */
+		if (!tmp_region || (strcmp(name, tmp_region->name) != 0)) {
+			/* Use the default */
+			if (!context->default_region) {
+				result = IMG_ERROR_INVALID_PARAMETERS;
+				goto error;
+			}
+			tmp_region = context->default_region;
+		}
+	}
+
+	/* Free the address */
+	result = vid_ra_free(tmp_region->arena, addr);
+
+error:
+	mutex_unlock(global_lock);
+	return result;
+}
diff --git a/drivers/staging/media/vxd/common/addr_alloc.h b/drivers/staging/media/vxd/common/addr_alloc.h
new file mode 100644
index 000000000000..387418b124e4
--- /dev/null
+++ b/drivers/staging/media/vxd/common/addr_alloc.h
@@ -0,0 +1,238 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Address allocation management API.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Lakshmi Sankar
+ *
+ * Re-written for upstream
+ *	Sidraya Jayagond
+ */
+#ifndef __ADDR_ALLOC_H__
+#define __ADDR_ALLOC_H__
+
+#include
+#include "ra.h"
+
+/* Defines whether sequential or random allocation is used */
+enum {
+	SEQUENTIAL_ALLOCATION,
+	RANDOM_ALLOCATION,
+	RANDOM_FORCE32BITS = 0x7FFFFFFFU
+};
+
+/**
+ * struct addr_region - Memory region structure
+ * @name: A pointer to a string containing the name of the region.
+ *        NULL for the default memory region.
+ * @base_addr: The base address of the memory region.
+ * @size: The size of the memory region.
+ * @guard_band: The size of any guard band to be used.
+ *              Guard bands can be useful in separating block allocations
+ *              and allow the caller to detect erroneous accesses
+ *              into these areas.
+ * @nxt_region: Used internally by the ADDR API; a pointer to the next
+ *              memory region.
+ * @arena: Used internally by the ADDR API. A pointer to a structure used
+ *         to maintain and perform address allocation.
+ *
+ * This structure contains information about the memory region.
+ */
+struct addr_region {
+	unsigned char *name;
+	unsigned long long base_addr;
+	unsigned long long size;
+	unsigned int guard_band;
+	struct addr_region *nxt_region;
+	void *arena;
+};
+
+/*
+ * This structure contains the context for allocation.
+ * @regions: Pointer to the first region in the list.
+ * @default_region: Pointer to the default region.
+ * @no_regions: Number of regions currently available (including default)
+ */
+struct addr_context {
+	struct addr_region *regions;
+	struct addr_region *default_region;
+	unsigned int no_regions;
+};
+
+/*
+ * @Function addr_initialise
+ * @Description
+ * This function is used to initialise the address allocation sub-system.
+ * NOTE: This function may be called multiple times. The initialisation only
+ * happens the first time it is called.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_initialise(void);
+
+/*
+ * @Function addr_deinitialise
+ * @Description
+ * This function is used to de-initialise the address allocation sub-system.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_deinitialise(void);
+
+/*
+ * @Function addr_define_mem_region
+ * @Description
+ * This function is used to define a memory region.
+ * NOTE: The region structure MUST be defined in static memory as this
+ * is retained and used by the ADDR sub-system.
+ * NOTE: Invalid parameters are trapped by asserts.
+ * @Input region: A pointer to a region structure.
+ * @Return IMG_RESULT : IMG_SUCCESS or an error code.
+ */
+int addr_define_mem_region(struct addr_region * const region);
+
+/*
+ * @Function addr_malloc
+ * @Description
+ * This function is used to allocate space within a memory region.
+ * NOTE: Allocation failures or invalid parameters are trapped by asserts.
+ * @Input name: A pointer to the name of the memory region.
+ *              NULL can be used to allocate space from the
+ *              default memory region.
+ * @Input size: The size (in bytes) of the allocation.
+ * @Output base_adr : The address of the allocated space.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_malloc(const unsigned char *const name,
+		unsigned long long size,
+		unsigned long long *const base_adr);
+
+/*
+ * @Function addr_free
+ * @Description
+ * This function is used to free previously allocated space within
+ * a memory region.
+ * NOTE: Invalid parameters are trapped by asserts.
+ * @Input name: A pointer to the name of the memory region.
+ *              NULL is used to free space from the default memory region.
+ * @Input addr: The address allocated.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_free(const unsigned char * const name, unsigned long long addr);
+
+/*
+ * @Function addr_cx_initialise
+ * @Description
+ * This function is used to initialise the address allocation sub-system with
+ * an external context structure.
+ * NOTE: This function should be called only once for the context.
+ * @Input context : Pointer to context structure.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_cx_initialise(struct addr_context * const context);
+
+/*
+ * @Function addr_cx_deinitialise
+ * @Description
+ * This function is used to de-initialise the address allocation
+ * sub-system with an external context structure.
+ * @Input context : Pointer to context structure.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_cx_deinitialise(struct addr_context * const context);
+
+/*
+ * @Function addr_cx_define_mem_region
+ * @Description
+ * This function is used to define a memory region with an external
+ * context structure.
+ * NOTE: The region structure MUST be defined in static memory as this
+ * is retained and used by the ADDR sub-system.
+ * NOTE: Invalid parameters are trapped by asserts.
+ * @Input context : Pointer to context structure.
+ * @Input region : A pointer to a region structure.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_cx_define_mem_region(struct addr_context *const context,
+			      struct addr_region *const region);
+
+/*
+ * @Function addr_cx_malloc
+ * @Description
+ * This function is used to allocate space within a memory region with
+ * an external context structure.
+ * NOTE: Allocation failures or invalid parameters are trapped by asserts.
+ * @Input context : Pointer to context structure.
+ * @Input name : A pointer to the name of the memory region.
+ *               NULL can be used to allocate space from the
+ *               default memory region.
+ * @Input size : The size (in bytes) of the allocation.
+ * @Output base_adr : The address of the allocated space.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_cx_malloc(struct addr_context * const context,
+		   const unsigned char *const name,
+		   unsigned long long size,
+		   unsigned long long *const base_adr);
+
+/*
+ * @Function addr_cx_malloc_res
+ * @Description
+ * This function is used to allocate space within a memory region with
+ * an external context structure.
+ * NOTE: Allocation failures are returned in IMG_RESULT, however invalid
+ * parameters are trapped by asserts.
+ * @Input context : Pointer to context structure.
+ * @Input name : A pointer to the name of the memory region.
+ *               NULL can be used to allocate space from the
+ *               default memory region.
+ * @Input size : The size (in bytes) of the allocation.
+ * @Output base_adr : Pointer to the address of the allocated space.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_cx_malloc_res(struct addr_context *const context,
+		       const unsigned char *const name,
+		       unsigned long long size,
+		       unsigned long long * const base_adr);
+
+/*
+ * @Function addr_cx_malloc_align_res
+ * @Description
+ * This function is used to allocate aligned space within a memory region
+ * with an external context structure.
+ * NOTE: Allocation failures are returned in IMG_RESULT, however invalid
+ * parameters are trapped by asserts.
+ * @Input context : Pointer to context structure.
+ * @Input name : A pointer to the name of the memory region.
+ *               NULL can be used to allocate space from the
+ *               default memory region.
+ * @Input size : The size (in bytes) of the allocation.
+ * @Input alignment : The required byte alignment (1, 2, 4, 8, 16 etc).
+ * @Output base_adr : Pointer to the address of the allocated space.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int addr_cx_malloc_align_res(struct addr_context *const context,
+			     const unsigned char *const name,
+			     unsigned long long size,
+			     unsigned long long alignment,
+			     unsigned long long *const base_adr);
+
+/*
+ * @Function addr_cx_free
+ * @Description
+ * This function is used to free previously allocated space within a memory
+ * region with an external context structure.
+ * NOTE: Invalid parameters are trapped by asserts.
+ * @Input context : Pointer to context structure.
+ * @Input name : A pointer to the name of the memory region.
+ *               NULL is used to free space from the
+ *               default memory region.
+ * @Input addr : The address allocated.
+ * @Return IMG_SUCCESS or an error code.
+ */ +int addr_cx_free(struct addr_context *const context, + const unsigned char *const name, + unsigned long long addr); + +#endif /* __ADDR_ALLOC_H__ */ diff --git a/drivers/staging/media/vxd/common/hash.c b/drivers/staging/media/vxd/common/hash.c new file mode 100644 index 000000000000..1a03aecc34ef --- /dev/null +++ b/drivers/staging/media/vxd/common/hash.c @@ -0,0 +1,481 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Self scaling hash tables. + * + * Copyright (c) Imagination Technologies Ltd. + * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Lakshmi Sankar + * + * Re-written for upstream + * Sidraya Jayagond + */ + +#include +#include +#include +#include + +#include "hash.h" +#include "img_errors.h" +#include "pool.h" + +/* pool of struct hash objects */ +static struct pool *global_hashpool; + +/* pool of struct bucket objects */ +static struct pool *global_bucketpool; + +static int global_initialized; + +/* Each entry in a hash table is placed into a bucket */ +struct bucket { + struct bucket *next; + unsigned long long key; + unsigned long long value; +}; + +struct hash { + struct bucket **table; + unsigned int size; + unsigned int count; + unsigned int minimum_size; +}; + +/** + * hash_func - Hash function intended for hashing addresses. + * @vale : The key to hash. + * @size : The size of the hash table + */ +static unsigned int hash_func(unsigned long long vale, + unsigned int size) +{ + unsigned int hash = (unsigned int)(vale); + + hash += (hash << 12); + hash ^= (hash >> 22); + hash += (hash << 4); + hash ^= (hash >> 9); + hash += (hash << 10); + hash ^= (hash >> 2); + hash += (hash << 7); + hash ^= (hash >> 12); + hash &= (size - 1); + return hash; +} + +/* + * @Function hash_chain_insert + * @Description + * Hash function intended for hashing addresses. + * @Input bucket : The bucket + * @Input table : The hash table + * @Input size : The size of the hash table + * @Return IMG_SUCCESS or an error code. + */ +static int hash_chain_insert(struct bucket *bucket, + struct bucket **table, + unsigned int size) +{ + unsigned int idx; + unsigned int result = IMG_ERROR_FATAL; + + if (!bucket || !table || !size) { + result = IMG_ERROR_INVALID_PARAMETERS; + return result; + } + + idx = hash_func(bucket->key, size); + + if (idx < size) { + result = IMG_SUCCESS; + bucket->next = table[idx]; + table[idx] = bucket; + } + + return result; +} + +/* + * @Function hash_rehash + * @Description + * Iterate over every entry in an old hash table and rehash into the new table. + * @Input old_table : The old hash table + * @Input old_size : The size of the old hash table + * @Input new_table : The new hash table + * @Input new_sz : The size of the new hash table + * @Return IMG_SUCCESS or an error code. 
+ */ +static int hash_rehash(struct bucket **old_table, + unsigned int old_size, + struct bucket **new_table, + unsigned int new_sz) +{ + unsigned int idx; + unsigned int result = IMG_ERROR_FATAL; + + if (!old_table || !new_table) { + result = IMG_ERROR_INVALID_PARAMETERS; + return result; + } + + for (idx = 0; idx < old_size; idx++) { + struct bucket *bucket; + struct bucket *nex_bucket; + + bucket = old_table[idx]; + while (bucket) { + nex_bucket = bucket->next; + result = hash_chain_insert(bucket, new_table, new_sz); + if (result != IMG_SUCCESS) { + result = IMG_ERROR_UNEXPECTED_STATE; + return result; + } + bucket = nex_bucket; + } + } + result = IMG_SUCCESS; + + return result; +} + +/* + * @Function hash_resize + * @Description + * Attempt to resize a hash table, failure to allocate a new larger hash table + * is not considered a hard failure. We simply continue and allow the table to + * fill up, the effect is to allow hash chains to become longer. + * @Input hash_arg : Pointer to the hash table + * @Input new_sz : The size of the new hash table + * @Return IMG_SUCCESS or an error code. + */ +static int hash_resize(struct hash *hash_arg, + unsigned int new_sz) +{ + unsigned int malloc_sz = 0; + unsigned int result = IMG_ERROR_FATAL; + unsigned int idx; + + if (!hash_arg) { + result = IMG_ERROR_INVALID_PARAMETERS; + return result; + } + + if (new_sz != hash_arg->size) { + struct bucket **new_bkt_table; + + malloc_sz = (sizeof(struct bucket *) * new_sz); + new_bkt_table = kmalloc(malloc_sz, GFP_KERNEL); + + if (!new_bkt_table) { + result = IMG_ERROR_MALLOC_FAILED; + return result; + } + + for (idx = 0; idx < new_sz; idx++) + new_bkt_table[idx] = NULL; + + result = hash_rehash(hash_arg->table, + hash_arg->size, + new_bkt_table, + new_sz); + + if (result != IMG_SUCCESS) { + kfree(new_bkt_table); + new_bkt_table = NULL; + result = IMG_ERROR_UNEXPECTED_STATE; + return result; + } + + kfree(hash_arg->table); + hash_arg->table = new_bkt_table; + hash_arg->size = new_sz; + } + result = IMG_SUCCESS; + + return result; +} + +static unsigned int private_max(unsigned int a, unsigned int b) +{ + unsigned int ret = (a > b) ? a : b; + return ret; +} + +/* + * @Function vid_hash_initialise + * @Description + * To initialise the hash module. + * @Input None + * @Return IMG_SUCCESS or an error code. + */ +int vid_hash_initialise(void) +{ + unsigned int result = IMG_ERROR_ALREADY_COMPLETE; + + if (!global_initialized) { + if (global_hashpool || global_bucketpool) { + result = IMG_ERROR_UNEXPECTED_STATE; + return result; + } + + result = pool_create("img-hash", + sizeof(struct hash), + &global_hashpool); + + if (result != IMG_SUCCESS) { + result = IMG_ERROR_UNEXPECTED_STATE; + return result; + } + + result = pool_create("img-sBucket", + sizeof(struct bucket), + &global_bucketpool); + if (result != IMG_SUCCESS) { + if (global_bucketpool) { + result = pool_delete(global_bucketpool); + global_bucketpool = NULL; + } + result = IMG_ERROR_UNEXPECTED_STATE; + return result; + } + global_initialized = true; + result = IMG_SUCCESS; + } + return result; +} + +/* + * @Function vid_hash_finalise + * @Description + * To finalise the hash module. All allocated hash tables should + * be deleted before calling this function. + * @Input None + * @Return IMG_SUCCESS or an error code. 
+ */
+int vid_hash_finalise(void)
+{
+	unsigned int result = IMG_ERROR_FATAL;
+
+	if (global_initialized) {
+		if (global_hashpool) {
+			result = pool_delete(global_hashpool);
+			if (result != IMG_SUCCESS)
+				return result;
+
+			global_hashpool = NULL;
+		}
+
+		if (global_bucketpool) {
+			result = pool_delete(global_bucketpool);
+			if (result != IMG_SUCCESS)
+				return result;
+
+			global_bucketpool = NULL;
+		}
+		global_initialized = false;
+		result = IMG_SUCCESS;
+	}
+
+	return result;
+}
+
+/*
+ * @Function vid_hash_create
+ * @Description
+ * Create a self scaling hash table.
+ * @Input initial_size : Initial and minimum size of the hash table.
+ * @Output hash_arg : Will contain the hash table handle or NULL.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_create(unsigned int initial_size,
+		    struct hash ** const hash_arg)
+{
+	unsigned int idx;
+	unsigned int tbl_sz = 0;
+	unsigned int result = IMG_ERROR_FATAL;
+	struct hash *local_hash = NULL;
+
+	if (!hash_arg) {
+		result = IMG_ERROR_INVALID_PARAMETERS;
+		return result;
+	}
+
+	if (global_initialized) {
+		pool_alloc(global_hashpool, ((void **)&local_hash));
+		if (!local_hash) {
+			result = IMG_ERROR_UNEXPECTED_STATE;
+			*hash_arg = NULL;
+			return result;
+		}
+
+		local_hash->count = 0;
+		local_hash->size = initial_size;
+		local_hash->minimum_size = initial_size;
+
+		tbl_sz = (sizeof(struct bucket *) * local_hash->size);
+		local_hash->table = kmalloc(tbl_sz, GFP_KERNEL);
+		if (!local_hash->table) {
+			result = pool_free(global_hashpool, local_hash);
+			if (result != IMG_SUCCESS)
+				result = IMG_ERROR_UNEXPECTED_STATE;
+			result |= IMG_ERROR_MALLOC_FAILED;
+			*hash_arg = NULL;
+			return result;
+		}
+
+		for (idx = 0; idx < local_hash->size; idx++)
+			local_hash->table[idx] = NULL;
+
+		*hash_arg = local_hash;
+		result = IMG_SUCCESS;
+	}
+	return result;
+}
+
+/*
+ * @Function vid_hash_delete
+ * @Description
+ * To delete a hash table, all entries in the table should be
+ * removed before calling this function.
+ * @Input hash_arg : Hash table pointer
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_delete(struct hash * const hash_arg)
+{
+	unsigned int result = IMG_ERROR_FATAL;
+
+	if (!hash_arg) {
+		result = IMG_ERROR_INVALID_PARAMETERS;
+		return result;
+	}
+
+	if (global_initialized) {
+		if (hash_arg->count != 0) {
+			result = IMG_ERROR_UNEXPECTED_STATE;
+			return result;
+		}
+
+		kfree(hash_arg->table);
+		hash_arg->table = NULL;
+
+		result = pool_free(global_hashpool, hash_arg);
+		if (result != IMG_SUCCESS) {
+			result = IMG_ERROR_UNEXPECTED_STATE;
+			return result;
+		}
+	}
+	return result;
+}
+
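+/*
+ * Resizing policy: vid_hash_insert() doubles the table when
+ * 2 * count > size, and vid_hash_remove() halves it when
+ * size > 4 * count, never shrinking below minimum_size. For example,
+ * a table created with initial_size 16 grows to 32 on the ninth
+ * insertion (2 * 9 > 16).
+ */
+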
+/*
+ * @Function vid_hash_insert
+ * @Description
+ * To insert a key value pair into a hash table.
+ * @Input hash_arg : Hash table pointer
+ * @Input key : Key value
+ * @Input value : The value associated with the key.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_insert(struct hash * const hash_arg,
+		    unsigned long long key,
+		    unsigned long long value)
+{
+	struct bucket *ps_bucket = NULL;
+	unsigned int result = IMG_ERROR_FATAL;
+
+	if (!hash_arg) {
+		result = IMG_ERROR_INVALID_PARAMETERS;
+		return result;
+	}
+
+	if (global_initialized) {
+		result = pool_alloc(global_bucketpool, ((void **)&ps_bucket));
+		if (result != IMG_SUCCESS || !ps_bucket) {
+			result = IMG_ERROR_UNEXPECTED_STATE;
+			return result;
+		}
+		ps_bucket->next = NULL;
+		ps_bucket->key = key;
+		ps_bucket->value = value;
+
+		result = hash_chain_insert(ps_bucket,
+					   hash_arg->table,
+					   hash_arg->size);
+
+		if (result != IMG_SUCCESS) {
+			pool_free(global_bucketpool, ((void **)&ps_bucket));
+			result = IMG_ERROR_UNEXPECTED_STATE;
+			return result;
+		}
+
+		hash_arg->count++;
+
+		/* check if we need to think about re-balancing */
+		if ((hash_arg->count << 1) > hash_arg->size) {
+			result = hash_resize(hash_arg, (hash_arg->size << 1));
+			if (result != IMG_SUCCESS) {
+				result = IMG_ERROR_UNEXPECTED_STATE;
+				return result;
+			}
+		}
+		result = IMG_SUCCESS;
+	}
+	return result;
+}
+
+/*
+ * @Function vid_hash_remove
+ * @Description
+ * To remove a key value pair from a hash table.
+ * @Input hash_arg : Hash table pointer
+ * @Input key : Key value
+ * @Output ret_result : 0 if the key is missing or the value
+ *                      associated with the key.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_remove(struct hash * const hash_arg,
+		    unsigned long long key,
+		    unsigned long long * const ret_result)
+{
+	unsigned int idx;
+	unsigned int tmp1 = 0;
+	unsigned int tmp2 = 0;
+	unsigned int result = IMG_ERROR_FATAL;
+	struct bucket **bucket = NULL;
+
+	if (!hash_arg) {
+		result = IMG_ERROR_INVALID_PARAMETERS;
+		return result;
+	}
+
+	idx = hash_func(key, hash_arg->size);
+
+	for (bucket = &hash_arg->table[idx]; (*bucket) != NULL;
+	     bucket = &((*bucket)->next)) {
+		if ((*bucket)->key == key) {
+			struct bucket *ps_bucket = (*bucket);
+			unsigned long long value = ps_bucket->value;
+
+			*bucket = ps_bucket->next;
+			result = pool_free(global_bucketpool, ps_bucket);
+
+			hash_arg->count--;
+
+			/* check if we need to think about re-balancing */
+			if (hash_arg->size > (hash_arg->count << 2) &&
+			    hash_arg->size > hash_arg->minimum_size) {
+				tmp1 = (hash_arg->size >> 1);
+				tmp2 = hash_arg->minimum_size;
+				result = hash_resize(hash_arg,
+						     private_max(tmp1, tmp2));
+			}
+			*ret_result = value;
+			result = IMG_SUCCESS;
+			break;
+		}
+	}
+	return result;
+}
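A short sketch of the hash API in use. It assumes vid_hash_initialise()
has already run (addr_initialise() does this), and that initial_size is a
power of two, since hash_func() masks with (size - 1); the key and value
numbers below are illustrative only:

  static int hash_example(void)
  {
          struct hash *map;
          unsigned long long stored;
          int ret;

          ret = vid_hash_create(16, &map);
          if (ret != IMG_SUCCESS)
                  return ret;

          /* e.g. map a device virtual address to a buffer handle */
          ret = vid_hash_insert(map, 0x40000000ULL, 0x1234ULL);
          if (ret != IMG_SUCCESS)
                  goto out;

          /* the stored value comes back through ret_result */
          ret = vid_hash_remove(map, 0x40000000ULL, &stored);

  out:
          vid_hash_delete(map);   /* the table must be empty by now */
          return ret;
  }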
diff --git a/drivers/staging/media/vxd/common/hash.h b/drivers/staging/media/vxd/common/hash.h
new file mode 100644
index 000000000000..91034d1ba441
--- /dev/null
+++ b/drivers/staging/media/vxd/common/hash.h
@@ -0,0 +1,86 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Self scaling hash tables.
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Lakshmi Sankar
+ *
+ * Re-written for upstream
+ *	Sidraya Jayagond
+ */
+#ifndef _HASH_H_
+#define _HASH_H_
+
+#include
+struct hash;
+
+/*
+ * @Function vid_hash_initialise
+ * @Description
+ * To initialise the hash module.
+ * @Input None
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_initialise(void);
+
+/*
+ * @Function vid_hash_finalise
+ * @Description
+ * To finalise the hash module. All allocated hash tables should
+ * be deleted before calling this function.
+ * @Input None
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_finalise(void);
+
+/*
+ * @Function vid_hash_create
+ * @Description
+ * Create a self scaling hash table.
+ * @Input initial_size : Initial and minimum size of the hash table.
+ * @Output hash_hndl : Hash table handle or NULL.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_create(unsigned int initial_size,
+		    struct hash ** const hash_hndl);
+
+/*
+ * @Function vid_hash_delete
+ * @Description
+ * To delete a hash table, all entries in the table should be
+ * removed before calling this function.
+ * @Input ps_hash : Hash table pointer
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_delete(struct hash * const ps_hash);
+
+/*
+ * @Function vid_hash_insert
+ * @Description
+ * To insert a key value pair into a hash table.
+ * @Input ps_hash : Hash table pointer
+ * @Input key : Key value
+ * @Input value : The value associated with the key.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_insert(struct hash * const ps_hash,
+		    unsigned long long key,
+		    unsigned long long value);
+
+/*
+ * @Function vid_hash_remove
+ * @Description
+ * To remove a key value pair from a hash table.
+ * @Input ps_hash : Hash table pointer
+ * @Input key : Key value
+ * @Output result : 0 if the key is missing or the value
+ *                  associated with the key.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int vid_hash_remove(struct hash * const ps_hash,
+		    unsigned long long key,
+		    unsigned long long * const result);
+
+#endif /* _HASH_H_ */
diff --git a/drivers/staging/media/vxd/common/pool.c b/drivers/staging/media/vxd/common/pool.c
new file mode 100644
index 000000000000..c0cb1e465c50
--- /dev/null
+++ b/drivers/staging/media/vxd/common/pool.c
@@ -0,0 +1,228 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Object Pool Memory Allocator
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Lakshmi Sankar
+ *
+ * Re-written for upstream
+ *	Sidraya Jayagond
+ */
+
+#include
+#include
+#include
+#include
+
+#include "img_errors.h"
+#include "pool.h"
+
+#define BUFF_MAX_SIZE 4096
+#define BUFF_MAX_GROW 32
+
+/* 64 bits */
+#define ALIGN_SIZE (sizeof(long long) - 1)
+
+struct pool {
+	unsigned char *name;
+	unsigned int size;
+	unsigned int grow;
+	struct buffer *buffers;
+	struct object *objects;
+};
+
+struct buffer {
+	struct buffer *next;
+};
+
+struct object {
+	struct object *next;
+};
+
+static inline unsigned char *strdup_cust(const unsigned char *str)
+{
+	unsigned char *r = kmalloc(strlen(str) + 1, GFP_KERNEL);
+
+	if (r)
+		strcpy(r, str);
+	return r;
+}
+
+/*
+ * pool_create - Create an object pool
+ * @name: Name of the object pool for diagnostic purposes
+ * @obj_size: size of each object in the pool in bytes
+ * @pool_hdnl: Will contain NULL or the object pool handle
+ */
+int pool_create(const unsigned char * const name,
+		unsigned int obj_size,
+		struct pool ** const pool_hdnl)
+{
+	struct pool *local_pool = NULL;
+	unsigned int result = IMG_ERROR_FATAL;
+
+	if (!name || !pool_hdnl) {
+		result = IMG_ERROR_INVALID_PARAMETERS;
+		return result;
+	}
+
+	local_pool = kmalloc((sizeof(*local_pool)), GFP_KERNEL);
+	if (!local_pool) {
+		result = IMG_ERROR_MALLOC_FAILED;
+		return result;
+	}
+
+	local_pool->name = strdup_cust((unsigned char *)name);
+	if (!local_pool->name) {
+		kfree(local_pool);
+		return IMG_ERROR_MALLOC_FAILED;
+	}
+	local_pool->size = obj_size;
+	local_pool->buffers = NULL;
+	local_pool->objects = NULL;
+	local_pool->grow =
+		(BUFF_MAX_SIZE - sizeof(struct buffer)) /
+		(obj_size + ALIGN_SIZE);
+
+	if (local_pool->grow == 0)
+		local_pool->grow = 1;
+	else if (local_pool->grow > BUFF_MAX_GROW)
+		local_pool->grow = BUFF_MAX_GROW;
+
+	*pool_hdnl = local_pool;
+	result = IMG_SUCCESS;
+
+	return result;
+}
+
+/*
+ * @Function pool_delete
+ * @Description
+ * Delete an object pool. All objects allocated from the pool must
+ * be freed with pool_free() before deleting the object pool.
+ * @Input pool_arg : Object Pool pointer
+ * @Return IMG_SUCCESS or an error code.
+ */
+int pool_delete(struct pool * const pool_arg)
+{
+	struct buffer *local_buf = NULL;
+	unsigned int result = IMG_ERROR_FATAL;
+
+	if (!pool_arg) {
+		result = IMG_ERROR_INVALID_PARAMETERS;
+		return result;
+	}
+
+	local_buf = pool_arg->buffers;
+	while (local_buf) {
+		local_buf = local_buf->next;
+		kfree(pool_arg->buffers);
+		pool_arg->buffers = local_buf;
+	}
+
+	kfree(pool_arg->name);
+	pool_arg->name = NULL;
+
+	kfree(pool_arg);
+	result = IMG_SUCCESS;
+
+	return result;
+}
+
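+/*
+ * Growth sizing: each refill buffer holds 'grow' objects of
+ * (size + ALIGN_SIZE) bytes after a struct buffer header, where grow was
+ * chosen in pool_create() as (BUFF_MAX_SIZE - sizeof(struct buffer)) /
+ * (size + ALIGN_SIZE), capped at BUFF_MAX_GROW. For example, on a 64-bit
+ * build a pool of 24-byte objects refills with
+ * 8 + 32 * (24 + 7) = 1000 bytes at a time.
+ */
+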
+ */
+int pool_alloc(struct pool * const pool_arg,
+	       void ** const obj_hndl)
+{
+	struct object *local_obj1 = NULL;
+	struct buffer *local_buf = NULL;
+	unsigned int idx = 0;
+	unsigned int sz = 0;
+	unsigned int result = IMG_ERROR_FATAL;
+
+	if (!pool_arg || !obj_hndl) {
+		result = IMG_ERROR_INVALID_PARAMETERS;
+		return result;
+	}
+
+	if (!pool_arg->objects) {
+		/* one buffer header followed by 'grow' aligned objects */
+		sz = (pool_arg->size + ALIGN_SIZE) * pool_arg->grow;
+		sz += sizeof(struct buffer);
+		local_buf = kmalloc(sz, GFP_KERNEL);
+		if (!local_buf) {
+			result = IMG_ERROR_MALLOC_FAILED;
+			return result;
+		}
+
+		local_buf->next = pool_arg->buffers;
+		pool_arg->buffers = local_buf;
+
+		for (idx = 0; idx < pool_arg->grow; idx++) {
+			struct object *local_obj2;
+			unsigned char *temp_ptr = NULL;
+
+			local_obj2 = (struct object *)(((unsigned char *)(local_buf + 1))
+				+ (idx * (pool_arg->size + ALIGN_SIZE)));
+
+			temp_ptr = (unsigned char *)local_obj2;
+			if ((unsigned long)temp_ptr & ALIGN_SIZE) {
+				temp_ptr += ((ALIGN_SIZE + 1)
+					- ((unsigned long)temp_ptr & ALIGN_SIZE));
+				local_obj2 = (struct object *)temp_ptr;
+			}
+
+			local_obj2->next = pool_arg->objects;
+			pool_arg->objects = local_obj2;
+		}
+	}
+
+	if (!pool_arg->objects) {
+		result = IMG_ERROR_UNEXPECTED_STATE;
+		return result;
+	}
+
+	local_obj1 = pool_arg->objects;
+	pool_arg->objects = local_obj1->next;
+
+	*obj_hndl = (void *)(local_obj1);
+	result = IMG_SUCCESS;
+
+	return result;
+}
+
+/*
+ * @Function pool_free
+ * @Description
+ * Free an object previously allocated from an object pool.
+ * @Input pool_arg : Object Pool pointer.
+ * @Input obj_hndl : Handle to the object to be freed.
+ * @Return IMG_SUCCESS or an error code.
+ */
+int pool_free(struct pool * const pool_arg,
+	      void * const obj_hndl)
+{
+	struct object *object = NULL;
+	unsigned int result = IMG_ERROR_FATAL;
+
+	if (!pool_arg || !obj_hndl) {
+		result = IMG_ERROR_INVALID_PARAMETERS;
+		return result;
+	}
+
+	object = (struct object *)obj_hndl;
+	object->next = pool_arg->objects;
+	pool_arg->objects = object;
+
+	result = IMG_SUCCESS;
+
+	return result;
+}
diff --git a/drivers/staging/media/vxd/common/pool.h b/drivers/staging/media/vxd/common/pool.h
new file mode 100644
index 000000000000..d22d15a2af54
--- /dev/null
+++ b/drivers/staging/media/vxd/common/pool.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Object Pool Memory Allocator header
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Lakshmi Sankar
+ *
+ * Re-written for upstream
+ *	Sidraya Jayagond
+ */
+#ifndef _pool_h_
+#define _pool_h_
+
+#include <linux/types.h>
+
+struct pool;
+
+/**
+ * pool_create - Create an object pool
+ * @name: Name of object pool for diagnostic purposes
+ * @obj_size: size of each object in the pool in bytes
+ * @pool: Will contain NULL or the object pool handle
+ *
+ * Return IMG_SUCCESS or an error code.
+ */
+int pool_create(const unsigned char * const name,
+		unsigned int obj_size,
+		struct pool ** const pool);
+
+/*
+ * @Function pool_delete
+ * @Description
+ * Delete an object pool. All objects allocated from the pool must
+ * be freed with pool_free() before deleting the object pool.
+ * @Input pool : Object Pool pointer
+ * @Return IMG_SUCCESS or an error code.
+ */
+int pool_delete(struct pool * const pool);
+
+/*
+ * @Function pool_alloc
+ * @Description
+ * Allocate an object from an object pool.
+ * @Input pool : Object Pool + * @Output obj_hdnl : Pointer containing the handle to the + * object created or IMG_NULL + * @Return IMG_SUCCESS or an error code. + */ +int pool_alloc(struct pool * const pool, + void ** const obj_hdnl); + +/* + * @Function pool_free + * @Description + * Free an sObject previously allocated from an sObject pool. + * @Input pool : Object Pool pointer. + * @Output obj_hdnl : Handle to the object to be freed. + * @Return IMG_SUCCESS or an error code. + */ +int pool_free(struct pool * const pool, + void * const obj_hdnl); + +#endif /* _pool_h_ */ diff --git a/drivers/staging/media/vxd/common/ra.c b/drivers/staging/media/vxd/common/ra.c new file mode 100644 index 000000000000..ac07737f351b --- /dev/null +++ b/drivers/staging/media/vxd/common/ra.c @@ -0,0 +1,972 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Implements generic resource allocation. + * + * Copyright (c) Imagination Technologies Ltd. + * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Lakshmi Sankar + * + * Re-written for upstream + * Sidraya Jayagond + */ + +#include +#include +#include +#include + +#include "hash.h" +#include "img_errors.h" +#include "pool.h" +#include "ra.h" + +static unsigned char global_init; + +/* pool of struct arena's */ +static struct pool *global_pool_arena; + +/* pool of struct boundary tag */ +static struct pool *global_pool_bt; + +/** + * ra_request_alloc_fail - ra_request_alloc_fail + * @import_hdnl : Callback handle. + * @requested_size : Requested allocation size. + * @ref : Pointer to user reference data. + * @alloc_flags : Allocation flags. + * @actual_size : Pointer to contain the actual allocated size. + * @base_addr : Allocation base(always 0,it is failing). + * + * Default callback allocator used if no callback is specified, always fails + * to allocate further resources to the arena. + */ +static int ra_request_alloc_fail(void *import_hdnl, + unsigned long long requested_size, + unsigned long long *actual_size, + void **ref, + unsigned int alloc_flags, + unsigned long long *base_addr) +{ + if (base_addr) + *base_addr = 0; + + return IMG_SUCCESS; +} + +/* + * @Function ra_log2 + * @Description + * Calculates the Log2(n) with n being a 64-bit value. + * + * @Input value : Input value. + * @Output None + * @Return result : Log2(ui64Value). + */ + +static unsigned int ra_log2(unsigned long long value) +{ + int res = 0; + + value >>= 1; + while (value > 0) { + value >>= 1; + res++; + } + return res; +} + +/* + * @Function ra_segment_list_insert_after + * @Description Insert a boundary tag into an arena segment list after a + * specified boundary tag. + * @Input arena_arg : Pointer to the input arena. + * @Input bt_here_arg : The boundary tag before which psBTToInsert + * will be added . + * @Input bt_to_insert_arg : The boundary tag to insert. + * @Output None + * @Return None + */ +static void ra_segment_list_insert_after(struct arena *arena_arg, + struct btag *bt_here_arg, + struct btag *bt_to_insert_arg) +{ + bt_to_insert_arg->nxt_seg = bt_here_arg->nxt_seg; + bt_to_insert_arg->prv_seg = bt_here_arg; + + if (!bt_here_arg->nxt_seg) + arena_arg->tail_seg = bt_to_insert_arg; + else + bt_here_arg->nxt_seg->prv_seg = bt_to_insert_arg; + + bt_here_arg->nxt_seg = bt_to_insert_arg; +} + +/* + * @Function ra_segment_list_insert + * @Description + * Insert a boundary tag into an arena segment list at the appropriate point. + * @Input arena_arg : Pointer to the input arena. 
+ * @Input bt_to_insert_arg : The boundary tag to insert.
+ * @Output None
+ * @Return None
+ */
+static void ra_segment_list_insert(struct arena *arena_arg,
+				   struct btag *bt_to_insert_arg)
+{
+	/* insert into the segment chain */
+	if (!arena_arg->head_seg) {
+		arena_arg->head_seg = bt_to_insert_arg;
+		arena_arg->tail_seg = bt_to_insert_arg;
+		bt_to_insert_arg->nxt_seg = NULL;
+		bt_to_insert_arg->prv_seg = NULL;
+	} else {
+		struct btag *bt_scan = arena_arg->head_seg;
+
+		while (bt_scan->nxt_seg &&
+		       bt_to_insert_arg->base >=
+		       bt_scan->nxt_seg->base) {
+			bt_scan = bt_scan->nxt_seg;
+		}
+		ra_segment_list_insert_after(arena_arg,
+					     bt_scan,
+					     bt_to_insert_arg);
+	}
+}
+
+/*
+ * @Function ra_segment_list_remove
+ * @Description
+ * Remove a boundary tag from an arena segment list.
+ * @Input arena_arg : Pointer to the input arena.
+ * @Input bt_to_remove_arg : The boundary tag to remove.
+ * @Output None
+ * @Return None
+ */
+static void ra_segment_list_remove(struct arena *arena_arg,
+				   struct btag *bt_to_remove_arg)
+{
+	if (!bt_to_remove_arg->prv_seg)
+		arena_arg->head_seg = bt_to_remove_arg->nxt_seg;
+	else
+		bt_to_remove_arg->prv_seg->nxt_seg = bt_to_remove_arg->nxt_seg;
+
+	if (!bt_to_remove_arg->nxt_seg)
+		arena_arg->tail_seg = bt_to_remove_arg->prv_seg;
+	else
+		bt_to_remove_arg->nxt_seg->prv_seg = bt_to_remove_arg->prv_seg;
+}
+
+/*
+ * @Function ra_segment_split
+ * @Description
+ * Split a segment into two, maintaining the arena segment list.
+ * The boundary tag should not be in the free table. Neither the original
+ * nor the new neighbour boundary tag will be in the free table.
+ * @Input arena_arg : Pointer to the input arena.
+ * @Input bt_to_split_arg : The boundary tag to split.
+ * @Input size : The required segment size of bt_to_split_arg after
+ *		 the split.
+ * @Output None
+ * @Return btag *: New boundary tag.
+ */
+static struct btag *ra_segment_split(struct arena *arena_arg,
+				     struct btag *bt_to_split_arg,
+				     unsigned long long size)
+{
+	struct btag *local_bt_neighbour = NULL;
+	int res = IMG_ERROR_FATAL;
+
+	res = pool_alloc(global_pool_bt, ((void **)&local_bt_neighbour));
+	if (res != IMG_SUCCESS)
+		return NULL;
+
+	local_bt_neighbour->prv_seg = bt_to_split_arg;
+	local_bt_neighbour->nxt_seg = bt_to_split_arg->nxt_seg;
+	local_bt_neighbour->bt_type = RA_BOUNDARY_TAG_TYPE_FREE;
+	local_bt_neighbour->size = (bt_to_split_arg->size - size);
+	local_bt_neighbour->base = (bt_to_split_arg->base + size);
+	local_bt_neighbour->nxt_free = NULL;
+	local_bt_neighbour->prv_free = NULL;
+	local_bt_neighbour->ref = bt_to_split_arg->ref;
+
+	if (!bt_to_split_arg->nxt_seg)
+		arena_arg->tail_seg = local_bt_neighbour;
+	else
+		bt_to_split_arg->nxt_seg->prv_seg = local_bt_neighbour;
+
+	bt_to_split_arg->nxt_seg = local_bt_neighbour;
+	bt_to_split_arg->size = size;
+
+	return local_bt_neighbour;
+}
+
+/*
+ * @Function ra_free_list_insert
+ * @Description
+ * Insert a boundary tag into an arena free table.
+ * @Input arena_arg : Pointer to the input arena.
+ * @Input bt_arg : The boundary tag to insert into an arena
+ *		   free table.
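+ *		   Free lists are bucketed by ra_log2() of the segment
+ *		   size; e.g. any size in [4096, 8191] lands in bucket 12.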
+ * @Output None + * @Return None + */ +static void ra_free_list_insert(struct arena *arena_arg, + struct btag *bt_arg) +{ + unsigned int index = ra_log2(bt_arg->size); + + bt_arg->bt_type = RA_BOUNDARY_TAG_TYPE_FREE; + if (index < FREE_TABLE_LIMIT) + bt_arg->nxt_free = arena_arg->head_free[index]; + else + bt_arg->nxt_free = NULL; + + bt_arg->prv_free = NULL; + + if (index < FREE_TABLE_LIMIT) { + if (arena_arg->head_free[index]) + arena_arg->head_free[index]->prv_free = bt_arg; + } + + if (index < FREE_TABLE_LIMIT) + arena_arg->head_free[index] = bt_arg; +} + +/* + * @Function ra_free_list_remove + * @Description + * Remove a boundary tag from an arena free table. + * @Input arena_arg : Pointer to the input arena. + * @Input bt_arg : The boundary tag to remove from + * an arena free table. + * @Output None + * @Return None + */ +static void ra_free_list_remove(struct arena *arena_arg, + struct btag *bt_arg) +{ + unsigned int index = ra_log2(bt_arg->size); + + if (bt_arg->nxt_free) + bt_arg->nxt_free->prv_free = bt_arg->prv_free; + + if (!bt_arg->prv_free && index < FREE_TABLE_LIMIT) + arena_arg->head_free[index] = bt_arg->nxt_free; + else if (bt_arg->prv_free) + bt_arg->prv_free->nxt_free = bt_arg->nxt_free; +} + +/* + * @Function ra_build_span_marker + * @Description + * Construct a span marker boundary tag. + * @Input base : The base of the boundary tag. + * @Output None + * @Return btag * : New span marker boundary tag + */ +static struct btag *ra_build_span_marker(unsigned long long base) +{ + struct btag *local_bt = NULL; + int res = IMG_ERROR_FATAL; + + res = pool_alloc(global_pool_bt, ((void **)&local_bt)); + if (res != IMG_SUCCESS) + return NULL; + + local_bt->bt_type = RA_BOUNDARY_TAG_TYPE_SPAN; + local_bt->base = base; + local_bt->size = 0; + local_bt->nxt_seg = NULL; + local_bt->prv_seg = NULL; + local_bt->nxt_free = NULL; + local_bt->prv_free = NULL; + local_bt->ref = NULL; + + return local_bt; +} + +/* + * @Function ra_build_bt + * @Description + * Construct a boundary tag for a free segment. + * @Input ui64Base : The base of the resource segment. + * @Input ui64Size : The extent of the resource segment. + * @Output None + * @Return btag * : New boundary tag + */ +static struct btag *ra_build_bt(unsigned long long base, unsigned long long size) +{ + struct btag *local_bt = NULL; + int res = IMG_ERROR_FATAL; + + res = pool_alloc(global_pool_bt, ((void **)&local_bt)); + + if (res != IMG_SUCCESS) + return local_bt; + + local_bt->bt_type = RA_BOUNDARY_TAG_TYPE_FREE; + local_bt->base = base; + local_bt->size = size; + local_bt->nxt_seg = NULL; + local_bt->prv_seg = NULL; + local_bt->nxt_free = NULL; + local_bt->prv_free = NULL; + local_bt->ref = NULL; + + return local_bt; +} + +/* + * @Function ra_insert_resource + * @Description + * Add a free resource segment to an arena. + * @Input base : The base of the resource segment. + * @Input size : The size of the resource segment. + * @Output None + * @Return IMG_SUCCESS or an error code. 
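+ *	   (max_idx becomes the ceiling of log2(size): e.g. for size 0x5000,
+ *	   ra_log2() returns 14 and, since 1ULL << 14 < 0x5000, max_idx is
+ *	   bumped to 15.)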
+ */ +static int ra_insert_resource(struct arena *arena_arg, + unsigned long long base, + unsigned long long size) +{ + struct btag *local_bt = NULL; + + local_bt = ra_build_bt(base, size); + if (!local_bt) + return IMG_ERROR_UNEXPECTED_STATE; + + ra_segment_list_insert(arena_arg, local_bt); + ra_free_list_insert(arena_arg, local_bt); + arena_arg->max_idx = ra_log2(size); + if (1ULL << arena_arg->max_idx < size) + arena_arg->max_idx++; + + return IMG_SUCCESS; +} + +/* + * @Function ra_insert_resource_span + * @Description + * Add a free resource span to an arena, complete with span markers. + * @Input arena_arg : Pointer to the input arena. + * @Input base : The base of the resource segment. + * @Input size : The size of the resource segment. + * @Output None + * @Return btag * : The boundary tag representing + * the free resource segment. + */ +static struct btag *ra_insert_resource_span(struct arena *arena_arg, + unsigned long long base, + unsigned long long size) +{ + struct btag *local_bt = NULL; + struct btag *local_bt_span_start = NULL; + struct btag *local_bt_span_end = NULL; + + local_bt_span_start = ra_build_span_marker(base); + if (!local_bt_span_start) + return NULL; + + local_bt_span_end = ra_build_span_marker(base + size); + if (!local_bt_span_end) { + pool_free(global_pool_bt, local_bt_span_start); + return NULL; + } + + local_bt = ra_build_bt(base, size); + if (!local_bt) { + pool_free(global_pool_bt, local_bt_span_end); + pool_free(global_pool_bt, local_bt_span_start); + return NULL; + } + + ra_segment_list_insert(arena_arg, local_bt_span_start); + ra_segment_list_insert_after(arena_arg, + local_bt_span_start, + local_bt); + ra_free_list_insert(arena_arg, local_bt); + ra_segment_list_insert_after(arena_arg, + local_bt, + local_bt_span_end); + + return local_bt; +} + +/* + * @Function ra_free_bt + * @Description + * Free a boundary tag taking care of the segment list and the + * boundary tag free table. + * @Input arena_arg : Pointer to the input arena. + * @Input bt_arg : The boundary tag to free. 
+ * @Output None + * @Return None + */ +static void ra_free_bt(struct arena *arena_arg, + struct btag *bt_arg) +{ + struct btag *bt_neibr; + + /* try and coalesce with left bt_neibr */ + bt_neibr = bt_arg->prv_seg; + if (bt_neibr && + bt_neibr->bt_type == RA_BOUNDARY_TAG_TYPE_FREE && + bt_neibr->base + bt_neibr->size == bt_arg->base) { + ra_free_list_remove(arena_arg, bt_neibr); + ra_segment_list_remove(arena_arg, bt_neibr); + bt_arg->base = bt_neibr->base; + bt_arg->size += bt_neibr->size; + pool_free(global_pool_bt, bt_neibr); + } + + /* try to coalesce with right psBTNeighbour */ + bt_neibr = bt_arg->nxt_seg; + if (bt_neibr && + bt_neibr->bt_type == RA_BOUNDARY_TAG_TYPE_FREE && + bt_arg->base + bt_arg->size == bt_neibr->base) { + ra_free_list_remove(arena_arg, bt_neibr); + ra_segment_list_remove(arena_arg, bt_neibr); + bt_arg->size += bt_neibr->size; + pool_free(global_pool_bt, bt_neibr); + } + + if (bt_arg->nxt_seg && + bt_arg->nxt_seg->bt_type == RA_BOUNDARY_TAG_TYPE_SPAN && + bt_arg->prv_seg && bt_arg->prv_seg->bt_type == + RA_BOUNDARY_TAG_TYPE_SPAN) { + struct btag *ps_bt_nxt = bt_arg->nxt_seg; + struct btag *ps_bt_prev = bt_arg->prv_seg; + + ra_segment_list_remove(arena_arg, ps_bt_nxt); + ra_segment_list_remove(arena_arg, ps_bt_prev); + ra_segment_list_remove(arena_arg, bt_arg); + arena_arg->import_free_fxn(arena_arg->import_hdnl, + bt_arg->base, + bt_arg->ref); + pool_free(global_pool_bt, ps_bt_nxt); + pool_free(global_pool_bt, ps_bt_prev); + pool_free(global_pool_bt, bt_arg); + } else { + ra_free_list_insert(arena_arg, bt_arg); + } +} + +static int ra_check_btag(struct arena *arena_arg, + unsigned long long size_arg, + void **ref, + struct btag *bt_arg, + unsigned long long align_arg, + unsigned long long *base_arg, + unsigned int align_log2) +{ + unsigned long long local_align_base; + int res = IMG_ERROR_FATAL; + + while (bt_arg) { + if (align_arg > 1ULL) + local_align_base = ((bt_arg->base + align_arg - 1) + >> align_log2) << align_log2; + else + local_align_base = bt_arg->base; + + if ((bt_arg->base + bt_arg->size) >= + (local_align_base + size_arg)) { + ra_free_list_remove(arena_arg, bt_arg); + + /* + * with align_arg we might need to discard the front of + * this segment + */ + if (local_align_base > bt_arg->base) { + struct btag *btneighbor; + + btneighbor = ra_segment_split(arena_arg, + bt_arg, + (local_align_base - + bt_arg->base)); + /* + * Partition the buffer, create a new boundary + * tag + */ + if (!btneighbor) + return IMG_ERROR_UNEXPECTED_STATE; + + ra_free_list_insert(arena_arg, bt_arg); + bt_arg = btneighbor; + } + + /* + * The segment might be too big, if so, discard the back + * of the segment + */ + if (bt_arg->size > size_arg) { + struct btag *btneighbor; + + btneighbor = ra_segment_split(arena_arg, + bt_arg, + size_arg); + /* + * Partition the buffer, create a new boundary + * tag + */ + if (!btneighbor) + return IMG_ERROR_UNEXPECTED_STATE; + + ra_free_list_insert(arena_arg, btneighbor); + } + + bt_arg->bt_type = RA_BOUNDARY_TAG_TYPE_LIVE; + + res = vid_hash_insert(arena_arg->hash_tbl, + bt_arg->base, + (unsigned long)bt_arg); + if (res != IMG_SUCCESS) { + ra_free_bt(arena_arg, bt_arg); + *base_arg = 0; + return IMG_ERROR_UNEXPECTED_STATE; + } + + if (ref) + *ref = bt_arg->ref; + + *base_arg = bt_arg->base; + return IMG_SUCCESS; + } + bt_arg = bt_arg->nxt_free; + } + + return res; +} + +/* + * @Function ra_attempt_alloc_aligned + * @Description Attempt to allocate from an arena + * @Input arena_arg: Pointer to the input arena + * @Input size_arg: The 
requested allocation size
+ * @Input ref: The user references associated with the allocated
+ *	       segment
+ * @Input align_arg: Required alignment
+ * @Output base_arg: The base of the allocated resource
+ * @Return IMG_SUCCESS or an error code
+ */
+static int ra_attempt_alloc_aligned(struct arena *arena_arg,
+				    unsigned long long size_arg,
+				    void **ref,
+				    unsigned long long align_arg,
+				    unsigned long long *base_arg)
+{
+	unsigned int index;
+	unsigned int align_log2;
+	int res = IMG_ERROR_FATAL;
+
+	if (!arena_arg || !base_arg)
+		return IMG_ERROR_INVALID_PARAMETERS;
+
+	/*
+	 * Take the log of the alignment to get the number of bits to shift
+	 * left/right for multiply/divide. The assumption made here is that
+	 * the alignment is a power-of-2 value.
+	 */
+	align_log2 = ra_log2(align_arg);
+
+	/*
+	 * Search for a near-fit free boundary tag: start looking at the
+	 * log2 free table for our required size and work up the table.
+	 */
+	index = ra_log2(size_arg);
+
+	/*
+	 * If the size required is exactly 2**n then use the n bucket, because
+	 * we know that every free block in that bucket is at least 2**n,
+	 * otherwise start at the next bucket up.
+	 */
+	if (size_arg > (1ull << index))
+		index++;
+
+	while ((index < FREE_TABLE_LIMIT) && !arena_arg->head_free[index])
+		index++;
+
+	if (index >= FREE_TABLE_LIMIT) {
+		pr_err("requested allocation size doesn't fit in the arena. Increase MMU HEAP Size\n");
+		return IMG_ERROR_OUT_OF_MEMORY;
+	}
+
+	while (index < FREE_TABLE_LIMIT) {
+		if (arena_arg->head_free[index]) {
+			/* we have a cached free boundary tag */
+			struct btag *local_bt =
+				arena_arg->head_free[index];
+
+			res = ra_check_btag(arena_arg,
+					    size_arg,
+					    ref,
+					    local_bt,
+					    align_arg,
+					    base_arg,
+					    align_log2);
+			/* done on success; otherwise try the next bucket up */
+			if (res == IMG_SUCCESS)
+				return res;
+		}
+		index++;
+	}
+
+	return res;
+}
+
+/*
+ * @Function vid_ra_initialise
+ * @Description Initializes the RA module. Must be called before any other
+ *		RA API function
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_initialise(void)
+{
+	int res = IMG_ERROR_FATAL;
+
+	if (!global_init) {
+		res = pool_create("img-arena",
+				  sizeof(struct arena),
+				  &global_pool_arena);
+		if (res != IMG_SUCCESS)
+			return IMG_ERROR_UNEXPECTED_STATE;
+
+		res = pool_create("img-bt",
+				  sizeof(struct btag),
+				  &global_pool_bt);
+		if (res != IMG_SUCCESS) {
+			res = pool_delete(global_pool_arena);
+			global_pool_arena = NULL;
+			return IMG_ERROR_UNEXPECTED_STATE;
+		}
+		global_init = 1;
+		res = IMG_SUCCESS;
+	}
+
+	return res;
+}
+
+/*
+ * @Function vid_ra_deinit
+ * @Description Deinitializes the RA module
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_deinit(void)
+{
+	int res = IMG_ERROR_FATAL;
+
+	if (global_init) {
+		if (global_pool_arena) {
+			res = pool_delete(global_pool_arena);
+			global_pool_arena = NULL;
+		}
+		if (global_pool_bt) {
+			res = pool_delete(global_pool_bt);
+			global_pool_bt = NULL;
+		}
+		global_init = 0;
+		res = IMG_SUCCESS;
+	}
+	return res;
+}
+
+/*
+ * @Function vid_ra_create
+ * @Description Used to create a resource arena.
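+ *		A typical call sequence (an illustrative sketch; the name,
+ *		size and quantum below are invented values):
+ *
+ *			void *arena;
+ *
+ *			vid_ra_initialise();
+ *			vid_ra_create("mmu-arena", 0, 0x100000, 4096,
+ *				      NULL, NULL, NULL, &arena);
+ *			...
+ *			vid_ra_delete(arena);
+ *			vid_ra_deinit();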
+ * @Input name: The name of the arena for diagnostic purposes
+ * @Input base_arg: The base of an initial resource span or 0
+ * @Input size_arg: The size of an initial resource span or 0
+ * @Input quantum: The arena allocation quantum
+ * @Input (*import_alloc_fxn): A resource allocation callback or NULL
+ * @Input (*import_free_fxn): A resource de-allocation callback or NULL
+ * @Input import_hdnl: Handle passed to alloc and free or NULL
+ * @Output arena_hndl: The handle for the arena being created, or NULL
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_create(const unsigned char * const name,
+		  unsigned long long base_arg,
+		  unsigned long long size_arg,
+		  unsigned long quantum,
+		  int (*import_alloc_fxn)(void * const import_hdnl,
+					  unsigned long long req_sz,
+					  unsigned long long * const actl_sz,
+					  void ** const ref,
+					  unsigned int alloc_flags,
+					  unsigned long long * const base_arg),
+		  int (*import_free_fxn)(void * const import_hdnl,
+					 unsigned long long import_base,
+					 void * const import_ref),
+		  void *import_hdnl,
+		  void **arena_hndl)
+{
+	struct arena *local_arena = NULL;
+	unsigned int idx = 0;
+	int res = IMG_ERROR_FATAL;
+
+	if (!arena_hndl)
+		return IMG_ERROR_INVALID_PARAMETERS;
+
+	*(arena_hndl) = NULL;
+
+	if (global_init) {
+		res = pool_alloc(global_pool_arena, ((void **)&local_arena));
+		if (!local_arena || res != IMG_SUCCESS)
+			return IMG_ERROR_UNEXPECTED_STATE;
+
+		local_arena->name = NULL;
+		if (name)
+			local_arena->name = kstrdup((const char *)name,
+						    GFP_KERNEL);
+		if (import_alloc_fxn)
+			local_arena->import_alloc_fxn = import_alloc_fxn;
+		else
+			local_arena->import_alloc_fxn = ra_request_alloc_fail;
+
+		local_arena->import_free_fxn = import_free_fxn;
+		local_arena->import_hdnl = import_hdnl;
+
+		for (idx = 0; idx < FREE_TABLE_LIMIT; idx++)
+			local_arena->head_free[idx] = NULL;
+
+		local_arena->head_seg = NULL;
+		local_arena->tail_seg = NULL;
+		local_arena->quantum = quantum;
+
+		res = vid_hash_create(MINIMUM_HASH_SIZE,
+				      &local_arena->hash_tbl);
+		if (res != IMG_SUCCESS || !local_arena->hash_tbl) {
+			kfree(local_arena->name);
+			local_arena->name = NULL;
+			pool_free(global_pool_arena, local_arena);
+			return IMG_ERROR_UNEXPECTED_STATE;
+		}
+
+		if (size_arg > 0ULL) {
+			/* round the size up to a multiple of the quantum */
+			size_arg = (size_arg + quantum - 1) / quantum * quantum;
+
+			res = ra_insert_resource(local_arena,
+						 base_arg,
+						 size_arg);
+			if (res != IMG_SUCCESS) {
+				vid_hash_delete(local_arena->hash_tbl);
+				kfree(local_arena->name);
+				local_arena->name = NULL;
+				pool_free(global_pool_arena, local_arena);
+				return IMG_ERROR_UNEXPECTED_STATE;
+			}
+		}
+		*(arena_hndl) = local_arena;
+		res = IMG_SUCCESS;
+	}
+
+	return res;
+}
+
+/*
+ * @Function vid_ra_delete
+ * @Description Used to delete a resource arena. All resources allocated
+ *		from the arena must be freed before deleting the arena
+ * @Input arena_hndl: The handle to the arena to delete
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_delete(void * const arena_hndl)
+{
+	int res = IMG_ERROR_FATAL;
+	struct arena *local_arena = NULL;
+	unsigned int idx;
+
+	if (!arena_hndl)
+		return IMG_ERROR_INVALID_PARAMETERS;
+
+	if (global_init) {
+		local_arena = (struct arena *)arena_hndl;
+		kfree(local_arena->name);
+		local_arena->name = NULL;
+		for (idx = 0; idx < FREE_TABLE_LIMIT; idx++)
+			local_arena->head_free[idx] = NULL;
+
+		while (local_arena->head_seg) {
+			struct btag *local_bt = local_arena->head_seg;
+
+			ra_segment_list_remove(local_arena, local_bt);
+			/* return the unlinked tag to its pool */
+			pool_free(global_pool_bt, local_bt);
+		}
+		res = vid_hash_delete(local_arena->hash_tbl);
+		if (res != IMG_SUCCESS)
+			return IMG_ERROR_UNEXPECTED_STATE;
+
+		res = pool_free(global_pool_arena, local_arena);
+		if (res != IMG_SUCCESS)
+			return IMG_ERROR_UNEXPECTED_STATE;
+	}
+
+	return res;
+}
+
+/*
+ * @Function vid_ra_add
+ * @Description Used to add a resource span to an arena. The span must not
+ *		overlap with any span previously added to the arena
+ * @Input arena_hndl: The handle to the arena to add the span to
+ * @Input base_arg: The base of the span
+ * @Input size_arg: The size of the span
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_add(void * const arena_hndl, unsigned long long base_arg, unsigned long long size_arg)
+{
+	int res = IMG_ERROR_FATAL;
+	struct arena *local_arena = NULL;
+
+	if (!arena_hndl)
+		return IMG_ERROR_INVALID_PARAMETERS;
+
+	if (global_init) {
+		local_arena = (struct arena *)arena_hndl;
+		size_arg = (size_arg + local_arena->quantum - 1) /
+			   local_arena->quantum * local_arena->quantum;
+
+		res = ra_insert_resource(local_arena, base_arg, size_arg);
+		if (res != IMG_SUCCESS)
+			return IMG_ERROR_INVALID_PARAMETERS;
+	}
+
+	return res;
+}
+
+/*
+ * @Function vid_ra_alloc
+ * @Description Used to allocate resource from an arena
+ * @Input arena_hndl: The handle to the arena to create the resource
+ * @Input request_size: The requested size of resource segment
+ * @Output actl_sz: The actual size of the allocated segment
+ * @Input ref: The user reference associated with the allocated resource
+ *	       span
+ * @Input alloc_flags: Allocation flags influencing the allocation policy
+ * @Input alignarg: The alignment constraint required for the allocated
+ *		    segment
+ * @Output basearg: The base of the allocated resource
+ * @Return IMG_SUCCESS or an error code
+ *
+ */
+int vid_ra_alloc(void * const arena_hndl,
+		 unsigned long long request_size,
+		 unsigned long long * const actl_sz,
+		 void ** const ref,
+		 unsigned int alloc_flags,
+		 unsigned long long alignarg,
+		 unsigned long long * const basearg)
+{
+	int res = IMG_ERROR_FATAL;
+	struct arena *arn_ctx = NULL;
+	unsigned long long loc_size = request_size;
+
+	if (!arena_hndl)
+		return IMG_ERROR_INVALID_PARAMETERS;
+
+	if (global_init) {
+		arn_ctx = (struct arena *)arena_hndl;
+		loc_size = ((loc_size + arn_ctx->quantum - 1) /
+			    arn_ctx->quantum) * arn_ctx->quantum;
+
+		if (actl_sz)
+			*actl_sz = loc_size;
+
+		/*
+		 * If allocation fails then we might have an import source
+		 * which can provide more resource; otherwise we have to fail
+		 * the allocation to the caller.
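+		 * That is: try the arena free lists first; on failure ask
+		 * the import callback for a fresh span, insert it and retry
+		 * the allocation once.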
+ */ + if (alloc_flags == RA_SEQUENTIAL_ALLOCATION) + res = ra_attempt_alloc_aligned(arn_ctx, + loc_size, + ref, + alignarg, + basearg); + + if (res != IMG_SUCCESS) { + void *import_ref = NULL; + unsigned long long import_base = 0ULL; + unsigned long long locimprt_reqsz = loc_size; + unsigned long long locimprt_actsz = 0ULL; + + res = arn_ctx->import_alloc_fxn(arn_ctx->import_hdnl, + locimprt_reqsz, + &locimprt_actsz, + &import_ref, + alloc_flags, + &import_base); + + if (res == IMG_SUCCESS) { + struct btag *local_bt = + ra_insert_resource_span(arn_ctx, + import_base, + locimprt_actsz); + + /* + * Successfully import more resource, create a + * span to represent it and retry the allocation + * attempt + */ + if (!local_bt) { + /* + * Insufficient resources to insert the + * newly acquired span, so free it back + */ + arn_ctx->import_free_fxn(arn_ctx->import_hdnl, + import_base, + import_ref); + return IMG_ERROR_UNEXPECTED_STATE; + } + local_bt->ref = import_ref; + if (alloc_flags == RA_SEQUENTIAL_ALLOCATION) { + res = ra_attempt_alloc_aligned(arn_ctx, + loc_size, + ref, + alignarg, + basearg); + } + } + } + } + + return res; +} + +/* + * @Function vid_ra_free + * @Description Used to free a resource segment + * @Input arena_hndl: The arena the segment was originally allocated from + * @Input base_arg: The base of the span + * @Return IMG_SUCCESS or an error code + * + */ +int vid_ra_free(void * const arena_hndl, unsigned long long base_arg) +{ + int res = IMG_ERROR_FATAL; + struct arena *local_arena = NULL; + struct btag *local_bt = NULL; + unsigned long uip_res; + + if (!arena_hndl) + return IMG_ERROR_INVALID_PARAMETERS; + + if (global_init) { + local_arena = (struct arena *)arena_hndl; + + res = vid_hash_remove(local_arena->hash_tbl, + base_arg, + &uip_res); + if (res != IMG_SUCCESS) + return res; + local_bt = (struct btag *)uip_res; + + ra_free_bt(local_arena, local_bt); + } + + return res; +} diff --git a/drivers/staging/media/vxd/common/ra.h b/drivers/staging/media/vxd/common/ra.h new file mode 100644 index 000000000000..a4d529d635d7 --- /dev/null +++ b/drivers/staging/media/vxd/common/ra.h @@ -0,0 +1,200 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Implements generic resource allocation. + * + * Copyright (c) Imagination Technologies Ltd. 
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Lakshmi Sankar + * + * Re-written for upstream + * Sidraya Jayagond + */ +#ifndef _RA_H_ +#define _RA_H_ + +#define MINIMUM_HASH_SIZE (64) +#define FREE_TABLE_LIMIT (64) + +/* Defines whether sequential or random allocation is used */ +enum { + RA_SEQUENTIAL_ALLOCATION = 0, + RA_RANDOM_ALLOCATION, + RA_FORCE32BITS = 0x7FFFFFFFU +}; + +/* Defines boundary tag type */ +enum eboundary_tag_type { + RA_BOUNDARY_TAG_TYPE_SPAN = 0, + RA_BOUNDARY_TAG_TYPE_FREE, + RA_BOUNDARY_TAG_TYPE_LIVE, + RA_BOUNDARY_TAG_TYPE_MAX, + RA_BOUNDARY_FORCE32BITS = 0x7FFFFFFFU +}; + +/* + * @Description + * Boundary tags, used to describe a resource segment + * + * @enum0: span markers + * @enum1: free resource segment + * @enum2: allocated resource segment + * @enum3: max + * @base,size: The base resource of this segment and extent of this segment + * @nxt_seg, prv_seg: doubly linked ordered list of all segments + * within the arena + * @nxt_free, prv_free: doubly linked un-ordered list of free segments + * @reference : a user reference associated with this span, user + * references are currently only provided in + * the callback mechanism + */ +struct btag { + unsigned int bt_type; + unsigned long long base; + unsigned long long size; + struct btag *nxt_seg; + struct btag *prv_seg; + struct btag *nxt_free; + struct btag *prv_free; + void *ref; +}; + +/* + * @Description + * resource allocation arena + * + * @name: arena for diagnostics output + * @quantum: allocations within this arena are quantum sized + * @max_idx: index of the last position in the psBTHeadFree table, + * with available free space + * @import_alloc_fxn: import interface, if provided + * @import_free_fxn: import interface, if provided + * @import_hdnl: import interface, if provided + * @head_free: head of list of free boundary tags for indexed by Log2 + * of the boundary tag size. Power-of-two table of free lists + * @head_seg, tail_seg : resource ordered segment list + * @ps_hash : segment address to boundary tag hash table + */ +struct arena { + unsigned char *name; + unsigned long quantum; + unsigned int max_idx; + int (*import_alloc_fxn)(void *import_hdnl, + unsigned long long requested_size, + unsigned long long *actual_size, + void **ref, + unsigned int alloc_flags, + unsigned long long *base_addr); + int (*import_free_fxn)(void *import_hdnl, + unsigned long long base, + void *ref); + void *import_hdnl; + struct btag *head_free[FREE_TABLE_LIMIT]; + struct btag *head_seg; + struct btag *tail_seg; + struct hash *hash_tbl; +}; + +/* + * @Function vid_ra_init + * @Description Initializes the RA module. Must be called before any other + * ra API function + * @Return IMG_SUCCESS or an error code + * + */ +int vid_ra_initialise(void); + +/* + * @Function vid_ra_deinit + * @Description Deinitializes the RA module + * @Return IMG_SUCCESS or an error code + * + */ +int vid_ra_deinit(void); + +/* + * @Function vid_ra_create + * @Description Used to create a resource arena. 
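+ *		If import_alloc_fxn is NULL, a default callback that never
+ *		provides additional resource is installed, so allocations
+ *		are limited to the spans supplied here and via vid_ra_add()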
+ * @Input name: The name of the arena for diagnostic purposes + * @Input base_arg: The base of an initial resource span or 0 + * @Input size_arg: The size of an initial resource span or 0 + * @Input quantum: The arena allocation quantum + * @Input (*import_alloc_fxn): A resource allocation callback or NULL + * @Input (*import_free_fxn): A resource de-allocation callback or NULL + * @Input import_hdnl: Handle passed to alloc and free or NULL + * @Output arena_hndl: The handle for the arene being created, or NULL + * @Return IMG_SUCCESS or an error code + * + */ +int vid_ra_create(const unsigned char * const name, + unsigned long long base_arg, + unsigned long long size_arg, + unsigned long quantum, + int (*import_alloc_fxn)(void * const import_hdnl, + unsigned long long req_sz, + unsigned long long * const actl_sz, + void ** const ref, + unsigned int alloc_flags, + unsigned long long * const base_arg), + int (*import_free_fxn)(void * const import_hdnl, + unsigned long long import_base, + void * const import_ref), + void *import_hdnl, + void **arena_hndl); + +/* + * @Function vid_ra_delete + * @Description Used to delete a resource arena. All resources allocated from + * the arena must be freed before deleting the arena + * @Input arena_hndl: The handle to the arena to delete + * @Return IMG_SUCCESS or an error code + * + */ +int vid_ra_delete(void * const arena_hndl); + +/* + * @Function vid_ra_add + * @Description Used to add a resource span to an arena. The span must not + * overlap with any span previously added to the arena + * @Input base_arg: The base_arg of the span + * @Input size_arg: The size of the span + * @Return IMG_SUCCESS or an error code + * + */ +int vid_ra_add(void * const arena_hndl, unsigned long long base_arg, unsigned long long size_arg); + +/* + * @Function vid_ra_alloc + * @Description Used to allocate resource from an arena + * @Input arena_hndl: The handle to the arena to create the resource + * @Input request_size: The requested size of resource segment + * @Input actl_size: The actualSize of resource segment + * @Input ref: The user reference associated with allocated resource + * span + * @Input alloc_flags: AllocationFlags influencing allocation policy + * @Input align_arg: The alignment constraint required for the allocated + * segment + * @Output base_args: The base of the allocated resource + * @Return IMG_SUCCESS or an error code + * + */ +int vid_ra_alloc(void * const arena_hndl, + unsigned long long request_size, + unsigned long long * const actl_sz, + void ** const ref, + unsigned int alloc_flags, + unsigned long long align_arg, + unsigned long long * const base_arg); + +/* + * @Function vid_ra_free + * @Description Used to free a resource segment + * @Input arena_hndl: The arena the segment was originally allocated from + * @Input base_arg: The base of the span + * @Return IMG_SUCCESS or an error code + * + */ +int vid_ra_free(void * const arena_hndl, unsigned long long base_arg); + +#endif diff --git a/drivers/staging/media/vxd/common/talmmu_api.c b/drivers/staging/media/vxd/common/talmmu_api.c new file mode 100644 index 000000000000..04ddcc33505c --- /dev/null +++ b/drivers/staging/media/vxd/common/talmmu_api.c @@ -0,0 +1,753 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * TAL MMU Extensions. + * + * Copyright (c) Imagination Technologies Ltd. 
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Lakshmi Sankar + * + * Re-written for upstream + * Sidraya Jayagond + */ +#include +#include +#include +#include +#include +#include +#include + +#include "img_errors.h" +#include "lst.h" +#include "talmmu_api.h" + +static int global_init; +static struct lst_t gl_dmtmpl_lst = {0}; +static struct mutex *global_lock; + +static int talmmu_devmem_free(void *mem_hndl) +{ + struct talmmu_memory *mem = mem_hndl; + struct talmmu_devmem_heap *mem_heap; + + if (!mem_hndl) + return IMG_ERROR_INVALID_PARAMETERS; + + mem_heap = mem->devmem_heap; + + if (!mem->ext_dev_virtaddr) + addr_cx_free(&mem_heap->ctx, "", mem->dev_virtoffset); + + mutex_lock_nested(global_lock, SUBCLASS_TALMMU); + + lst_remove(&mem_heap->memory_list, mem); + + mutex_unlock(global_lock); + + kfree(mem); + + return IMG_SUCCESS; +} + +/* + * talmmu_devmem_heap_empty - talmmu_devmem_heap_empty + * @devmem_heap_hndl: device memory heap handle + * + * This function is used for emptying the device memory heap list + */ +int talmmu_devmem_heap_empty(void *devmem_heap_hndl) +{ + struct talmmu_devmem_heap *devmem_heap = devmem_heap_hndl; + + if (!devmem_heap) + return IMG_ERROR_INVALID_PARAMETERS; + + while (!lst_empty(&devmem_heap->memory_list)) + talmmu_devmem_free(lst_first(&devmem_heap->memory_list)); + + addr_cx_deinitialise(&devmem_heap->ctx); + + return IMG_SUCCESS; +} + +/* + * @Function talmmu_devmem_heap_destroy + * + * @Description This function is used for freeing the device memory heap + * + * @Input + * + * @Output + * + * @Return IMG_SUCCESS or an error code + * + */ +static void talmmu_devmem_heap_destroy(void *devmem_heap_hndl) +{ + struct talmmu_devmem_heap *devmem_heap = devmem_heap_hndl; + + talmmu_devmem_heap_empty(devmem_heap_hndl); + kfree(devmem_heap); +} + +/* + * @Function talmmu_init + * + * @Description This function is used to initialize the TALMMU component. + * + * @Input None. + * + * @Return IMG_SUCCESS or an error code + * + */ +int talmmu_init(void) +{ + if (!global_init) { + /* If no mutex associated with this resource */ + if (!global_lock) { + /* Create one */ + global_lock = kzalloc(sizeof(*global_lock), GFP_KERNEL); + if (!global_lock) + return IMG_ERROR_OUT_OF_MEMORY; + + mutex_init(global_lock); + } + + lst_init(&gl_dmtmpl_lst); + global_init = 1; + } + + return IMG_SUCCESS; +} + +/* + * @Function talmmu_deinit + * + * @Description This function is used to de-initialize the TALMMU component. + * + * @Input None. + * + * @Return IMG_SUCCESS or an error code + * + */ +int talmmu_deinit(void) +{ + struct talmmu_dm_tmpl *t; + + if (global_init) { + while (!lst_empty(&gl_dmtmpl_lst)) { + t = (struct talmmu_dm_tmpl *)lst_first(&gl_dmtmpl_lst); + talmmu_devmem_template_destroy((void *)t); + } + mutex_destroy(global_lock); + kfree(global_lock); + global_lock = NULL; + global_init = 0; + } + + return IMG_SUCCESS; +} + +/* + * @Function talmmu_devmem_template_create + * + * @Description This function is used to create a device memory template + * + * @Input devmem_info: A pointer to a talmmu_devmem_info structure. 
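+ *	  (If devmem_info->ptd_alignment is 0 it is defaulted to
+ *	  devmem_info->page_size when the first context is created.)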
+ * + * @Output devmem_template_hndl: A pointer used to return the template + * handle + * + * @Return IMG_SUCCESS or an error code + * + */ +int talmmu_devmem_template_create(struct talmmu_devmem_info *devmem_info, + void **devmem_template_hndl) +{ + struct talmmu_dm_tmpl *devmem_template; + struct talmmu_dm_tmpl *tmp_devmem_template; + + if (!devmem_info) + return IMG_ERROR_INVALID_PARAMETERS; + + devmem_template = kzalloc(sizeof(*devmem_template), GFP_KERNEL); + if (!devmem_template) + return IMG_ERROR_OUT_OF_MEMORY; + + devmem_template->devmem_info = *devmem_info; + + lst_init(&devmem_template->devmem_ctx_list); + + mutex_lock_nested(global_lock, SUBCLASS_TALMMU); + + tmp_devmem_template = lst_first(&gl_dmtmpl_lst); + while (tmp_devmem_template) + tmp_devmem_template = lst_next(tmp_devmem_template); + + devmem_template->page_num_shift = 12; + devmem_template->byte_in_pagemask = 0xFFF; + devmem_template->heap_alignment = 0x400000; + devmem_template->pagetable_entries_perpage = + (devmem_template->devmem_info.page_size / sizeof(unsigned int)); + devmem_template->pagetable_num_shift = 10; + devmem_template->index_in_pagetable_mask = 0x3FF; + devmem_template->pagedir_num_shift = 22; + + lst_add(&gl_dmtmpl_lst, devmem_template); + + mutex_unlock(global_lock); + + *devmem_template_hndl = devmem_template; + + return IMG_SUCCESS; +} + +/* + * @Function talmmu_devmem_template_destroy + * + * @Description This function is used to obtain the template from the list and + * destroy + * + * @Input devmem_tmplt_hndl: Device memory template handle + * + * @Return IMG_SUCCESS or an error code + * + */ +int talmmu_devmem_template_destroy(void *devmem_tmplt_hndl) +{ + struct talmmu_dm_tmpl *dm_tmpl = devmem_tmplt_hndl; + unsigned int i; + + if (!devmem_tmplt_hndl) + return IMG_ERROR_INVALID_PARAMETERS; + + while (!lst_empty(&dm_tmpl->devmem_ctx_list)) + talmmu_devmem_ctx_destroy(lst_first(&dm_tmpl->devmem_ctx_list)); + + for (i = 0; i < dm_tmpl->num_heaps; i++) + talmmu_devmem_heap_destroy(dm_tmpl->devmem_heap[i]); + + mutex_lock_nested(global_lock, SUBCLASS_TALMMU); + + lst_remove(&gl_dmtmpl_lst, dm_tmpl); + + mutex_unlock(global_lock); + + kfree(dm_tmpl); + + return IMG_SUCCESS; +} + +/* + * @Function talmmu_create_heap + * + * @Description This function is used to create a device memory heap + * + * @Input + * + * @Output + * + * @Return IMG_SUCCESS or an error code + * + */ +static int talmmu_create_heap(void *devmem_tmplt_hndl, + struct talmmu_heap_info *heap_info_arg, + unsigned char isfull, + struct talmmu_devmem_heap **devmem_heap_arg) +{ + struct talmmu_dm_tmpl *devmem_template = devmem_tmplt_hndl; + struct talmmu_devmem_heap *devmem_heap; + + /* Allocating memory for device memory heap */ + devmem_heap = kzalloc(sizeof(*devmem_heap), GFP_KERNEL); + if (!devmem_heap) + return IMG_ERROR_OUT_OF_MEMORY; + + /* + * Update the device memory heap structure members + * Update the device memory template + */ + devmem_heap->devmem_template = devmem_template; + /* Update the device memory heap information */ + devmem_heap->heap_info = *heap_info_arg; + + /* Initialize the device memory heap list */ + lst_init(&devmem_heap->memory_list); + + /* If full structure required */ + if (isfull) { + addr_cx_initialise(&devmem_heap->ctx); + devmem_heap->regions.base_addr = 0; + devmem_heap->regions.size = devmem_heap->heap_info.size; + addr_cx_define_mem_region(&devmem_heap->ctx, + &devmem_heap->regions); + } + + *devmem_heap_arg = devmem_heap; + + return IMG_SUCCESS; +} + +/* + * @Function talmmu_devmem_heap_add + 
* + * @Description This function is for creating and adding the heap to the + * device memory template + * + * @Input devmem_tmplt_hndl: device memory template handle + * + * @Input heap_info_arg: pointer to the heap info structure + * + * @Return IMG_SUCCESS or an error code + * + */ +int talmmu_devmem_heap_add(void *devmem_tmplt_hndl, + struct talmmu_heap_info *heap_info_arg) +{ + struct talmmu_dm_tmpl *devmem_template = devmem_tmplt_hndl; + struct talmmu_devmem_heap *devmem_heap; + unsigned int res; + + if (!devmem_tmplt_hndl) + return IMG_ERROR_INVALID_PARAMETERS; + + if (!heap_info_arg) + return IMG_ERROR_INVALID_PARAMETERS; + + res = talmmu_create_heap(devmem_tmplt_hndl, + heap_info_arg, + 1, + &devmem_heap); + if (res != IMG_SUCCESS) + return res; + + devmem_template->devmem_heap[devmem_template->num_heaps] = devmem_heap; + devmem_template->num_heaps++; + + return IMG_SUCCESS; +} + +/* + * @Function talmmu_devmem_ctx_create + * + * @Description This function is used to create a device memory context + * + * @Input devmem_tmplt_hndl: pointer to the device memory template handle + * + * @Input mmu_ctx_id: MMU context ID used with the TAL + * + * @Output devmem_ctx_hndl: pointer to the device memory context handle + * + * @Return IMG_SUCCESS or an error code + * + */ +int talmmu_devmem_ctx_create(void *devmem_tmplt_hndl, + unsigned int mmu_ctx_id, + void **devmem_ctx_hndl) +{ + struct talmmu_dm_tmpl *dm_tmpl = devmem_tmplt_hndl; + struct talmmu_devmem_ctx *dm_ctx; + struct talmmu_devmem_heap *dm_heap; + int i; + unsigned int res = IMG_SUCCESS; + + if (!devmem_tmplt_hndl) + return IMG_ERROR_INVALID_PARAMETERS; + + /* Allocate memory for device memory context */ + dm_ctx = kzalloc((sizeof(struct talmmu_devmem_ctx)), GFP_KERNEL); + if (!dm_ctx) + return IMG_ERROR_OUT_OF_MEMORY; + + /* + * Update the device memory context structure members + * Update the device memory template + */ + dm_ctx->devmem_template = dm_tmpl; + /* Update MMU context ID */ + dm_ctx->mmu_ctx_id = mmu_ctx_id; + + /* Check for PTD Alignment */ + if (dm_tmpl->devmem_info.ptd_alignment == 0) + /* + * Make sure alignment is a multiple of page size. 
+ * Set up PTD alignment to Page Size + */ + dm_tmpl->devmem_info.ptd_alignment = + dm_tmpl->devmem_info.page_size; + + /* Reference or create heaps for this context */ + for (i = 0; i < dm_tmpl->num_heaps; i++) { + dm_heap = dm_tmpl->devmem_heap[i]; + if (!dm_heap) + goto error_heap_create; + + switch (dm_heap->heap_info.heap_type) { + case TALMMU_HEAP_PERCONTEXT: + res = talmmu_create_heap(dm_tmpl, + &dm_heap->heap_info, + 1, + &dm_ctx->devmem_heap[i]); + if (res != IMG_SUCCESS) + goto error_heap_create; + break; + + default: + break; + } + + dm_ctx->num_heaps++; + } + + mutex_lock_nested(global_lock, SUBCLASS_TALMMU); + + /* Add the device memory context to the list */ + lst_add(&dm_tmpl->devmem_ctx_list, dm_ctx); + + dm_tmpl->num_ctxs++; + + mutex_unlock(global_lock); + + *devmem_ctx_hndl = dm_ctx; + + return IMG_SUCCESS; + +error_heap_create: + /* Destroy the device memory heaps which were already created */ + for (i--; i >= 0; i--) { + dm_heap = dm_ctx->devmem_heap[i]; + if (dm_heap->heap_info.heap_type == TALMMU_HEAP_PERCONTEXT) + talmmu_devmem_heap_destroy(dm_heap); + + dm_ctx->num_heaps--; + } + kfree(dm_ctx); + return res; +} + +/* + * @Function talmmu_devmem_ctx_destroy + * + * @Description This function is used to get the device memory context from + * the list and destroy + * + * @Input devmem_ctx_hndl: device memory context handle + * + * @Return IMG_SUCCESS or an error code + * + */ +int talmmu_devmem_ctx_destroy(void *devmem_ctx_hndl) +{ + struct talmmu_devmem_ctx *devmem_ctx = devmem_ctx_hndl; + struct talmmu_dm_tmpl *devmem_template; + struct talmmu_devmem_heap *devmem_heap; + unsigned int i; + + if (!devmem_ctx_hndl) + return IMG_ERROR_INVALID_PARAMETERS; + + devmem_template = devmem_ctx->devmem_template; + + for (i = 0; i < devmem_ctx->num_heaps; i++) { + devmem_heap = devmem_ctx->devmem_heap[i]; + if (!devmem_heap) + return IMG_ERROR_INVALID_PARAMETERS; + + talmmu_devmem_heap_destroy(devmem_heap); + } + + devmem_ctx->pagedir = NULL; + + mutex_lock_nested(global_lock, SUBCLASS_TALMMU); + + lst_remove(&devmem_template->devmem_ctx_list, devmem_ctx); + + devmem_ctx->devmem_template->num_ctxs--; + + mutex_unlock(global_lock); + + kfree(devmem_ctx); + + return IMG_SUCCESS; +} + +/* + * @Function talmmu_get_heap_handle + * + * @Description This function is used to get the device memory heap handle + * + * @Input hid: heap id + * + * @Input devmem_ctx_hndl: device memory context handle + * + * @Output devmem_heap_hndl: pointer to the device memory heap handle + * + * @Return IMG_SUCCESS or an error code + * + */ +int talmmu_get_heap_handle(unsigned int hid, + void *devmem_ctx_hndl, + void **devmem_heap_hndl) +{ + struct talmmu_devmem_ctx *devmem_ctx = devmem_ctx_hndl; + unsigned int i; + + if (!devmem_ctx_hndl) + return IMG_ERROR_INVALID_PARAMETERS; + + for (i = 0; i < devmem_ctx->num_heaps; i++) { + /* + * Checking for requested heap id match and return the device + * memory heap handle + */ + if (devmem_ctx->devmem_heap[i]->heap_info.heap_id == hid) { + *devmem_heap_hndl = devmem_ctx->devmem_heap[i]; + return IMG_SUCCESS; + } + } + + return IMG_ERROR_GENERIC_FAILURE; +} + +/* + * @Function talmmu_devmem_heap_options + * + * @Description This function is used to set additional heap options + * + * @Input devmem_heap_hndl: Handle for heap + * + * @Input heap_opt_id: Heap options ID + * + * @Input heap_options: Heap options + * + * @Return IMG_SUCCESS or an error code + * + */ +void talmmu_devmem_heap_options(void *devmem_heap_hndl, + enum talmmu_heap_option_id heap_opt_id, + 
union talmmu_heap_options heap_options) +{ + struct talmmu_devmem_heap *dm_heap = devmem_heap_hndl; + + switch (heap_opt_id) { + case TALMMU_HEAP_OPT_ADD_GUARD_BAND: + dm_heap->guardband = heap_options.guardband_opt.guardband; + break; + default: + break; + } +} + +/* + * @Function talmmu_devmem_malloc_nonmap + * + * @Description + * + * @Input + * + * @Output + * + * @Return IMG_SUCCESS or an error code + * + */ +static int talmmu_devmem_alloc_nonmap(void *devmem_ctx_hndl, + void *devmem_heap_hndl, + unsigned int size, + unsigned int align, + unsigned int dev_virt_ofset, + unsigned char ext_dev_vaddr, + void **mem_hndl) +{ + struct talmmu_devmem_ctx *dm_ctx = devmem_ctx_hndl; + struct talmmu_dm_tmpl *dm_tmpl; + struct talmmu_devmem_heap *dm_heap = devmem_heap_hndl; + struct talmmu_memory *mem; + unsigned long long ui64_dev_offset = 0; + int res = IMG_SUCCESS; + + if (!dm_ctx) + return IMG_ERROR_INVALID_PARAMETERS; + + if (!devmem_heap_hndl) + return IMG_ERROR_INVALID_PARAMETERS; + + dm_tmpl = dm_ctx->devmem_template; + + /* Allocate memory for memory structure */ + mem = kzalloc((sizeof(struct talmmu_memory)), GFP_KERNEL); + if (!mem) + return IMG_ERROR_OUT_OF_MEMORY; + + mem->devmem_heap = dm_heap; + mem->devmem_ctx = dm_ctx; + mem->ext_dev_virtaddr = ext_dev_vaddr; + + /* We always for to be at least page aligned */ + if (align >= dm_tmpl->devmem_info.page_size) + /* + * alignment is larger than page size - make sure alignment is + * a multiple of page size + */ + mem->alignment = align; + else + /* + * alignment is smaller than page size - make sure page size is + * a multiple of alignment. Now round up alignment to one page + */ + mem->alignment = dm_tmpl->devmem_info.page_size; + + /* Round size up to next multiple of physical pages */ + if ((size % dm_tmpl->devmem_info.page_size) != 0) + mem->size = ((size / dm_tmpl->devmem_info.page_size) + + 1) * dm_tmpl->devmem_info.page_size; + else + mem->size = size; + + /* If the device virtual address was externally defined */ + if (mem->ext_dev_virtaddr) { + res = IMG_ERROR_INVALID_PARAMETERS; + goto free_mem; + } + + res = addr_cx_malloc_align_res(&dm_heap->ctx, "", + (mem->size + dm_heap->guardband), + mem->alignment, + &ui64_dev_offset); + + mem->dev_virtoffset = (unsigned int)ui64_dev_offset; + if (res != IMG_SUCCESS) + /* + * If heap space is unavaliable return NULL, the caller must + * handle this condition + */ + goto free_virt; + + mutex_lock_nested(global_lock, SUBCLASS_TALMMU); + + /* + * Add memory allocation to the list for this heap... + * If the heap is empty... + */ + if (lst_empty(&dm_heap->memory_list)) + /* + * Save flag to indicate whether the device virtual address + * is allocated internally or externally... + */ + dm_heap->ext_dev_virtaddr = mem->ext_dev_virtaddr; + + /* + * Once we have started allocating in one way ensure that we continue + * to do this... 
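+	 * (a heap whose first allocation used an externally supplied device
+	 * virtual address keeps doing so, and vice versa).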
+ */ + lst_add(&dm_heap->memory_list, mem); + + mutex_unlock(global_lock); + + *mem_hndl = mem; + + return IMG_SUCCESS; + +free_virt: + addr_cx_free(&dm_heap->ctx, "", mem->dev_virtoffset); +free_mem: + kfree(mem); + + return res; +} + +/* + * @Function talmmu_devmem_addr_alloc + * + * @Description + * + * @Input + * + * @Output + * + * @Return IMG_SUCCESS or an error code + * + */ +int talmmu_devmem_addr_alloc(void *devmem_ctx_hndl, + void *devmem_heap_hndl, + unsigned int size, + unsigned int align, + void **mem_hndl) +{ + unsigned int res; + void *mem; + + res = talmmu_devmem_alloc_nonmap(devmem_ctx_hndl, + devmem_heap_hndl, + size, + align, + 0, + 0, + &mem); + if (res != IMG_SUCCESS) + return res; + + *mem_hndl = mem; + + return IMG_SUCCESS; +} + +/* + * @Function talmmu_devmem_addr_free + * + * @Description This function is used to free device memory allocated using + * talmmu_devmem_addr_alloc(). + * + * @Input mem_hndl : Handle for the memory object + * + * @Return IMG_SUCCESS or an error code + * + */ +int talmmu_devmem_addr_free(void *mem_hndl) +{ + unsigned int res; + + if (!mem_hndl) + return IMG_ERROR_INVALID_PARAMETERS; + + /* free device memory allocated by calling talmmu_devmem_free() */ + res = talmmu_devmem_free(mem_hndl); + + return res; +} + +/* + * @Function talmmu_get_dev_virt_addr + * + * @Description This function is use to obtain the device (virtual) memory + * address which may be required for as a device virtual address + * in some of the TAL image functions + * + * @Input mem_hndl : Handle for the memory object + * + * @Output dev_virt: A piointer used to return the device virtual address + * + * @Return IMG_SUCCESS or an error code + * + */ +int talmmu_get_dev_virt_addr(void *mem_hndl, + unsigned int *dev_virt) +{ + struct talmmu_memory *mem = mem_hndl; + struct talmmu_devmem_heap *devmem_heap; + + if (!mem_hndl) + return IMG_ERROR_INVALID_PARAMETERS; + + devmem_heap = mem->devmem_heap; + + /* + * Device virtual address is addition of the specific device virtual + * offset and the base device virtual address from the heap information + */ + *dev_virt = (devmem_heap->heap_info.basedev_virtaddr + + mem->dev_virtoffset); + + return IMG_SUCCESS; +} diff --git a/drivers/staging/media/vxd/common/talmmu_api.h b/drivers/staging/media/vxd/common/talmmu_api.h new file mode 100644 index 000000000000..f37f78394d54 --- /dev/null +++ b/drivers/staging/media/vxd/common/talmmu_api.h @@ -0,0 +1,246 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * TAL MMU Extensions. + * + * Copyright (c) Imagination Technologies Ltd. 
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Lakshmi Sankar + * + * Re-written for upstream + * Sidraya Jayagond + */ +#include "addr_alloc.h" +#include "ra.h" +#include "lst.h" + +#ifndef __TALMMU_API_H__ +#define __TALMMU_API_H__ + +#define TALMMU_MAX_DEVICE_HEAPS (32) +#define TALMMU_MAX_TEMPLATES (32) + +/* MMU type */ +enum talmmu_mmu_type { + /* 4kb pages and 32-bit address range */ + TALMMU_MMUTYPE_4K_PAGES_32BIT_ADDR = 0x1, + /* variable size pages and 32-bit address */ + TALMMU_MMUTYPE_VAR_PAGES_32BIT_ADDR, + /* 4kb pages and 36-bit address range */ + TALMMU_MMUTYPE_4K_PAGES_36BIT_ADDR, + /* 4kb pages and 40-bit address range */ + TALMMU_MMUTYPE_4K_PAGES_40BIT_ADDR, + /* variable size pages and 40-bit address range */ + TALMMU_MMUTYPE_VP_40BIT, + TALMMU_MMUTYPE_FORCE32BITS = 0x7FFFFFFFU +}; + +/* Device flags */ +enum talmmu_dev_flags { + TALMMU_DEVFLAGS_NONE = 0x0, + TALMMU_DEVFLAGS_FORCE32BITS = 0x7FFFFFFFU +}; + +/* Heap type */ +enum talmmu_heap_type { + TALMMU_HEAP_SHARED_EXPORTED, + TALMMU_HEAP_PERCONTEXT, + TALMMU_HEAP_FORCE32BITS = 0x7FFFFFFFU +}; + +/* Heap flags */ +enum talmmu_eheapflags { + TALMMU_HEAPFLAGS_NONE = 0x0, + TALMMU_HEAPFLAGS_SET_CACHE_CONSISTENCY = 0x00000001, + TALMMU_HEAPFLAGS_128BYTE_INTERLEAVE = 0x00000002, + TALMMU_HEAPFLAGS_256BYTE_INTERLEAVE = 0x00000004, + TALMMU_HEAPFLAGS_FORCE32BITS = 0x7FFFFFFFU +}; + +/* Contains the device memory information */ +struct talmmu_devmem_info { + /* device id */ + unsigned int device_id; + /* mmu type */ + enum talmmu_mmu_type mmu_type; + /* Device flags - bit flags that can be combined */ + enum talmmu_dev_flags dev_flags; + /* Name of the memory space for page directory allocations */ + unsigned char *pagedir_memspace_name; + /* Name of the memory space for page table allocations */ + unsigned char *pagetable_memspace_name; + /* Page size in bytes */ + unsigned int page_size; + /* PTD alignment, must be multiple of Page size */ + unsigned int ptd_alignment; +}; + +struct talmmu_heap_info { + /* heap id */ + unsigned int heap_id; + /* heap type */ + enum talmmu_heap_type heap_type; + /* heap flags - bit flags that can be combined */ + enum talmmu_eheapflags heap_flags; + /* Name of the memory space for memory allocations */ + unsigned char *memspace_name; + /* Base device virtual address */ + unsigned int basedev_virtaddr; + /* size in bytes */ + unsigned int size; +}; + +/* Device memory template information */ +struct talmmu_dm_tmpl { + /* list */ + struct lst_t list; + /* Copy of device memory info structure */ + struct talmmu_devmem_info devmem_info; + /* Memory space ID for PTD allocations */ + void *ptd_memspace_hndl; + /* Memory space ID for Page Table allocations */ + void *ptentry_memspace_hndl; + /* number of heaps */ + unsigned int num_heaps; + /* Array of heap pointers */ + struct talmmu_devmem_heap *devmem_heap[TALMMU_MAX_DEVICE_HEAPS]; + /* Number of active contexts */ + unsigned int num_ctxs; + /* List of device memory context created from this template */ + struct lst_t devmem_ctx_list; + /* Number of bits to shift right to obtain page number */ + unsigned int page_num_shift; + /* Mask to extract byte-within-page */ + unsigned int byte_in_pagemask; + /* Heap alignment */ + unsigned int heap_alignment; + /* Page table entries/page */ + unsigned int pagetable_entries_perpage; + /* Number of bits to shift right to obtain page table number */ + unsigned int pagetable_num_shift; + /* Mask to extract index-within-page-table */ + unsigned int 
index_in_pagetable_mask; + /* Number of bits to shift right to obtain page dir number */ + unsigned int pagedir_num_shift; +}; + +/* Device memory heap information */ +struct talmmu_devmem_heap { + /* list item */ + struct lst_t list; + /* Copy of the heap info structure */ + struct talmmu_heap_info heap_info; + /* Pointer to the device memory template */ + struct talmmu_dm_tmpl *devmem_template; + /* true if device virtual address offset allocated externally by user */ + unsigned int ext_dev_virtaddr; + /* list of memory allocations */ + struct lst_t memory_list; + /* Memory space ID for memory allocations */ + void *memspace_hndl; + /* Address context structure */ + struct addr_context ctx; + /* Regions structure */ + struct addr_region regions; + /* size of heap guard band */ + unsigned int guardband; +}; + +struct talmmu_devmem_ctx { + /* list item */ + struct lst_t list; + /* Pointer to device template */ + struct talmmu_dm_tmpl *devmem_template; + /* No. of heaps */ + unsigned int num_heaps; + /* Array of heap pointers */ + struct talmmu_devmem_heap *devmem_heap[TALMMU_MAX_DEVICE_HEAPS]; + /* The MMU context id */ + unsigned int mmu_ctx_id; + /* Pointer to the memory that represents Page directory */ + unsigned int *pagedir; +}; + +struct talmmu_memory { + /* list item */ + struct lst_t list; + /* Heap from which memory was allocated */ + struct talmmu_devmem_heap *devmem_heap; + /* Context through which memory was allocated */ + struct talmmu_devmem_ctx *devmem_ctx; + /* size */ + unsigned int size; + /* alignment */ + unsigned int alignment; + /* device virtual offset of allocation */ + unsigned int dev_virtoffset; + /* true if device virtual address offset allocated externally by user */ + unsigned int ext_dev_virtaddr; +}; + +/* This type defines the event types for the TALMMU callbacks */ +enum talmmu_event { + /* Function to flush the cache. */ + TALMMU_EVENT_FLUSH_CACHE, + /*! 
Function to write the page directory address to the device */ + TALMMU_EVENT_WRITE_PAGE_DIRECTORY_REF, + /* Placeholder*/ + TALMMU_NO_OF_EVENTS +}; + +enum talmmu_heap_option_id { + /* Add guard band to all mallocs */ + TALMMU_HEAP_OPT_ADD_GUARD_BAND, + TALMMU_HEAP_OPT_SET_MEM_ATTRIB, + TALMMU_HEAP_OPT_SET_MEM_POOL, + + /* Placeholder */ + TALMMU_NO_OF_OPTIONS, + TALMMU_NO_OF_FORCE32BITS = 0x7FFFFFFFU +}; + +struct talmmu_guardband_options { + unsigned int guardband; +}; + +union talmmu_heap_options { + /* Guardband parameters */ + struct talmmu_guardband_options guardband_opt; +}; + +int talmmu_init(void); +int talmmu_deinit(void); +int talmmu_devmem_template_create(struct talmmu_devmem_info *devmem_info, + void **devmem_template_hndl); +int talmmu_devmem_heap_add(void *devmem_tmplt_hndl, + struct talmmu_heap_info *heap_info_arg); +int talmmu_devmem_template_destroy(void *devmem_tmplt_hndl); +int talmmu_devmem_ctx_create(void *devmem_tmplt_hndl, + unsigned int mmu_ctx_id, + void **devmem_ctx_hndl); +int talmmu_devmem_ctx_destroy(void *devmem_ctx_hndl); +int talmmu_get_heap_handle(unsigned int hid, + void *devmem_ctx_hndl, + void **devmem_heap_hndl); +/** + * talmmu_devmem_heap_empty - talmmu_devmem_heap_empty + * @devmem_heap_hndl: device memory heap handle + * + * This function is used for emptying the device memory heap list + */ + +int talmmu_devmem_heap_empty(void *devmem_heap_hndl); +void talmmu_devmem_heap_options(void *devmem_heap_hndl, + enum talmmu_heap_option_id heap_opt_id, + union talmmu_heap_options heap_options); +int talmmu_devmem_addr_alloc(void *devmem_ctx_hndl, + void *devmem_heap_hndl, + unsigned int size, + unsigned int align, + void **mem_hndl); +int talmmu_devmem_addr_free(void *mem_hndl); +int talmmu_get_dev_virt_addr(void *mem_hndl, + unsigned int *dev_virt); + +#endif /* __TALMMU_API_H__ */ From patchwork Wed Aug 18 14:10:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sidraya Jayagond X-Patchwork-Id: 499231 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5DDC9C43216 for ; Wed, 18 Aug 2021 14:15:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 466A360560 for ; Wed, 18 Aug 2021 14:15:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239519AbhHROQ3 (ORCPT ); Wed, 18 Aug 2021 10:16:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46530 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239658AbhHROOM (ORCPT ); Wed, 18 Aug 2021 10:14:12 -0400 Received: from mail-pl1-x62b.google.com (mail-pl1-x62b.google.com [IPv6:2607:f8b0:4864:20::62b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 35E96C06124C for ; Wed, 18 Aug 2021 07:12:48 -0700 (PDT) Received: by mail-pl1-x62b.google.com with SMTP id d17so1847832plr.12 for ; Wed, 18 Aug 2021 07:12:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pathpartnertech.com; s=google; 
h=mime-version:from:to:cc:subject:date:message-id:in-reply-to :references; bh=lTn1tv+cdEEWthxb7x9eEJtLz7gLrC1ZYvDMpUK9UtU=; b=W7A9o60SXqg+Pm4Zvg+9FQMtg8rSHOBJI2nwglb095hB8O0O3ovYoar36huZ3ZbVvZ TQLyUwxaWhzDXaeI1n1otLRDy6oUnycYug+gRKjZ7iw8S1jiblO5z6p5QvmBnlffSwDI dsjUY/XVbsTIDJ8i+raMaLxYpb8iPt+QSBjsCeOx9skX44a/WLaXS8kSTuCThwkkooUa THYDlWbMfPdEMEqbmyQIOFHtF5vvISKphCjJkR8cDk5QAnNsQ+tOrB7YztWxiuYNcCmt cpCHGcsSdnDBirK1oYAfPtmy2zdgHsG39YaPq4GLdjjAgyvDb0vavVclE4BbUNvrjQMG BU9A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=mime-version:x-gm-message-state:from:to:cc:subject:date:message-id :in-reply-to:references; bh=lTn1tv+cdEEWthxb7x9eEJtLz7gLrC1ZYvDMpUK9UtU=; b=UhgifBVLYRCcJwV3YZj7cuA6ZAXz9gueFESsFkDdNsUvpJbleSwSCI8iEeopM4X9UC mK/XP7tGpnSrTqmKKNCmX5uoopSzogdi8mC1o/1fT/LLM2hqzndrU9zu8MtMQXoOCELK byjPEdrNlGgtSXcaNy4JcW7kbPiS6uXLuUkssrscvZE/QDe6rySeNQMq4rBDt6bC+icZ ERMOO1RW74mYFK/8djGLAVuVooQWP+82dDjgJtahAoD4JIAivMJCyt5yCy4OHcikyGyT y/m64B1Aj4dcPwysgBHXNYgg5hUl0D0Gl8JOidy15YUcXxnOELkf4D/XFqzEaDKK7rPA biCw== MIME-Version: 1.0 X-Gm-Message-State: AOAM532CETmJCD4x8AHYaeXIEah0DeseFwVtgI6aiCVoURkAeR/3J/pr NHfujGgfaPo5EkqwofgHJJ/tTrtRS9tBgJKqTcb1RRg+NRzd6vaxwHDcmnXonRb3pBNeWA0CEE3 wQZr23fBUplMnt7cW X-Google-Smtp-Source: ABdhPJzWE4IEG6p1dUMaTevQk3xKy4dKqZZPdBdTIF+8HzvDBttle5+iDu25KKfVjahs11LvahUa9w== X-Received: by 2002:a17:90a:c8b:: with SMTP id v11mr9330341pja.114.1629295967576; Wed, 18 Aug 2021 07:12:47 -0700 (PDT) Received: from localhost.localdomain ([49.207.214.181]) by smtp.gmail.com with ESMTPSA id e8sm8084343pgg.31.2021.08.18.07.12.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Aug 2021 07:12:47 -0700 (PDT) From: sidraya.bj@pathpartnertech.com To: gregkh@linuxfoundation.org, linux-staging@lists.linux.dev, linux-kernel@vger.kernel.org Cc: prashanth.ka@pathpartnertech.com, praneeth@ti.com, mchehab@kernel.org, linux-media@vger.kernel.org, praveen.ap@pathpartnertech.com, Sidraya Subject: [PATCH 16/30] v4l: vxd-dec: Add pool api modules Date: Wed, 18 Aug 2021 19:40:23 +0530 Message-Id: <20210818141037.19990-17-sidraya.bj@pathpartnertech.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com> References: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com> Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Sidraya This patch create and destroy the pool of the resources and it manages the allocation and free of the resources. 
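A minimal usage sketch of the call sequence this API is meant to support may help review; the pool_example() wrapper, its destructor and the error handling below are illustrative only and are not part of the patch (the usual kernel slab helpers are assumed):

	static void example_destructor(void *resparam, void *cb_handle)
	{
		/* Invoked by the pool when the resource is finally destroyed. */
		kfree(resparam);
	}

	static int pool_example(void)
	{
		void *pool, *res_hndl;
		unsigned int res_id;
		int *param;
		int ret;

		ret = pool_init();
		if (ret)
			return ret;

		ret = pool_api_create(&pool);
		if (ret != IMG_SUCCESS)
			goto out_deinit;

		param = kzalloc(sizeof(*param), GFP_KERNEL);
		if (!param) {
			ret = IMG_ERROR_OUT_OF_MEMORY;
			goto out_destroy;
		}

		/* Register on the free list (balloc = 0). */
		ret = pool_resreg(pool, example_destructor, param, sizeof(*param),
				  0, &res_id, &res_hndl, NULL);
		if (ret != IMG_SUCCESS) {
			kfree(param);
			goto out_destroy;
		}

		/* Move the resource to the allocated list, then back again. */
		pool_resalloc(pool, res_hndl);
		pool_resfree(res_hndl);

	out_destroy:
		pool_destroy(pool);	/* destroys any remaining resources */
	out_deinit:
		pool_deinit();
		return ret;
	}
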
Signed-off-by: Amit Makani Signed-off-by: Sidraya --- MAINTAINERS | 2 + drivers/staging/media/vxd/common/pool_api.c | 709 ++++++++++++++++++++ drivers/staging/media/vxd/common/pool_api.h | 113 ++++ 3 files changed, 824 insertions(+) create mode 100644 drivers/staging/media/vxd/common/pool_api.c create mode 100644 drivers/staging/media/vxd/common/pool_api.h diff --git a/MAINTAINERS b/MAINTAINERS index a00ac0852b2a..f7e55791f355 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -19556,6 +19556,8 @@ F: drivers/staging/media/vxd/common/lst.c F: drivers/staging/media/vxd/common/lst.h F: drivers/staging/media/vxd/common/pool.c F: drivers/staging/media/vxd/common/pool.h +F: drivers/staging/media/vxd/common/pool_api.c +F: drivers/staging/media/vxd/common/pool_api.h F: drivers/staging/media/vxd/common/ra.c F: drivers/staging/media/vxd/common/ra.h F: drivers/staging/media/vxd/common/talmmu_api.c diff --git a/drivers/staging/media/vxd/common/pool_api.c b/drivers/staging/media/vxd/common/pool_api.c new file mode 100644 index 000000000000..68d960a687da --- /dev/null +++ b/drivers/staging/media/vxd/common/pool_api.c @@ -0,0 +1,709 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Resource pool manager API. + * + * Copyright (c) Imagination Technologies Ltd. + * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Amit Makani + * + * Re-written for upstreaming + * Sidraya Jayagond + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "idgen_api.h" +#include "lst.h" +#include "pool_api.h" + +/* + * list can be modified by different instances. So please, + * make sure to acquire mutex lock before initializing the list. + */ +static struct mutex *shared_res_mutex_handle; + +/* + * Maximum resource ID. + */ +#define POOL_IDGEN_MAX_ID (0xFFFFFFFF) +/* + * Size of blocks used for IDs. + */ +#define POOL_IDGEN_BLOCK_SIZE (50) + +/* + * Indicates whether the pool API has been initialized: + * 0 if not done, 1 if done. + */ +static int poolinitdone; + +/* list of resource pools */ +static struct lst_t poollist = {0}; + +/** + * struct poollist - Structure containing resource list information. + * @link: to be able to be part of a singly linked list + * @pool_mutex: lock + * @freereslst: list of free resource structures + * @actvreslst: list of active resource structures + * @pfnfree: pool free callback function + * @idgenhandle: ID generator context handle + */ +struct poollist { + void **link; + struct mutex *pool_mutex; /* Mutex lock */ + struct lst_t freereslst; + struct lst_t actvreslst; + pfrecalbkpntr pfnfree; + void *idgenhandle; +}; + +/* + * This structure contains a pool resource. + */ +struct poolres { + void **link; /* to be able to be part of a singly linked list */ + /* Resource id */ + unsigned int resid; + /* Pointer to destructor function */ + pdestcallbkptr desfunc; + /* resource param */ + void *resparam; + /* size of resource param in bytes */ + unsigned int resparmsize; + /* pointer to resource pool list */ + struct poollist *respoollst; + /* 1 if this is a clone of the original resource */ + int isclone; + /* pointer to original resource */ + struct poolres *origres; + /* list of cloned resource structures. Only used on the original */ + struct lst_t clonereslst; + /* reference count. Only used on the original resource */ + unsigned int refcnt; + void *cb_handle; +}; + +/* + * This function initializes the pool list if not done earlier. 
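+ * It is safe to call pool_init() repeatedly; calls after the first are no-ops.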
+ */ +int pool_init(void) +{ + /* Check if list already initialized */ + if (!poolinitdone) { + /* + * list can be modified by different instances. So please, + * make sure to acquire mutex lock before initializing the list. + */ + + shared_res_mutex_handle = kzalloc(sizeof(*shared_res_mutex_handle), GFP_KERNEL); + if (!shared_res_mutex_handle) + return -ENOMEM; + + mutex_init(shared_res_mutex_handle); + + /* initialize the list of pools */ + lst_init(&poollist); + /* Get initialized flag to true */ + poolinitdone = 1; + } + + return 0; +} + +/* + * This function de-initializes the list. + */ +void pool_deinit(void) +{ + struct poollist *respoollist; + + /* Check if list initialized */ + if (poolinitdone) { + /* destroy any active pools */ + respoollist = (struct poollist *)lst_first(&poollist); + while (respoollist) { + pool_destroy(respoollist); + respoollist = (struct poollist *)lst_first(&poollist); + } + + /* Destroy mutex */ + mutex_destroy(shared_res_mutex_handle); + kfree(shared_res_mutex_handle); + shared_res_mutex_handle = NULL; + + /* set initialized flag to 0 */ + poolinitdone = 0; + } +} + +/* + * This function creates pool. + */ +int pool_api_create(void **poolhndle) +{ + struct poollist *respoollist; + unsigned int result = 0; + + /* Allocate a pool structure */ + respoollist = kzalloc(sizeof(*respoollist), GFP_KERNEL); + if (!respoollist) + return IMG_ERROR_OUT_OF_MEMORY; + + /* Initialize the pool info */ + lst_init(&respoollist->freereslst); + lst_init(&respoollist->actvreslst); + + /* Create mutex */ + respoollist->pool_mutex = kzalloc(sizeof(*respoollist->pool_mutex), GFP_KERNEL); + if (!respoollist->pool_mutex) { + result = ENOMEM; + goto error_create_context; + } + mutex_init(respoollist->pool_mutex); + + /* Create context for the Id generator */ + result = idgen_createcontext(POOL_IDGEN_MAX_ID, + POOL_IDGEN_BLOCK_SIZE, 0, + &respoollist->idgenhandle); + if (result != IMG_SUCCESS) + goto error_create_context; + + /* Disable interrupts */ + mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_POOL_RES); + + /* Add to list of pools */ + lst_add(&poollist, respoollist); + + /* Enable interrupts */ + mutex_unlock(shared_res_mutex_handle); + + /* Return handle to pool */ + *poolhndle = respoollist; + + return IMG_SUCCESS; + + /* Error handling. */ +error_create_context: + kfree(respoollist); + + return result; +} + +/* + * This function destroys the pool. + */ +int pool_destroy(void *poolhndle) +{ + struct poollist *respoollist = poolhndle; + struct poolres *respool; + struct poolres *clonerespool; + unsigned int result = 0; + + if (!poolinitdone || !respoollist) { + result = IMG_ERROR_INVALID_PARAMETERS; + goto error_nolock; + } + + /* Lock the pool */ + mutex_lock_nested(respoollist->pool_mutex, SUBCLASS_POOL); + + /* Disable interrupts */ + /* + * We need to check if we really need to check disable, + * interrupts because before deleting we need to make sure the + * pool lst is not being used other process. 
As of now getting ipl + * global mutex + */ + mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_POOL_RES); + + /* Remove the pool from the active list */ + lst_remove(&poollist, respoollist); + + /* Enable interrupts */ + mutex_unlock(shared_res_mutex_handle); + + /* Destroy any resources in the free list */ + respool = (struct poolres *)lst_removehead(&respoollist->freereslst); + while (respool) { + respool->desfunc(respool->resparam, respool->cb_handle); + kfree(respool); + respool = (struct poolres *) + lst_removehead(&respoollist->freereslst); + } + + /* Destroy any resources in the active list */ + respool = (struct poolres *)lst_removehead(&respoollist->actvreslst); + while (respool) { + clonerespool = (struct poolres *) + lst_removehead(&respool->clonereslst); + while (clonerespool) { + /* + * If we created a copy of the resources pvParam + * then free it. + * kfree(NULL) is safe and this check is probably not + * required + */ + kfree(clonerespool->resparam); + + kfree(clonerespool); + clonerespool = (struct poolres *) + lst_removehead(&respool->clonereslst); + } + + /* Call the resource destructor */ + respool->desfunc(respool->resparam, respool->cb_handle); + kfree(respool); + respool = (struct poolres *) + lst_removehead(&respoollist->actvreslst); + } + /* Destroy the context for the Id generator */ + if (respoollist->idgenhandle) + result = idgen_destroycontext(respoollist->idgenhandle); + + /* Unlock the pool */ + mutex_unlock(respoollist->pool_mutex); + + /* Destroy mutex */ + mutex_destroy(respoollist->pool_mutex); + kfree(respoollist->pool_mutex); + respoollist->pool_mutex = NULL; + + /* Free the pool structure */ + kfree(respoollist); + + return IMG_SUCCESS; + +error_nolock: + return result; +} + +int pool_setfreecalbck(void *poolhndle, pfrecalbkpntr pfnfree) +{ + struct poollist *respoollist = poolhndle; + struct poolres *respool; + unsigned int result = 0; + + if (!poolinitdone || !respoollist) { + result = IMG_ERROR_INVALID_PARAMETERS; + goto error_nolock; + } + + /* Lock the pool */ + mutex_lock_nested(respoollist->pool_mutex, SUBCLASS_POOL); + + respoollist->pfnfree = pfnfree; + + /* If free callback set */ + if (respoollist->pfnfree) { + /* Move resources from free to active list */ + respool = (struct poolres *) + lst_removehead(&respoollist->freereslst); + while (respool) { + /* Add to active list */ + lst_add(&respoollist->actvreslst, respool); + respool->refcnt++; + + /* Unlock the pool */ + mutex_unlock(respoollist->pool_mutex); + + /* Call the free callback */ + respoollist->pfnfree(respool->resid, respool->resparam); + + /* Lock the pool */ + mutex_lock_nested(respoollist->pool_mutex, SUBCLASS_POOL); + + /* Get next free resource */ + respool = (struct poolres *) + lst_removehead(&respoollist->freereslst); + } + } + + /* Unlock the pool */ + mutex_unlock(respoollist->pool_mutex); + + /* Return IMG_SUCCESS */ + return IMG_SUCCESS; + +error_nolock: + return result; +} + +int pool_resreg(void *poolhndle, pdestcallbkptr fndestructor, + void *resparam, unsigned int resparamsize, + int balloc, unsigned int *residptr, + void **poolreshndle, void *cb_handle) +{ + struct poollist *respoollist = poolhndle; + struct poolres *respool; + unsigned int result = 0; + + if (!poolinitdone || !respoollist) { + result = IMG_ERROR_INVALID_PARAMETERS; + goto error_nolock; + } + + /* Allocate a resource structure */ + respool = kzalloc(sizeof(*respool), GFP_KERNEL); + if (!respool) + return IMG_ERROR_OUT_OF_MEMORY; + + /* Setup the resource */ + respool->desfunc = fndestructor; + 
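+ /* cb_handle is opaque to the pool; it is passed back to the destructor. */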
respool->cb_handle = cb_handle; + respool->resparam = resparam; + respool->resparmsize = resparamsize; + respool->respoollst = respoollist; + lst_init(&respool->clonereslst); + + /* Lock the pool */ + mutex_lock_nested(respoollist->pool_mutex, SUBCLASS_POOL); + + /* Set resource id */ + result = idgen_allocid(respoollist->idgenhandle, + (void *)respool, &respool->resid); + if (result != IMG_SUCCESS) { + kfree(respool); + /* Unlock the pool */ + mutex_unlock(respoollist->pool_mutex); + return result; + } + + /* If allocated or free callback not set */ + if (balloc || respoollist->pfnfree) { + /* Add to active list */ + lst_add(&respoollist->actvreslst, respool); + respool->refcnt++; + } else { + /* Add to free list */ + lst_add(&respoollist->freereslst, respool); + } + + /* Return the resource id */ + if (residptr) + *residptr = respool->resid; + + /* Return the handle to the resource */ + if (poolreshndle) + *poolreshndle = respool; + + /* Unlock the pool */ + mutex_unlock(respoollist->pool_mutex); + + /* If free callback set */ + if (respoollist->pfnfree) { + /* Call the free callback */ + respoollist->pfnfree(respool->resid, respool->resparam); + } + + /* Return IMG_SUCCESS */ + return IMG_SUCCESS; + +error_nolock: + return result; +} + +int pool_resdestroy(void *poolreshndle, int bforce) +{ + struct poolres *respool = poolreshndle; + struct poollist *respoollist; + struct poolres *origrespool; + unsigned int result = 0; + + if (!poolinitdone || !respool) { + result = IMG_ERROR_INVALID_PARAMETERS; + goto error_nolock; + } + + respoollist = respool->respoollst; + + /* If this is a clone */ + if (respool->isclone) { + /* Get access to the original */ + origrespool = respool->origres; + if (!origrespool) { + result = IMG_ERROR_UNEXPECTED_STATE; + goto error_nolock; + } + + if (origrespool->isclone) { + result = IMG_ERROR_UNEXPECTED_STATE; + goto error_nolock; + } + + /* Remove from the clone list */ + lst_remove(&origrespool->clonereslst, respool); + + /* Free resource id */ + result = idgen_freeid(respoollist->idgenhandle, + respool->resid); + if (result != IMG_SUCCESS) + return result; + + /* + * If we created a copy of the resources pvParam then free it + * kfree(NULL) is safe and this check is probably not required. 
+ */ + kfree(respool->resparam); + + /* Free the clone resource structure */ + kfree(respool); + + /* Set resource to be "freed" to the original */ + respool = origrespool; + } + + /* If there are still outstanding references */ + if (!bforce && respool->refcnt != 0) { + /* + * We may need to mark the resource and destroy it when + * there are no outstanding references + */ + return IMG_SUCCESS; + } + + /* Has the resource outstanding references */ + if (respool->refcnt != 0) { + /* Remove the resource from the active list */ + lst_remove(&respoollist->actvreslst, respool); + } else { + /* Remove the resource from the free list */ + lst_remove(&respoollist->freereslst, respool); + } + + /* Free resource id */ + result = idgen_freeid(respoollist->idgenhandle, + respool->resid); + if (result != IMG_SUCCESS) + return result; + + /* Call the resource destructor */ + respool->desfunc(respool->resparam, respool->cb_handle); + kfree(respool); + + return IMG_SUCCESS; + +error_nolock: + return result; +} + +int pool_resalloc(void *poolhndle, void *poolreshndle) +{ + struct poollist *respoollist = poolhndle; + struct poolres *respool = poolreshndle; + unsigned int result = 0; + + if (!poolinitdone || !respoollist || !poolreshndle) { + result = IMG_ERROR_INVALID_PARAMETERS; + goto error_nolock; + } + + /* Lock the pool */ + mutex_lock_nested(respoollist->pool_mutex, SUBCLASS_POOL); + + /* Remove resource from free list */ + lst_remove(&respoollist->freereslst, respool); + + /* Add to active list */ + lst_add(&respoollist->actvreslst, respool); + respool->refcnt++; + + /* Unlock the pool */ + mutex_unlock(respoollist->pool_mutex); + + /* Return IMG_SUCCESS */ + return IMG_SUCCESS; + +error_nolock: + return result; +} + +int pool_resfree(void *poolreshndle) +{ + struct poolres *respool = poolreshndle; + struct poollist *respoollist; + struct poolres *origrespool; + unsigned int result = 0; + + if (!poolinitdone || !respool) { + result = IMG_ERROR_INVALID_PARAMETERS; + goto error_nolock; + } + + respoollist = respool->respoollst; + + /* Lock the pool */ + mutex_lock_nested(respoollist->pool_mutex, SUBCLASS_POOL); + + /* If this is a clone */ + if (respool->isclone) { + /* Get access to the original */ + origrespool = respool->origres; + if (!origrespool) { + mutex_unlock(respoollist->pool_mutex); + return IMG_ERROR_INVALID_PARAMETERS; + } + + /* Remove from the clone list */ + lst_remove(&origrespool->clonereslst, respool); + + /* Free resource id */ + result = idgen_freeid(respoollist->idgenhandle, + respool->resid); + if (result != IMG_SUCCESS) { + /* Unlock the pool */ + mutex_unlock(respoollist->pool_mutex); + return result; + } + + /* + * If we created a copy of the resources pvParam then free it + * kfree(NULL) is safe and this check is probably not required. 
+ */ + kfree(respool->resparam); + + /* Free the clone resource structure */ + kfree(respool); + + /* Set resource to be "freed" to the original */ + respool = origrespool; + } + + /* Update the reference count */ + respool->refcnt--; + + /* If there are still outstanding references */ + if (respool->refcnt != 0) { + /* Unlock the pool */ + mutex_unlock(respoollist->pool_mutex); + /* Return IMG_SUCCESS */ + return IMG_SUCCESS; + } + + /* Remove the resource from the active list */ + lst_remove(&respoollist->actvreslst, respool); + + /* If free callback set */ + if (respoollist->pfnfree) { + /* Add to active list */ + lst_add(&respoollist->actvreslst, respool); + respool->refcnt++; + } else { + /* Add to free list */ + lst_add(&respoollist->freereslst, respool); + } + + /* Unlock the pool */ + mutex_unlock(respoollist->pool_mutex); + + /* If free callback set */ + if (respoollist->pfnfree) { + /* Call the free callback */ + respoollist->pfnfree(respool->resid, respool->resparam); + } + + /* Return IMG_SUCCESS */ + return IMG_SUCCESS; + +error_nolock: + return result; +} + +int pool_resclone(void *poolreshndle, void **clonereshndle, void **resparam) +{ + struct poolres *respool = poolreshndle; + struct poollist *respoollist; + struct poolres *origrespool = respool; + struct poolres *clonerespool; + unsigned int result = 0; + + if (!poolinitdone || !respool) { + result = IMG_ERROR_INVALID_PARAMETERS; + goto error_nolock; + } + + /* Allocate a resource structure */ + clonerespool = kzalloc(sizeof(*clonerespool), GFP_KERNEL); + if (!clonerespool) + return IMG_ERROR_OUT_OF_MEMORY; + + respoollist = respool->respoollst; + if (!respoollist) + return IMG_ERROR_FATAL; + + /* Lock the pool */ + mutex_lock_nested(respoollist->pool_mutex, SUBCLASS_POOL); + + /* Set resource id */ + result = idgen_allocid(respoollist->idgenhandle, + (void *)clonerespool, &clonerespool->resid); + if (result != IMG_SUCCESS) + goto error_alloc_id; + + /* If this is a clone, set the original */ + if (respool->isclone) + origrespool = respool->origres; + + /* Setup the cloned resource */ + clonerespool->isclone = 1; + clonerespool->respoollst = respoollist; + clonerespool->origres = origrespool; + + /* Add to clone list */ + lst_add(&origrespool->clonereslst, clonerespool); + origrespool->refcnt++; + + /* If ppvParam is not IMG_NULL */ + if (resparam) { + /* If the size of the original vParam is 0 */ + if (origrespool->resparmsize == 0) { + *resparam = NULL; + } else { + /* Allocate memory for a copy of the original vParam */ + /* + * kmemdup allocates memory of length + * origrespool->resparmsize and to resparam and copy + * origrespool->resparam to resparam of the allocated + * length + */ + *resparam = kmemdup(origrespool->resparam, + origrespool->resparmsize, + GFP_KERNEL); + if (!(*resparam)) { + result = IMG_ERROR_OUT_OF_MEMORY; + goto error_copy_param; + } + } + } + + /* Unlock the pool */ + mutex_unlock(respoollist->pool_mutex); + + /* Return the cloned resource */ + *clonereshndle = clonerespool; + + /* Return IMG_SUCCESS */ + return IMG_SUCCESS; + + /* Error handling. 
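 + * Unwind in reverse order: remove the clone from the original's list,
 + * drop the extra reference, free the clone structure and unlock the pool.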
 */ +error_copy_param: + lst_remove(&origrespool->clonereslst, clonerespool); + origrespool->refcnt--; +error_alloc_id: + kfree(clonerespool); + + /* Unlock the pool */ + mutex_unlock(respoollist->pool_mutex); + +error_nolock: + return result; +} diff --git a/drivers/staging/media/vxd/common/pool_api.h b/drivers/staging/media/vxd/common/pool_api.h new file mode 100644 index 000000000000..1e7803abb715 --- /dev/null +++ b/drivers/staging/media/vxd/common/pool_api.h @@ -0,0 +1,113 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Resource pool manager API. + * + * Copyright (c) Imagination Technologies Ltd. + * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Amit Makani + * + * Re-written for upstreaming + * Sidraya Jayagond + */ +#ifndef __POOLAPI_H__ +#define __POOLAPI_H__ + +#include "img_errors.h" +#include "lst.h" + +/* + * This is the prototype for "free" callback functions. This function + * is called when resources are returned to the pool's list of free resources. + * NOTE: The "freed" resource is first allocated and then passed to the + * callback function. + */ +typedef void (*pfrecalbkpntr)(unsigned int ui32resid, void *resparam); + +/* + * This is the prototype for "destructor" callback functions. This function + * is called when a resource registered with the resource pool manager is to + * be destroyed. + */ +typedef void (*pdestcallbkptr)(void *resparam, void *cb_handle); + +/* + * pool_init - This function is used to initialize the resource pool manager + * component and should be called at start-up. + */ +int pool_init(void); + +/* + * This function is used to deinitialize the resource pool manager component + * and would normally be called at shutdown. + */ +void pool_deinit(void); + +/* + * This function is used to create a resource pool into which resources can be + * placed. + */ +int pool_api_create(void **poolhndle); + +/* + * This function is used to destroy a resource pool. + * NOTE: Destroying a resource pool destroys all of the resources within the + * pool by calling the destructor function defined when the resource was + * registered using pool_resreg(). + * + * NOTE: All of the pool's resources must be in the pool's free list - the + * allocated list must be empty. + */ +int pool_destroy(void *poolhndle); + +/* + * This function is used to set or remove a free callback function on a pool. + * The free callback function gets called for any resources already in the + * pool's free list or for any resources that subsequently get freed. + * NOTE: The resource passed to the callback function has been allocated before + * the callback is made. + */ +int pool_setfreecalbck(void *poolhndle, pfrecalbkpntr pfnfree); + +/* + * This function is used to register a resource within a resource pool. The + * resource is added to the pool's allocated or free list based on the value + * of balloc. + */ +int pool_resreg(void *poolhndle, pdestcallbkptr fndestructor, + void *resparam, unsigned int resparamsize, + int balloc, unsigned int *residptr, + void **poolreshndle, void *cb_handle); + +/* + * This function is used to destroy a resource. + */ +int pool_resdestroy(void *poolreshndle, int bforce); + +/* + * This function is used to get/allocate a resource from a pool. This moves + * the resource from the free to the allocated list. + */ +int pool_resalloc(void *poolhndle, void *poolreshndle); + +/* + * This function is used to free a resource and return it to the pool's list + * of free resources. 
+ * NOTE: The resources is only moved to the free list when all references to + * the resource have been freed. + */ +int pool_resfree(void *poolreshndle); + +/* + * This function is used to clone a resource - this creates an additional + * reference to the resource. + * NOTE: The resources is only moved to the free list when all references to + * the resource have been freed. + * NOTE: If this function is used to clone the resource's pvParam data then + * the clone of the data is freed when the clone of the resource is freed. + * The resource destructor is NOT used for this - simply an IMG_FREE. + */ +int pool_resclone(void *poolreshndle, void **clonereshndle, void **resparam); + +#endif /* __POOLAPI_H__ */ From patchwork Wed Aug 18 14:10:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sidraya Jayagond X-Patchwork-Id: 499237 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EDEF7C4320A for ; Wed, 18 Aug 2021 14:14:43 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D4C1F610A3 for ; Wed, 18 Aug 2021 14:14:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239438AbhHROPO (ORCPT ); Wed, 18 Aug 2021 10:15:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46540 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239619AbhHROON (ORCPT ); Wed, 18 Aug 2021 10:14:13 -0400 Received: from mail-pj1-x1030.google.com (mail-pj1-x1030.google.com [IPv6:2607:f8b0:4864:20::1030]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5A614C061224 for ; Wed, 18 Aug 2021 07:12:51 -0700 (PDT) Received: by mail-pj1-x1030.google.com with SMTP id u13-20020a17090abb0db0290177e1d9b3f7so9010381pjr.1 for ; Wed, 18 Aug 2021 07:12:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pathpartnertech.com; s=google; h=mime-version:from:to:cc:subject:date:message-id:in-reply-to :references; bh=hciLVVz7oYmPEz+0OVi+40tuZkd3zXhAHIFMBcV5fuQ=; b=CTgZvPMPrtzQSE3z+Tt/9htreF/68IeCOF2+whRVF6fO4LGY99ryZJadAfp2wMKbJ0 abccfykvqGzHuOmuXeMpqzxnnsG5OzRrKd+aV2NkGJz1KE+g/7bK9GA+Qo1npkCVn5e6 u19V4CrMy890EdRSRYqvPMpGy0Islw/sISe3z024rMftW93MmRO5Ngxbu58ouZCLkDQs 7eAuQIesoMRh441BqSpre4NuzY0fAeqUFMb0fvHm85iHMqHb+cOKL0AT4Jd9deSaDurs De8EgHdUKDjdjMgRaQr+6hckwhME3U0BfVOYJeLRaM3IVt1d0CYhl4Y4SFHjg5LCIx2I Kdaw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=mime-version:x-gm-message-state:from:to:cc:subject:date:message-id :in-reply-to:references; bh=hciLVVz7oYmPEz+0OVi+40tuZkd3zXhAHIFMBcV5fuQ=; b=ZqlkVRmG3u9T9mX1tlxR0cyk5cy9GKAvbLI4UIyylSXQ6DqgLebKXoKyTkz8PWhpao DACrNNvzAOp2zwizwy5pYAFDlVpbsMLkdwG3+JysJ2v9DavjQ6vqDJUhKKDEbDtjg/lZ htwWtTZ+YRg4CF3ZiRZsXA7QrVMnexysaNS3Jn3UvjbtJQudY4c/tAT4+zjAoLqCMDr1 P9EAzudz7xdm/3KeAuq7NpSCbHRGQWfoxkLIMIfZhctJclXHz7hOxMdjtDKFvZCJxHJn 0sYl5Mvom+j4iga45APWYrdn3NRGBou5fjlqTX5fAHQoO/QndriBcbvDAtd+bVwDAkZ2 G4WA== MIME-Version: 1.0 X-Gm-Message-State: 
AOAM531Yyiioa5CqLA+d6QIi3sbI/en5U0/yxKy7ZnQi3lkTLMCKAxJH 7Su/GqPQPY1MUwmqVLJDpQi4UsPrE6MS1xbGHHr8NF3eFc2Y3JvAD6PtnNBIKzfwjBNyWKsd7wb LEWnWJdrzqkmYk1oC X-Google-Smtp-Source: ABdhPJzIajwSqAJ/xLWgynJls+uwaNrEilp9qnynPFj+e/RNJO4L1NHC37WYsSlJAcaePOxqx7vDlQ== X-Received: by 2002:a17:903:2349:b0:12d:ada3:192e with SMTP id c9-20020a170903234900b0012dada3192emr7594388plh.3.1629295970735; Wed, 18 Aug 2021 07:12:50 -0700 (PDT) Received: from localhost.localdomain ([49.207.214.181]) by smtp.gmail.com with ESMTPSA id e8sm8084343pgg.31.2021.08.18.07.12.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Aug 2021 07:12:50 -0700 (PDT) From: sidraya.bj@pathpartnertech.com To: gregkh@linuxfoundation.org, linux-staging@lists.linux.dev, linux-kernel@vger.kernel.org Cc: prashanth.ka@pathpartnertech.com, praneeth@ti.com, mchehab@kernel.org, linux-media@vger.kernel.org, praveen.ap@pathpartnertech.com, Sidraya Subject: [PATCH 17/30] v4l: vxd-dec: This patch implements resource manage component Date: Wed, 18 Aug 2021 19:40:24 +0530 Message-Id: <20210818141037.19990-18-sidraya.bj@pathpartnertech.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com> References: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com> Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Sidraya This component is used to track decoder resources, and share them across other components. Signed-off-by: Sunita Nadampalli Signed-off-by: Sidraya --- MAINTAINERS | 2 + drivers/staging/media/vxd/common/rman_api.c | 620 ++++++++++++++++++++ drivers/staging/media/vxd/common/rman_api.h | 66 +++ 3 files changed, 688 insertions(+) create mode 100644 drivers/staging/media/vxd/common/rman_api.c create mode 100644 drivers/staging/media/vxd/common/rman_api.h diff --git a/MAINTAINERS b/MAINTAINERS index f7e55791f355..d126162984c6 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -19560,6 +19560,8 @@ F: drivers/staging/media/vxd/common/pool_api.c F: drivers/staging/media/vxd/common/pool_api.h F: drivers/staging/media/vxd/common/ra.c F: drivers/staging/media/vxd/common/ra.h +F: drivers/staging/media/vxd/common/rman_api.c +F: drivers/staging/media/vxd/common/rman_api.h F: drivers/staging/media/vxd/common/talmmu_api.c F: drivers/staging/media/vxd/common/talmmu_api.h F: drivers/staging/media/vxd/common/work_queue.c diff --git a/drivers/staging/media/vxd/common/rman_api.c b/drivers/staging/media/vxd/common/rman_api.c new file mode 100644 index 000000000000..c595dccd5ed2 --- /dev/null +++ b/drivers/staging/media/vxd/common/rman_api.c @@ -0,0 +1,620 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * This component is used to track decoder resources, + * and share them across other components. + * + * Copyright (c) Imagination Technologies Ltd. + * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Amit Makani + * + * Re-written for upstreamimg + * Sidraya Jayagond + */ +#include +#include +#include +#include +#include +#include +#include +#include + +#include "dq.h" +#include "idgen_api.h" +#include "rman_api.h" + +/* + * The following macros are used to build/decompose the composite resource Id + * made up from the bucket index + 1 and the allocated resource Id. 
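 + * With 8 bucket-index bits and 24 resource-id bits, a composite id such as
 + * 0x01000005 decodes to bucket index 0 (stored as 1) and resource id 5.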
+ */ +#define RMAN_CRESID_BUCKET_INDEX_BITS (8) +#define RMAN_CRESID_RES_ID_BITS (32 - RMAN_CRESID_BUCKET_INDEX_BITS) +#define RMAN_CRESID_MAX_RES_ID ((1 << RMAN_CRESID_RES_ID_BITS) - 1) +#define RMAN_CRESID_RES_ID_MASK (RMAN_CRESID_MAX_RES_ID) +#define RMAN_CRESID_BUCKET_SHIFT (RMAN_CRESID_RES_ID_BITS) +#define RMAN_CRESID_MAX_BUCKET_INDEX \ + ((1 << RMAN_CRESID_BUCKET_INDEX_BITS) - 1) + +#define RMAN_MAX_ID 4096 +#define RMAN_ID_BLOCKSIZE 256 + +/* global state variable */ +static unsigned char inited; +static struct rman_bucket *bucket_array[RMAN_CRESID_MAX_BUCKET_INDEX] = {0}; +static struct rman_bucket *global_res_bucket; +static struct rman_bucket *shared_res_bucket; +static struct mutex *shared_res_mutex_handle; +static struct mutex *global_mutex; + +/* + * This structure contains the bucket information. + */ +struct rman_bucket { + void **link; /* to be part of single linked list */ + struct dq_linkage_t res_list; + unsigned int bucket_idx; + void *id_gen; + unsigned int res_cnt; +}; + +/* + * This structure contains the resource details for a resource registered with + * the resource manager. + */ +struct rman_res { + struct dq_linkage_t link; /* to be part of double linked list */ + struct rman_bucket *bucket; + unsigned int type_id; + rman_fn_free fn_free; + void *param; + unsigned int res_id; + struct mutex *mutex_handle; /*resource mutex */ + unsigned char *res_name; + struct rman_res *shared_res; + unsigned int ref_cnt; +}; + +/* + * initialization + */ +int rman_initialise(void) +{ + unsigned int ret; + + if (!inited) { + shared_res_mutex_handle = kzalloc(sizeof(*shared_res_mutex_handle), GFP_KERNEL); + if (!shared_res_mutex_handle) + return IMG_ERROR_OUT_OF_MEMORY; + + mutex_init(shared_res_mutex_handle); + + /* Set initialised flag */ + inited = TRUE; + + /* Create the global resource bucket */ + ret = rman_create_bucket((void **)&global_res_bucket); + IMG_DBG_ASSERT(ret == IMG_SUCCESS); + if (ret != IMG_SUCCESS) + return ret; + + /* Create the shared resource bucket */ + ret = rman_create_bucket((void **)&shared_res_bucket); + IMG_DBG_ASSERT(ret == IMG_SUCCESS); + if (ret != IMG_SUCCESS) + return ret; + + global_mutex = kzalloc(sizeof(*global_mutex), GFP_KERNEL); + if (!global_mutex) + return IMG_ERROR_OUT_OF_MEMORY; + + mutex_init(global_mutex); + } + return IMG_SUCCESS; +} + +/* + * deinitialization + */ +void rman_deinitialise(void) +{ + unsigned int i; + + if (inited) { + /* Destroy the golbal resource bucket */ + rman_destroy_bucket(global_res_bucket); + + /* Destroy the shared resource bucket */ + rman_destroy_bucket(shared_res_bucket); + + /* Make sure we destroy the mutex after destroying the bucket */ + mutex_destroy(global_mutex); + kfree(global_mutex); + global_mutex = NULL; + + /* Destroy mutex */ + mutex_destroy(shared_res_mutex_handle); + kfree(shared_res_mutex_handle); + shared_res_mutex_handle = NULL; + + /* Check all buckets destroyed */ + for (i = 0; i < RMAN_CRESID_MAX_BUCKET_INDEX; i++) + IMG_DBG_ASSERT(!bucket_array[i]); + + /* Reset initialised flag */ + inited = FALSE; + } +} + +int rman_create_bucket(void **res_bucket_handle) +{ + struct rman_bucket *bucket; + unsigned int i; + int ret; + + IMG_DBG_ASSERT(inited); + + /* Allocate a bucket structure */ + bucket = kzalloc(sizeof(*bucket), GFP_KERNEL); + IMG_DBG_ASSERT(bucket); + if (!bucket) + return IMG_ERROR_OUT_OF_MEMORY; + + /* Initialise the resource list */ + dq_init(&bucket->res_list); + + /* Then start allocating resource ids at the first */ + ret = idgen_createcontext(RMAN_MAX_ID, 
RMAN_ID_BLOCKSIZE, FALSE, + &bucket->id_gen); + if (ret != IMG_SUCCESS) { + kfree(bucket); + IMG_DBG_ASSERT("failed to create IDGEN context" == NULL); + return ret; + } + + /* Locate free bucket index within the table */ + mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN); + for (i = 0; i < RMAN_CRESID_MAX_BUCKET_INDEX; i++) { + if (!bucket_array[i]) + break; + } + if (i >= RMAN_CRESID_MAX_BUCKET_INDEX) { + mutex_unlock(shared_res_mutex_handle); + idgen_destroycontext(bucket->id_gen); + kfree(bucket); + IMG_DBG_ASSERT("No free buckets left" == NULL); + return IMG_ERROR_GENERIC_FAILURE; + } + + /* Allocate bucket index */ + bucket->bucket_idx = i; + bucket_array[i] = bucket; + + mutex_unlock(shared_res_mutex_handle); + + /* Return the bucket handle */ + *res_bucket_handle = bucket; + + return IMG_SUCCESS; +} + +void rman_destroy_bucket(void *res_bucket_handle) +{ + struct rman_bucket *bucket = (struct rman_bucket *)res_bucket_handle; + + IMG_DBG_ASSERT(inited); + + IMG_DBG_ASSERT(bucket); + if (!bucket) + return; + + IMG_DBG_ASSERT(bucket->bucket_idx < RMAN_CRESID_MAX_BUCKET_INDEX); + IMG_DBG_ASSERT(bucket_array[bucket->bucket_idx]); + + /* Free all resources from the bucket */ + rman_free_resources(res_bucket_handle, RMAN_TYPE_P1); + rman_free_resources(res_bucket_handle, RMAN_TYPE_P2); + rman_free_resources(res_bucket_handle, RMAN_TYPE_P3); + rman_free_resources(res_bucket_handle, RMAN_ALL_TYPES); + + /* free sticky resources last: other resources are dependent on them */ + rman_free_resources(res_bucket_handle, RMAN_STICKY); + /* Use proper locking around global buckets. */ + mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN); + + /* Free from array of bucket pointers */ + bucket_array[bucket->bucket_idx] = NULL; + + mutex_unlock(shared_res_mutex_handle); + + /* Free the bucket itself */ + idgen_destroycontext(bucket->id_gen); + kfree(bucket); +} + +void *rman_get_global_bucket(void) +{ + IMG_DBG_ASSERT(inited); + IMG_DBG_ASSERT(global_res_bucket); + + /* Return the handle of the global resource bucket */ + return global_res_bucket; +} + +int rman_register_resource(void *res_bucket_handle, unsigned int type_id, + rman_fn_free fnfree, void *param, + void **res_handle, unsigned int *res_id) +{ + struct rman_bucket *bucket = (struct rman_bucket *)res_bucket_handle; + struct rman_res *res; + int ret; + + IMG_DBG_ASSERT(inited); + IMG_DBG_ASSERT(type_id != RMAN_ALL_TYPES); + + IMG_DBG_ASSERT(res_bucket_handle); + if (!res_bucket_handle) + return IMG_ERROR_GENERIC_FAILURE; + + /* Allocate a resource structure */ + res = kzalloc(sizeof(*res), GFP_KERNEL); + IMG_DBG_ASSERT(res); + if (!res) + return IMG_ERROR_OUT_OF_MEMORY; + + /* Fill in the resource structure */ + res->bucket = bucket; + res->type_id = type_id; + res->fn_free = fnfree; + res->param = param; + + /* Allocate resource Id */ + mutex_lock_nested(global_mutex, SUBCLASS_RMAN); + ret = idgen_allocid(bucket->id_gen, res, &res->res_id); + mutex_unlock(global_mutex); + if (ret != IMG_SUCCESS) { + IMG_DBG_ASSERT("failed to allocate RMAN id" == NULL); + return ret; + } + IMG_DBG_ASSERT(res->res_id <= RMAN_CRESID_MAX_RES_ID); + + /* add this resource to the bucket */ + mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN); + dq_addtail(&bucket->res_list, res); + + /* Update count of resources */ + bucket->res_cnt++; + mutex_unlock(shared_res_mutex_handle); + + /* If resource handle required */ + if (res_handle) + *res_handle = res; + + /* If resource id required */ + if (res_id) + *res_id = rman_get_resource_id(res); + + 
return IMG_SUCCESS; +} + +unsigned int rman_get_resource_id(void *res_handle) +{ + struct rman_res *res = res_handle; + unsigned int ext_res_id; + + IMG_DBG_ASSERT(res_handle); + if (!res_handle) + return 0; + + IMG_DBG_ASSERT(res->res_id <= RMAN_CRESID_MAX_RES_ID); + IMG_DBG_ASSERT(res->bucket->bucket_idx < RMAN_CRESID_MAX_BUCKET_INDEX); + if (res->bucket->bucket_idx >= RMAN_CRESID_MAX_BUCKET_INDEX) + return 0; + + ext_res_id = (((res->bucket->bucket_idx + 1) << + RMAN_CRESID_BUCKET_SHIFT) | res->res_id); + + return ext_res_id; +} + +static void *rman_getresource_int(void *res_bucket_handle, unsigned int res_id, + unsigned int type_id, void **res_handle) +{ + struct rman_bucket *bucket = (struct rman_bucket *)res_bucket_handle; + struct rman_res *res; + int ret; + + IMG_DBG_ASSERT(res_id <= RMAN_CRESID_MAX_RES_ID); + + /* Loop over the resources in this bucket till we find the required id */ + mutex_lock_nested(global_mutex, SUBCLASS_RMAN); + ret = idgen_gethandle(bucket->id_gen, res_id, (void **)&res); + mutex_unlock(global_mutex); + if (ret != IMG_SUCCESS) { + IMG_DBG_ASSERT("failed to get RMAN resource" == NULL); + return NULL; + } + + /* If the resource handle is required */ + if (res_handle) + *res_handle = res; /* Return it */ + + /* If the resource was not found */ + IMG_DBG_ASSERT(res); + IMG_DBG_ASSERT((void *)res != &bucket->res_list); + if (!res || ((void *)res == &bucket->res_list)) + return NULL; + + /* Cross check the type */ + IMG_DBG_ASSERT(type_id == res->type_id); + + /* Return the resource. */ + return res->param; +} + +int rman_get_resource(unsigned int res_id, unsigned int type_id, void **param, + void **res_handle) +{ + unsigned int bucket_idx = (res_id >> RMAN_CRESID_BUCKET_SHIFT) - 1; + unsigned int int_res_id = (res_id & RMAN_CRESID_RES_ID_MASK); + void *local_param; + + IMG_DBG_ASSERT(bucket_idx < RMAN_CRESID_MAX_BUCKET_INDEX); + if (bucket_idx >= RMAN_CRESID_MAX_BUCKET_INDEX) + return IMG_ERROR_INVALID_ID; /* Happens when bucket_idx == 0 */ + + IMG_DBG_ASSERT(bucket_array[bucket_idx]); + if (!bucket_array[bucket_idx]) + return IMG_ERROR_INVALID_ID; + + local_param = rman_getresource_int(bucket_array[bucket_idx], + int_res_id, type_id, + res_handle); + + /* If we didn't find the resource */ + if (!local_param) + return IMG_ERROR_INVALID_ID; + + /* Return the resource */ + if (param) + *param = local_param; + + return IMG_SUCCESS; +} + +int rman_get_named_resource(unsigned char *res_name, rman_fn_alloc fn_alloc, + void *alloc_info, void *res_bucket_handle, + unsigned int type_id, rman_fn_free fn_free, + void **param, void **res_handle, unsigned int *res_id) +{ + struct rman_bucket *bucket = res_bucket_handle; + struct rman_res *res; + unsigned int ret; + void *local_param; + unsigned char found = FALSE; + + IMG_DBG_ASSERT(inited); + + IMG_DBG_ASSERT(res_bucket_handle); + if (!res_bucket_handle) + return IMG_ERROR_GENERIC_FAILURE; + + /* Lock the shared resources */ + mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN); + res = (struct rman_res *)dq_first(&bucket->res_list); + while (res && ((void *)res != &bucket->res_list)) { + /* If resource already in the shared list */ + if (res->res_name && (strcmp(res_name, + res->res_name) == 0)) { + IMG_DBG_ASSERT(res->fn_free == fn_free); + found = TRUE; + break; + } + + /* Move to next resource */ + res = (struct rman_res *)dq_next(res); + } + mutex_unlock(shared_res_mutex_handle); + + /* If the named resource was not found */ + if (!found) { + /* Allocate the resource */ + ret = fn_alloc(alloc_info, &local_param); 
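+ /*
 + * Not found: this caller allocates the resource and registers it
 + * below, so that subsequent lookups by the same name share it.
 + */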
+ IMG_DBG_ASSERT(ret == IMG_SUCCESS); + if (ret != IMG_SUCCESS) + return ret; + + /* Register the named resource */ + ret = rman_register_resource(res_bucket_handle, type_id, + fn_free, local_param, + (void **)&res, NULL); + IMG_DBG_ASSERT(ret == IMG_SUCCESS); + if (ret != IMG_SUCCESS) + return ret; + + mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN); + res->res_name = res_name; + mutex_unlock(shared_res_mutex_handle); + } + + /* Return the pvParam value */ + *param = res->param; + + /* If resource handle required */ + if (res_handle) + *res_handle = res; + + /* If resource id required */ + if (res_id) + *res_id = rman_get_resource_id(res); + + /* Exit */ + return IMG_SUCCESS; +} + +static void rman_free_resource_int(struct rman_res *res) +{ + struct rman_bucket *bucket = res->bucket; + + /* Remove the resource from the active list */ + mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN); + + /* Remove from list */ + dq_remove(res); + + /* Update count of resources */ + bucket->res_cnt--; + + mutex_unlock(shared_res_mutex_handle); + + /* If mutex associated with the resource */ + if (res->mutex_handle) { + /* Destroy mutex */ + mutex_destroy(res->mutex_handle); + kfree(res->mutex_handle); + res->mutex_handle = NULL; + } + + /* If this resource is not already shared */ + if (res->shared_res) { + /* Lock the shared resources */ + mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN); + + /* Update the reference count */ + IMG_DBG_ASSERT(res->shared_res->ref_cnt != 0); + res->shared_res->ref_cnt--; + + /* If this is the last free for the shared resource */ + if (res->shared_res->ref_cnt == 0) + /* Free the shared resource */ + rman_free_resource_int(res->shared_res); + + /* UnLock the shared resources */ + mutex_unlock(shared_res_mutex_handle); + } else { + /* If there is a free callback function. */ + if (res->fn_free) + /* Call resource free callback */ + res->fn_free(res->param); + } + + /* If the resource has a name then free it */ + kfree(res->res_name); + + /* Free the resource ID. 
*/ + mutex_lock_nested(global_mutex, SUBCLASS_RMAN); + idgen_freeid(bucket->id_gen, res->res_id); + mutex_unlock(global_mutex); + + /* Free a resource structure */ + kfree(res); +} + +void rman_free_resource(void *res_handle) +{ + struct rman_res *res; + + IMG_DBG_ASSERT(inited); + + IMG_DBG_ASSERT(res_handle); + if (!res_handle) + return; + + /* Get access to the resource structure */ + res = (struct rman_res *)res_handle; + + /* Free resource */ + rman_free_resource_int(res); +} + +void rman_lock_resource(void *res_handle) +{ + struct rman_res *res; + + IMG_DBG_ASSERT(inited); + + IMG_DBG_ASSERT(res_handle); + if (!res_handle) + return; + + /* Get access to the resource structure */ + res = (struct rman_res *)res_handle; + + /* If this is a shared resource */ + if (res->shared_res) + /* We need to lock/unlock the underlying shared resource */ + res = res->shared_res; + + /* If no mutex associated with this resource */ + if (!res->mutex_handle) { + /* Create one */ + + res->mutex_handle = kzalloc(sizeof(*res->mutex_handle), GFP_KERNEL); + if (!res->mutex_handle) + return; + + mutex_init(res->mutex_handle); + } + + /* lock it */ + mutex_lock(res->mutex_handle); +} + +void rman_unlock_resource(void *res_handle) +{ + struct rman_res *res; + + IMG_DBG_ASSERT(inited); + + IMG_DBG_ASSERT(res_handle); + if (!res_handle) + return; + + /* Get access to the resource structure */ + res = (struct rman_res *)res_handle; + + /* If this is a shared resource */ + if (res->shared_res) + /* We need to lock/unlock the underlying shared resource */ + res = res->shared_res; + + IMG_DBG_ASSERT(res->mutex_handle); + + /* Unlock mutex */ + mutex_unlock(res->mutex_handle); +} + +void rman_free_resources(void *res_bucket_handle, unsigned int type_id) +{ + struct rman_bucket *bucket = (struct rman_bucket *)res_bucket_handle; + struct rman_res *res; + + IMG_DBG_ASSERT(inited); + + IMG_DBG_ASSERT(res_bucket_handle); + if (!res_bucket_handle) + return; + + /* Scan the active list looking for the resources to be freed */ + mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN); + res = (struct rman_res *)dq_first(&bucket->res_list); + while ((res) && ((void *)res != &bucket->res_list)) { + /* If this is resource is to be removed */ + if ((type_id == RMAN_ALL_TYPES && + res->type_id != RMAN_STICKY) || + res->type_id == type_id) { + /* Yes, remove it, Free current resource */ + mutex_unlock(shared_res_mutex_handle); + rman_free_resource_int(res); + mutex_lock_nested(shared_res_mutex_handle, SUBCLASS_RMAN); + + /* Restart from the beginning of the list */ + res = (struct rman_res *)dq_first(&bucket->res_list); + } else { + /* Move to next resource */ + res = (struct rman_res *)lst_next(res); + } + } + mutex_unlock(shared_res_mutex_handle); +} diff --git a/drivers/staging/media/vxd/common/rman_api.h b/drivers/staging/media/vxd/common/rman_api.h new file mode 100644 index 000000000000..baadc7f22eff --- /dev/null +++ b/drivers/staging/media/vxd/common/rman_api.h @@ -0,0 +1,66 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * This component is used to track decoder resources, + * and share them across other components. + * + * Copyright (c) Imagination Technologies Ltd. 
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Amit Makani + * + * Re-written for upstreamimg + * Sidraya Jayagond + */ + +#ifndef __RMAN_API_H__ +#define __RMAN_API_H__ + +#include + +#include "img_errors.h" +#include "lst.h" + +#define RMAN_ALL_TYPES (0xFFFFFFFF) +#define RMAN_TYPE_P1 (0xFFFFFFFE) +#define RMAN_TYPE_P2 (0xFFFFFFFE) +#define RMAN_TYPE_P3 (0xFFFFFFFE) +#define RMAN_STICKY (0xFFFFFFFD) + +int rman_initialise(void); + +void rman_deinitialise(void); + +int rman_create_bucket(void **res_handle); + +void rman_destroy_bucket(void *res_handle); + +void *rman_get_global_bucket(void); + +typedef void (*rman_fn_free) (void *param); + +int rman_register_resource(void *res_handle, unsigned int type_id, rman_fn_free fn_free, + void *param, void **res_handle_ptr, + unsigned int *res_id); + +typedef int (*rman_fn_alloc) (void *alloc_info, void **param); + +int rman_get_named_resource(unsigned char *res_name, rman_fn_alloc fn_alloc, + void *alloc_info, void *res_bucket_handle, + unsigned int type_id, rman_fn_free fn_free, + void **param, void **res_handle, unsigned int *res_id); + +unsigned int rman_get_resource_id(void *res_handle); + +int rman_get_resource(unsigned int res_id, unsigned int type_id, void **param, + void **res_handle); + +void rman_free_resource(void *res_handle); + +void rman_lock_resource(void *res_handle); + +void rman_unlock_resource(void *res_hanle); + +void rman_free_resources(void *res_bucket_handle, unsigned int type_id); + +#endif From patchwork Wed Aug 18 14:10:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sidraya Jayagond X-Patchwork-Id: 499230 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CD47CC4338F for ; Wed, 18 Aug 2021 14:16:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B40FF60EBD for ; Wed, 18 Aug 2021 14:16:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238743AbhHROQc (ORCPT ); Wed, 18 Aug 2021 10:16:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46588 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239054AbhHROON (ORCPT ); Wed, 18 Aug 2021 10:14:13 -0400 Received: from mail-pl1-x631.google.com (mail-pl1-x631.google.com [IPv6:2607:f8b0:4864:20::631]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D6854C0611C1 for ; Wed, 18 Aug 2021 07:12:54 -0700 (PDT) Received: by mail-pl1-x631.google.com with SMTP id o10so1945933plg.0 for ; Wed, 18 Aug 2021 07:12:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=pathpartnertech.com; s=google; h=mime-version:from:to:cc:subject:date:message-id:in-reply-to :references; bh=N4maNljv8YqN7CqHYNSQ8TkywzC8SpDjoi41ZOBU8Gs=; b=mJLVEmTwWjZKxKwxC/1+1/cz79oV6f1H/AzNVM7nzofn8DEaXBLtB3pT4yourK4EMD HW6um9LudCqPd4sT2yz9GmhYUdnoehqclSNmrTwNgyh6G9kMW8Z8qU4Wah3NsFocxP1s 
3CZYA2KLl3NViWqIzhug6IF8R6Lynfv9BWjDVnFXLPcwnw+SAU7qMoJ5ul6k0rR+s4rr 4Xra+aivYBO4MHA3wt+uW7UIY5auM+w8sLLqsWdGMEPukF71g+AOk4Gl+1N/+QH+78Rv 7yo1tjh3RswQMKVBO4D7/KBDNLpnC81KpH/HHkV0W2R0+8FRKln8eezMXvNYk1+Jkva0 IKMw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=mime-version:x-gm-message-state:from:to:cc:subject:date:message-id :in-reply-to:references; bh=N4maNljv8YqN7CqHYNSQ8TkywzC8SpDjoi41ZOBU8Gs=; b=TrkyKzOGqZVIXRuSB1sIDwwUwyo4+gyT9Npv16N40clKtS5LWGid8uoFo1smeZ42pY 7qexHv4sjkR9E+eG7+gsBbtYdjW4w9F6fZdL7nWp07W9PnW4sHyhC11A3U/776P04iFw hdaH/048dbg/fUAuHYFIyqGsPDN4sbDWJbCgnaQhT3blDO1/KC4oREZK8zKlTIpn7ZL5 wEaikJR/D/kB6cj6x+LD6Gx8cnJiJvVrA2i4O/z2yCmDRxboSdaDWQtdPLpfWyvAMyWb XWPy0WiJ3jh2F6laIqnbXyl5SumttQZ+3jADGO6oRCzsNN0dGFWTUHDm3hpqvl/J3DAF 3G0A== MIME-Version: 1.0 X-Gm-Message-State: AOAM531KrdjK7Uqr639rXq8hActRQLL2CX6qmDWBbDaTepkmp1b0j5Jm OtAPj2AAQzrco2q6exVe76C2UzBYv6sHo7oV4a4kMmkIEpeIBjOLhKmolBb66r68JlLRItiwfbI 7/N0Jso2Cve8gS3Oy X-Google-Smtp-Source: ABdhPJy5ldwSV6EuZsLiWVpUS7n/5Rh0xCwiwYj3/gD/HWKg/Kx7Gh4IblwV3ytp024SN0hLTyhM+w== X-Received: by 2002:a17:90a:7881:: with SMTP id x1mr9746437pjk.102.1629295974278; Wed, 18 Aug 2021 07:12:54 -0700 (PDT) Received: from localhost.localdomain ([49.207.214.181]) by smtp.gmail.com with ESMTPSA id e8sm8084343pgg.31.2021.08.18.07.12.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Aug 2021 07:12:53 -0700 (PDT) From: sidraya.bj@pathpartnertech.com To: gregkh@linuxfoundation.org, linux-staging@lists.linux.dev, linux-kernel@vger.kernel.org Cc: prashanth.ka@pathpartnertech.com, praneeth@ti.com, mchehab@kernel.org, linux-media@vger.kernel.org, praveen.ap@pathpartnertech.com, Sidraya Subject: [PATCH 18/30] v4l: vxd-dec: This patch implements pixel processing library Date: Wed, 18 Aug 2021 19:40:25 +0530 Message-Id: <20210818141037.19990-19-sidraya.bj@pathpartnertech.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com> References: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com> Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Sidraya This library is used to handle different pixel format layouts. 
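To make the format table below easier to review, here is its first entry with each positional initializer annotated; the per-field meanings are inferred from the values and the two-plane "PL12" layout, not taken from pixel_api.h:

	{
		IMG_PIXFMT_420PL12YUV8,	/* pixel format identifier */
		PIXEL_UV_ORDER,		/* chroma plane order: U before V */
		PIXEL_MULTICHROME,	/* both chroma channels present */
		PIXEL_BIT8_MP,		/* 8-bit memory packing */
		PIXEL_FORMAT_420,	/* 4:2:0 chroma subsampling */
		8,			/* luma bit depth */
		8,			/* chroma bit depth */
		2			/* number of planes: Y plus interleaved UV */
	},
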
Signed-off-by: Sunita Nadampalli Signed-off-by: Sidraya --- MAINTAINERS | 2 + drivers/staging/media/vxd/decoder/pixel_api.c | 895 ++++++++++++++++++ drivers/staging/media/vxd/decoder/pixel_api.h | 152 +++ 3 files changed, 1049 insertions(+) create mode 100644 drivers/staging/media/vxd/decoder/pixel_api.c create mode 100644 drivers/staging/media/vxd/decoder/pixel_api.h diff --git a/MAINTAINERS b/MAINTAINERS index d126162984c6..bf47d48a1ec2 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -19600,6 +19600,8 @@ F: drivers/staging/media/vxd/decoder/jpegfw_data.h F: drivers/staging/media/vxd/decoder/jpegfw_data_shared.h F: drivers/staging/media/vxd/decoder/mem_io.h F: drivers/staging/media/vxd/decoder/mmu_defs.h +F: drivers/staging/media/vxd/decoder/pixel_api.c +F: drivers/staging/media/vxd/decoder/pixel_api.h F: drivers/staging/media/vxd/decoder/pvdec_entropy_regs.h F: drivers/staging/media/vxd/decoder/pvdec_int.h F: drivers/staging/media/vxd/decoder/pvdec_vec_be_regs.h diff --git a/drivers/staging/media/vxd/decoder/pixel_api.c b/drivers/staging/media/vxd/decoder/pixel_api.c new file mode 100644 index 000000000000..a0620662a68e --- /dev/null +++ b/drivers/staging/media/vxd/decoder/pixel_api.c @@ -0,0 +1,895 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Pixel processing function implementations + * + * Copyright (c) Imagination Technologies Ltd. + * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Sunita Nadampalli + * + * Re-written for upstream + * Sidraya Jayagond + */ + +#include +#include +#include +#include + +#include "img_errors.h" +#include "img_pixfmts.h" +#include "pixel_api.h" +#include "vdec_defs.h" + +#define NUM_OF_FORMATS 17 +#define PIXNAME(x) /* Pixel name support not enabled */ +#define FACT_SPEC_FORMAT_NUM_PLANES 4 +#define FACT_SPEC_FORMAT_PLANE_UNUSED 0xf +#define FACT_SPEC_FORMAT_PLANE_CODE_BITS 4 +#define FACT_SPEC_FORMAT_PLANE_CODE_MASK 3 +#define FACT_SPEC_FORMAT_MIN_FACT_VAL 1 + +/* + * @brief Pointer to the default format in the asPixelFormats array + * default format is an invalid format + * @note pointer set by initSearch() + * This pointer is also used to know if the arrays were sorted + */ +static struct pixel_pixinfo *def_fmt; + +/* + * @brief Actual array storing the pixel formats information. 
+ */ +static struct pixel_pixinfo pix_fmts[NUM_OF_FORMATS] = { + { + IMG_PIXFMT_420PL12YUV8, + PIXEL_UV_ORDER, + PIXEL_MULTICHROME, + PIXEL_BIT8_MP, + PIXEL_FORMAT_420, + 8, + 8, + 2 + }, + + { + IMG_PIXFMT_420PL12YVU8, + PIXEL_VU_ORDER, + PIXEL_MULTICHROME, + PIXEL_BIT8_MP, + PIXEL_FORMAT_420, + 8, + 8, + 2 + }, + + { + IMG_PIXFMT_420PL12YUV10, + PIXEL_UV_ORDER, + PIXEL_MULTICHROME, + PIXEL_BIT10_MP, + PIXEL_FORMAT_420, + 10, + 10, + 2 + }, + + { + IMG_PIXFMT_420PL12YVU10, + PIXEL_VU_ORDER, + PIXEL_MULTICHROME, + PIXEL_BIT10_MP, + PIXEL_FORMAT_420, + 10, + 10, + 2 + }, + + { + IMG_PIXFMT_420PL12YUV10_MSB, + PIXEL_UV_ORDER, + PIXEL_MULTICHROME, + PIXEL_BIT10_MSB_MP, + PIXEL_FORMAT_420, + 10, + 10, + 2 + }, + + { + IMG_PIXFMT_420PL12YVU10_MSB, + PIXEL_VU_ORDER, + PIXEL_MULTICHROME, + PIXEL_BIT10_MSB_MP, + PIXEL_FORMAT_420, + 10, + 10, + 2 + }, + + { + IMG_PIXFMT_420PL12YUV10_LSB, + PIXEL_UV_ORDER, + PIXEL_MULTICHROME, + PIXEL_BIT10_LSB_MP, + PIXEL_FORMAT_420, + 10, + 10, + 2 + }, + + { + IMG_PIXFMT_420PL12YVU10_LSB, + PIXEL_VU_ORDER, + PIXEL_MULTICHROME, + PIXEL_BIT10_LSB_MP, + PIXEL_FORMAT_420, + 10, + 10, + 2 + }, + + { + IMG_PIXFMT_422PL12YUV8, + PIXEL_UV_ORDER, + PIXEL_MULTICHROME, + PIXEL_BIT8_MP, + PIXEL_FORMAT_422, + 8, + 8, + 2 + }, + + { + IMG_PIXFMT_422PL12YVU8, + PIXEL_VU_ORDER, + PIXEL_MULTICHROME, + PIXEL_BIT8_MP, + PIXEL_FORMAT_422, + 8, + 8, + 2 + }, + + { + IMG_PIXFMT_422PL12YUV10, + PIXEL_UV_ORDER, + PIXEL_MULTICHROME, + PIXEL_BIT10_MP, + PIXEL_FORMAT_422, + 10, + 10, + 2 + }, + + { + IMG_PIXFMT_422PL12YVU10, + PIXEL_VU_ORDER, + PIXEL_MULTICHROME, + PIXEL_BIT10_MP, + PIXEL_FORMAT_422, + 10, + 10, + 2 + }, + + { + IMG_PIXFMT_422PL12YUV10_MSB, + PIXEL_UV_ORDER, + PIXEL_MULTICHROME, + PIXEL_BIT10_MSB_MP, + PIXEL_FORMAT_422, + 10, + 10, + 2 + }, + + { + IMG_PIXFMT_422PL12YVU10_MSB, + PIXEL_VU_ORDER, + PIXEL_MULTICHROME, + PIXEL_BIT10_MSB_MP, + PIXEL_FORMAT_422, + 10, + 10, + 2 + }, + + { + IMG_PIXFMT_422PL12YUV10_LSB, + PIXEL_UV_ORDER, + PIXEL_MULTICHROME, + PIXEL_BIT10_LSB_MP, + PIXEL_FORMAT_422, + 10, + 10, + 2 + }, + + { + IMG_PIXFMT_422PL12YVU10_LSB, + PIXEL_VU_ORDER, + PIXEL_MULTICHROME, + PIXEL_BIT10_LSB_MP, + PIXEL_FORMAT_422, + 10, + 10, + 2 + }, + + { + IMG_PIXFMT_UNDEFINED, + PIXEL_INVALID_CI, + 0, + (enum pixel_mem_packing)0, + PIXEL_FORMAT_INVALID, + 0, + 0, + 0 + } +}; + +static struct pixel_pixinfo_table pixinfo_table[] = { + { + IMG_PIXFMT_420PL12YUV8_A8, + { + PIXNAME(IMG_PIXFMT_420PL12YUV8_A8) + 16, + 16, + 16, + 0, + 16, + TRUE, + TRUE, + 4, + TRUE + } + }, + + { + IMG_PIXFMT_422PL12YUV8_A8, + { + PIXNAME(IMG_PIXFMT_422PL12YUV8_A8) + 16, + 16, + 16, + 0, + 16, + TRUE, + FALSE, + 4, + TRUE + } + }, + + { + IMG_PIXFMT_420PL12YUV8, + { + PIXNAME(IMG_PIXFMT_420PL12YUV8) + 16, + 16, + 16, + 0, + 0, + TRUE, + TRUE, + 4, + FALSE + } + }, + + { + IMG_PIXFMT_420PL12YVU8, + { + PIXNAME(IMG_PIXFMT_420PL12YVU8) + 16, + 16, + 16, + 0, + 0, + TRUE, + TRUE, + 4, + FALSE + } + }, + + { + IMG_PIXFMT_420PL12YUV10, + { + PIXNAME(IMG_PIXFMT_420PL12YUV10) + 12, + 16, + 16, + 0, + 0, + TRUE, + TRUE, + 4, + FALSE + } + }, + + { + IMG_PIXFMT_420PL12YVU10, + { + PIXNAME(IMG_PIXFMT_420PL12YVU10) + 12, + 16, + 16, + 0, + 0, + TRUE, + TRUE, + 4, + FALSE + } + }, + + { + IMG_PIXFMT_420PL12YUV10_MSB, + { + PIXNAME(IMG_PIXFMT_420PL12YUV10_MSB) + 8, + 16, + 16, + 0, + 0, + TRUE, + TRUE, + 4, + FALSE + } + }, + + { + IMG_PIXFMT_420PL12YVU10_MSB, + { + PIXNAME(IMG_PIXFMT_420PL12YVU10_MSB) + 8, + 16, + 16, + 0, + 0, + TRUE, + TRUE, + 4, + FALSE + } + }, + + { + IMG_PIXFMT_422PL12YUV8, + { + 
PIXNAME(IMG_PIXFMT_422PL12YUV8) + 16, + 16, + 16, + 0, + 0, + TRUE, + FALSE, + 4, + FALSE + } + }, + + { + IMG_PIXFMT_422PL12YVU8, + { + PIXNAME(IMG_PIXFMT_422PL12YVU8) + 16, + 16, + 16, + 0, + 0, + TRUE, + FALSE, + 4, + FALSE + } + }, + + { + IMG_PIXFMT_422PL12YUV10, + { + PIXNAME(IMG_PIXFMT_422PL12YUV10) + 12, + 16, + 16, + 0, + 0, + TRUE, + FALSE, + 4, + FALSE + } + }, + + { + IMG_PIXFMT_422PL12YVU10, + { + PIXNAME(IMG_PIXFMT_422PL12YVU10) + 12, + 16, + 16, + 0, + 0, + TRUE, + FALSE, + 4, + FALSE + } + }, + + { + IMG_PIXFMT_422PL12YUV10_MSB, + { + PIXNAME(IMG_PIXFMT_422PL12YUV10_MSB) + 8, + 16, + 16, + 0, + 0, + TRUE, + FALSE, + 4, + FALSE + } + }, + + { + IMG_PIXFMT_422PL12YVU10_MSB, + { + PIXNAME(IMG_PIXFMT_422PL12YVU10_MSB) + 8, + 16, + 16, + 0, + 0, + TRUE, + FALSE, + 4, + FALSE + } + }, +}; + +static struct pixel_pixinfo_table* +pixel_get_pixelinfo_from_pixfmt(enum img_pixfmt pix_fmt) +{ + unsigned int i; + unsigned char found = FALSE; + struct pixel_pixinfo_table *this_pixinfo_table_entry = NULL; + + for (i = 0; + i < (sizeof(pixinfo_table) / sizeof(struct pixel_pixinfo_table)); + i++) { + if (pix_fmt == pixinfo_table[i].pix_color_fmt) { + /* + * There must only be one entry per pixel colour format + * in the table + */ + VDEC_ASSERT(!found); + found = TRUE; + this_pixinfo_table_entry = &pixinfo_table[i]; + + /* + * We deliberately do NOT break here - scan rest of + * table to ensure there are not duplicate entries + */ + } + } + return this_pixinfo_table_entry; +} + +/* + * @brief Array containing string lookup of pixel format IDC. + * @warning this must be kept in step with PIXEL_FormatIdc. + */ +unsigned char pix_fmt_idc_names[6][16] = { + "Monochrome", + "4:1:1", + "4:2:0", + "4:2:2", + "4:4:4", + "Invalid", +}; + +static int pixel_compare_pixfmts(const void *a, const void *b) +{ + return ((struct pixel_pixinfo *)a)->pixfmt - + ((struct pixel_pixinfo *)b)->pixfmt; +} + +static struct pixel_info* +pixel_get_bufinfo_from_pixfmt(enum img_pixfmt pix_fmt) +{ + struct pixel_pixinfo_table *pixinfo_table_entry = NULL; + struct pixel_info *pix_info = NULL; + + pixinfo_table_entry = pixel_get_pixelinfo_from_pixfmt(pix_fmt); + VDEC_ASSERT(pixinfo_table_entry); + if (pixinfo_table_entry) + pix_info = &pixinfo_table_entry->info; + + return pix_info; +} + +/* + * @brief Search a pixel format based on its attributes rather than its format + * enum. 
+ * @warning use pixel_compare_pixfmts to search by enum
+ */
+static int pixel_compare_pixinfo(const void *a, const void *b)
+{
+	int result = 0;
+	const struct pixel_pixinfo *fmt_a = (struct pixel_pixinfo *)a;
+	const struct pixel_pixinfo *fmt_b = (struct pixel_pixinfo *)b;
+
+	result = fmt_a->chroma_fmt_idc - fmt_b->chroma_fmt_idc;
+	if (result != 0)
+		return result;
+
+	result = fmt_a->mem_pkg - fmt_b->mem_pkg;
+	if (result != 0)
+		return result;
+
+	result = fmt_a->chroma_interleave - fmt_b->chroma_interleave;
+	if (result != 0)
+		return result;
+
+	result = fmt_a->bitdepth_y - fmt_b->bitdepth_y;
+	if (result != 0)
+		return result;
+
+	result = fmt_a->bitdepth_c - fmt_b->bitdepth_c;
+	if (result != 0)
+		return result;
+
+	result = fmt_a->num_planes - fmt_b->num_planes;
+	if (result != 0)
+		return result;
+
+	return result;
+}
+
+static void pixel_init_search(void)
+{
+	static unsigned int search_inited;
+
+	search_inited++;
+	if (search_inited == 1) {
+		if (!def_fmt) {
+			int i = NUM_OF_FORMATS - 1;
+
+			while (i >= 0) {
+				if (pix_fmts[i].pixfmt ==
+				    IMG_PIXFMT_UNDEFINED) {
+					def_fmt = &pix_fmts[i];
+					break;
+				}
+				i--;
+			}
+			VDEC_ASSERT(def_fmt);
+		}
+	} else {
+		search_inited--;
+	}
+}
+
+static struct pixel_pixinfo *pixel_search_fmt(const struct pixel_pixinfo *key,
+					      unsigned char enum_only)
+{
+	struct pixel_pixinfo *fmt_found = NULL;
+	int (*compar)(const void *pixfmt1, const void *pixfmt2);
+
+	if (enum_only)
+		compar = &pixel_compare_pixfmts;
+	else
+		compar = &pixel_compare_pixinfo;
+
+	{
+		unsigned int i;
+
+		for (i = 0; i < NUM_OF_FORMATS; i++) {
+			if (compar(key, &pix_fmts[i]) == 0) {
+				fmt_found = &pix_fmts[i];
+				break;
+			}
+		}
+	}
+	return fmt_found;
+}
+
+/*
+ * @brief Set a pixel format info structure to the default.
+ * @warning This MODIFIES the structure pointed to, so you shouldn't
+ * call it on a pointer you got from the library!
+ */
+static void pixel_pixinfo_defaults(struct pixel_pixinfo *to_def)
+{
+	if (!def_fmt)
+		pixel_init_search();
+
+	memcpy(to_def, def_fmt, sizeof(struct pixel_pixinfo));
+}
+
+enum img_pixfmt pixel_get_pixfmt(enum pixel_fmt_idc chroma_fmt_idc,
+				 enum pixel_chroma_interleaved
+				 chroma_interleaved,
+				 enum pixel_mem_packing mem_pkg,
+				 unsigned int bitdepth_y, unsigned int bitdepth_c,
+				 unsigned int num_planes)
+{
+	unsigned int internal_num_planes = (num_planes == 0 || num_planes > 4) ?
2 : + num_planes; + struct pixel_pixinfo key; + struct pixel_pixinfo *fmt_found = NULL; + + if (chroma_fmt_idc != PIXEL_FORMAT_MONO && + chroma_fmt_idc != PIXEL_FORMAT_411 && + chroma_fmt_idc != PIXEL_FORMAT_420 && + chroma_fmt_idc != PIXEL_FORMAT_422 && + chroma_fmt_idc != PIXEL_FORMAT_444) + return IMG_PIXFMT_UNDEFINED; + + /* valid bit depth 8, 9, 10, or 16/0 for 422 */ + if (bitdepth_y < 8 || bitdepth_y > 10) + return IMG_PIXFMT_UNDEFINED; + + /* valid bit depth 8, 9, 10, or 16/0 for 422 */ + if (bitdepth_c < 8 || bitdepth_c > 10) + return IMG_PIXFMT_UNDEFINED; + + key.pixfmt = IMG_PIXFMT_UNDEFINED; + key.chroma_fmt_idc = chroma_fmt_idc; + key.chroma_interleave = chroma_interleaved; + key.mem_pkg = mem_pkg; + key.bitdepth_y = bitdepth_y; + key.bitdepth_c = bitdepth_c; + key.num_planes = internal_num_planes; + + /* + * 9 and 10 bits formats are handled in the same way, and there is only + * one entry in the PixelFormat table + */ + if (key.bitdepth_y == 9) + key.bitdepth_y = 10; + + /* + * 9 and 10 bits formats are handled in the same way, and there is only + * one entry in the PixelFormat table + */ + if (key.bitdepth_c == 9) + key.bitdepth_c = 10; + + pixel_init_search(); + + /* do not search by format */ + fmt_found = pixel_search_fmt(&key, FALSE); + if (!fmt_found) + return IMG_PIXFMT_UNDEFINED; + + return fmt_found->pixfmt; +} + +static void pixel_get_internal_pixelinfo(struct pixel_pixinfo *pixinfo, + struct pixel_info *pix_bufinfo) +{ + if (pixinfo->bitdepth_y == 8 && pixinfo->bitdepth_c == 8) + pix_bufinfo->pixels_in_bop = 16; + else if (pixinfo->mem_pkg == PIXEL_BIT10_MP) + pix_bufinfo->pixels_in_bop = 12; + else + pix_bufinfo->pixels_in_bop = 8; + + if (pixinfo->bitdepth_y == 8) + pix_bufinfo->ybytes_in_bop = pix_bufinfo->pixels_in_bop; + else + pix_bufinfo->ybytes_in_bop = 16; + + if (pixinfo->chroma_fmt_idc == PIXEL_FORMAT_MONO) { + pix_bufinfo->uvbytes_in_bop = 0; + } else if (pixinfo->bitdepth_c == 8) { + pix_bufinfo->uvbytes_in_bop = pix_bufinfo->pixels_in_bop; + if (pixinfo->chroma_fmt_idc == PIXEL_FORMAT_422 && pixinfo->num_planes == 1) { + pix_bufinfo->uvbytes_in_bop = 0; + pix_bufinfo->pixels_in_bop = 8; + } + } else { + pix_bufinfo->uvbytes_in_bop = 16; + } + + if (pixinfo->chroma_fmt_idc == PIXEL_FORMAT_444) + pix_bufinfo->uvbytes_in_bop *= 2; + + if (pixinfo->chroma_interleave == PIXEL_INVALID_CI) { + pix_bufinfo->uvbytes_in_bop /= 2; + pix_bufinfo->vbytes_in_bop = pix_bufinfo->uvbytes_in_bop; + } else { + pix_bufinfo->vbytes_in_bop = 0; + } + + pix_bufinfo->alphabytes_in_bop = 0; + + if (pixinfo->num_planes == 1) + pix_bufinfo->is_planar = FALSE; + else + pix_bufinfo->is_planar = TRUE; + + if (pixinfo->chroma_fmt_idc == PIXEL_FORMAT_420) + pix_bufinfo->uv_height_halved = TRUE; + else + pix_bufinfo->uv_height_halved = FALSE; + + if (pixinfo->chroma_fmt_idc == PIXEL_FORMAT_444) + pix_bufinfo->uv_stride_ratio_times4 = 8; + else + pix_bufinfo->uv_stride_ratio_times4 = 4; + + if (pixinfo->chroma_interleave == PIXEL_INVALID_CI) + pix_bufinfo->uv_stride_ratio_times4 /= 2; + + pix_bufinfo->has_alpha = FALSE; +} + +static void pixel_yuv_get_descriptor_int(struct pixel_info *pixinfo, + struct img_pixfmt_desc *pix_desc) +{ + pix_desc->bop_denom = pixinfo->pixels_in_bop; + pix_desc->h_denom = (pixinfo->uv_stride_ratio_times4 == 2 || + !pixinfo->is_planar) ? 2 : 1; + pix_desc->v_denom = (pixinfo->uv_height_halved || !pixinfo->is_planar) + ? 
2 : 1; + + pix_desc->planes[0] = TRUE; + pix_desc->bop_numer[0] = pixinfo->ybytes_in_bop; + pix_desc->h_numer[0] = pix_desc->h_denom; + pix_desc->v_numer[0] = pix_desc->v_denom; + + pix_desc->planes[1] = pixinfo->is_planar; + pix_desc->bop_numer[1] = pixinfo->uvbytes_in_bop; + pix_desc->h_numer[1] = (pix_desc->h_denom * pixinfo->uv_stride_ratio_times4) / 4; + pix_desc->v_numer[1] = 1; + + pix_desc->planes[2] = (pixinfo->vbytes_in_bop > 0) ? TRUE : FALSE; + pix_desc->bop_numer[2] = pixinfo->vbytes_in_bop; + pix_desc->h_numer[2] = (pixinfo->vbytes_in_bop > 0) ? 1 : 0; + pix_desc->v_numer[2] = (pixinfo->vbytes_in_bop > 0) ? 1 : 0; + + pix_desc->planes[3] = pixinfo->has_alpha; + pix_desc->bop_numer[3] = pixinfo->alphabytes_in_bop; + pix_desc->h_numer[3] = pix_desc->h_denom; + pix_desc->v_numer[3] = pix_desc->v_denom; +} + +int pixel_yuv_get_desc(struct pixel_pixinfo *pix_info, struct img_pixfmt_desc *pix_desc) +{ + struct pixel_info int_pix_info; + + struct pixel_info *int_pix_info_old = NULL; + enum img_pixfmt pix_fmt = pixel_get_pixfmt(pix_info->chroma_fmt_idc, + pix_info->chroma_interleave, + pix_info->mem_pkg, + pix_info->bitdepth_y, + pix_info->bitdepth_c, + pix_info->num_planes); + + /* Validate the output from new function. */ + if (pix_fmt != IMG_PIXFMT_UNDEFINED) + int_pix_info_old = pixel_get_bufinfo_from_pixfmt(pix_fmt); + + pixel_get_internal_pixelinfo(pix_info, &int_pix_info); + + if (int_pix_info_old) { + VDEC_ASSERT(int_pix_info_old->has_alpha == + int_pix_info.has_alpha); + VDEC_ASSERT(int_pix_info_old->is_planar == + int_pix_info.is_planar); + VDEC_ASSERT(int_pix_info_old->uv_height_halved == + int_pix_info.uv_height_halved); + VDEC_ASSERT(int_pix_info_old->alphabytes_in_bop == + int_pix_info.alphabytes_in_bop); + VDEC_ASSERT(int_pix_info_old->pixels_in_bop == + int_pix_info.pixels_in_bop); + VDEC_ASSERT(int_pix_info_old->uvbytes_in_bop == + int_pix_info.uvbytes_in_bop); + VDEC_ASSERT(int_pix_info_old->uv_stride_ratio_times4 == + int_pix_info.uv_stride_ratio_times4); + VDEC_ASSERT(int_pix_info_old->vbytes_in_bop == + int_pix_info.vbytes_in_bop); + VDEC_ASSERT(int_pix_info_old->ybytes_in_bop == + int_pix_info.ybytes_in_bop); + } + + pixel_yuv_get_descriptor_int(&int_pix_info, pix_desc); + + return IMG_SUCCESS; +} + +struct pixel_pixinfo *pixel_get_pixinfo(const enum img_pixfmt pix_fmt) +{ + struct pixel_pixinfo key; + struct pixel_pixinfo *fmt_found = NULL; + + pixel_init_search(); + pixel_pixinfo_defaults(&key); + key.pixfmt = pix_fmt; + + fmt_found = pixel_search_fmt(&key, TRUE); + if (!fmt_found) + return def_fmt; + return fmt_found; +} + +int pixel_get_fmt_desc(enum img_pixfmt pix_fmt, struct img_pixfmt_desc *pix_desc) +{ + if (pix_fmt >= IMG_PIXFMT_ARBPLANAR8 && pix_fmt <= IMG_PIXFMT_ARBPLANAR8_LAST) { + unsigned int i; + unsigned short spec; + + pix_desc->bop_denom = 1; + pix_desc->h_denom = 1; + pix_desc->v_denom = 1; + + spec = (pix_fmt - IMG_PIXFMT_ARBPLANAR8) & 0xffff; + for (i = 0; i < FACT_SPEC_FORMAT_NUM_PLANES; i++) { + unsigned char code = (spec >> FACT_SPEC_FORMAT_PLANE_CODE_BITS * + (FACT_SPEC_FORMAT_NUM_PLANES - 1 - i)) & 0xf; + pix_desc->bop_numer[i] = 1; + pix_desc->h_numer[i] = ((code >> 2) & FACT_SPEC_FORMAT_PLANE_CODE_MASK) + + FACT_SPEC_FORMAT_MIN_FACT_VAL; + pix_desc->v_numer[i] = (code & FACT_SPEC_FORMAT_PLANE_CODE_MASK) + + FACT_SPEC_FORMAT_MIN_FACT_VAL; + if (i == 0 || code != FACT_SPEC_FORMAT_PLANE_UNUSED) { + pix_desc->planes[i] = TRUE; + + pix_desc->h_denom = + pix_desc->h_denom > pix_desc->h_numer[i] ? 
+ pix_desc->h_denom : pix_desc->h_numer[i]; + + pix_desc->v_denom = + pix_desc->v_denom > pix_desc->v_numer[i] ? + pix_desc->v_denom : pix_desc->v_numer[i]; + } else { + pix_desc->planes[i] = FALSE; + } + } + } else { + struct pixel_info *info = + pixel_get_bufinfo_from_pixfmt(pix_fmt); + if (!info) { + VDEC_ASSERT(0); + return -EINVAL; + } + + pixel_yuv_get_descriptor_int(info, pix_desc); + } + + return IMG_SUCCESS; +} + +int pixel_gen_pixfmt(enum img_pixfmt *pix_fmt, struct img_pixfmt_desc *pix_desc) +{ + unsigned short spec = 0, i; + unsigned char code; + + for (i = 0; i < FACT_SPEC_FORMAT_NUM_PLANES; i++) { + if (pix_desc->planes[i] != 1) { + code = FACT_SPEC_FORMAT_PLANE_UNUSED; + } else { + code = (((pix_desc->h_numer[i] - FACT_SPEC_FORMAT_MIN_FACT_VAL) & + FACT_SPEC_FORMAT_PLANE_CODE_MASK) << 2) | + ((pix_desc->v_numer[i] - FACT_SPEC_FORMAT_MIN_FACT_VAL) & + FACT_SPEC_FORMAT_PLANE_CODE_MASK); + } + spec |= (code << FACT_SPEC_FORMAT_PLANE_CODE_BITS * + (FACT_SPEC_FORMAT_NUM_PLANES - 1 - i)); + } + + *pix_fmt = (enum img_pixfmt)(IMG_PIXFMT_ARBPLANAR8 | spec); + + return 0; +} diff --git a/drivers/staging/media/vxd/decoder/pixel_api.h b/drivers/staging/media/vxd/decoder/pixel_api.h new file mode 100644 index 000000000000..3648c1b32ea7 --- /dev/null +++ b/drivers/staging/media/vxd/decoder/pixel_api.h @@ -0,0 +1,152 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Pixel processing functions header + * + * Copyright (c) Imagination Technologies Ltd. + * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Sunita Nadampalli + * + * Re-written for upstream + * Sidraya Jayagond + */ + +#ifndef __PIXEL_API_H__ +#define __PIXEL_API_H__ + +#include + +#include "img_errors.h" +#include "img_pixfmts.h" + +#define PIXEL_MULTICHROME TRUE +#define PIXEL_MONOCHROME FALSE +#define IMG_MAX_NUM_PLANES 4 +#define PIXEL_INVALID_BDC 8 + +extern unsigned char pix_fmt_idc_names[6][16]; + +struct img_pixfmt_desc { + unsigned char planes[IMG_MAX_NUM_PLANES]; + unsigned int bop_denom; + unsigned int bop_numer[IMG_MAX_NUM_PLANES]; + unsigned int h_denom; + unsigned int v_denom; + unsigned int h_numer[IMG_MAX_NUM_PLANES]; + unsigned int v_numer[IMG_MAX_NUM_PLANES]; +}; + +/* + * @brief This type defines memory chroma interleaved order + */ +enum pixel_chroma_interleaved { + PIXEL_INVALID_CI = 0, + PIXEL_UV_ORDER = 1, + PIXEL_VU_ORDER = 2, + PIXEL_YAYB_ORDER = 4, + PIXEL_AYBY_ORDER = 8, + PIXEL_ORDER_FORCE32BITS = 0x7FFFFFFFU +}; + +/* + * @brief This macro translates enum pixel_chroma_interleaved values into + * value that can be used to write HW registers directly. + */ +#define PIXEL_GET_HW_CHROMA_INTERLEAVED(value) \ + ((value) & PIXEL_VU_ORDER ? TRUE : FALSE) + +/* + * @brief This type defines memory packing types + */ +enum pixel_mem_packing { + PIXEL_BIT8_MP = 0, + PIXEL_BIT10_MSB_MP = 1, + PIXEL_BIT10_LSB_MP = 2, + PIXEL_BIT10_MP = 3, + PIXEL_DEFAULT_MP = 0xff, + PIXEL_DEFAULT_FORCE32BITS = 0x7FFFFFFFU +}; + +static inline unsigned char pixel_get_hw_memory_packing(enum pixel_mem_packing value) +{ + return value == PIXEL_BIT8_MP ? FALSE : + value == PIXEL_BIT10_MSB_MP ? FALSE : + value == PIXEL_BIT10_LSB_MP ? FALSE : + value == PIXEL_BIT10_MP ? 
TRUE : FALSE; +} + +/* + * @brief This type defines chroma formats + */ +enum pixel_fmt_idc { + PIXEL_FORMAT_MONO = 0, + PIXEL_FORMAT_411 = 1, + PIXEL_FORMAT_420 = 2, + PIXEL_FORMAT_422 = 3, + PIXEL_FORMAT_444 = 4, + PIXEL_FORMAT_INVALID = 0xFF, + PIXEL_FORMAT_FORCE32BITS = 0x7FFFFFFFU +}; + +static inline int pixel_get_hw_chroma_format_idc(enum pixel_fmt_idc value) +{ + return value == PIXEL_FORMAT_MONO ? 0 : + value == PIXEL_FORMAT_420 ? 1 : + value == PIXEL_FORMAT_422 ? 2 : + value == PIXEL_FORMAT_444 ? 3 : + PIXEL_FORMAT_INVALID; +} + +/* + * @brief This structure contains information about the pixel formats + */ +struct pixel_pixinfo { + enum img_pixfmt pixfmt; + enum pixel_chroma_interleaved chroma_interleave; + unsigned char chroma_fmt; + enum pixel_mem_packing mem_pkg; + enum pixel_fmt_idc chroma_fmt_idc; + unsigned int bitdepth_y; + unsigned int bitdepth_c; + unsigned int num_planes; +}; + +/* + * @brief This type defines the image in memory + */ +struct pixel_info { + unsigned int pixels_in_bop; + unsigned int ybytes_in_bop; + unsigned int uvbytes_in_bop; + unsigned int vbytes_in_bop; + unsigned int alphabytes_in_bop; + unsigned char is_planar; + unsigned char uv_height_halved; + unsigned int uv_stride_ratio_times4; + unsigned char has_alpha; +}; + +struct pixel_pixinfo_table { + enum img_pixfmt pix_color_fmt; + struct pixel_info info; +}; + +struct pixel_pixinfo *pixel_get_pixinfo(const enum img_pixfmt pixfmt); + +enum img_pixfmt pixel_get_pixfmt(enum pixel_fmt_idc chroma_fmt_idc, + enum pixel_chroma_interleaved + chroma_interleaved, + enum pixel_mem_packing mem_packing, + unsigned int bitdepth_y, unsigned int bitdepth_c, + unsigned int num_planes); + +int pixel_yuv_get_desc(struct pixel_pixinfo *pix_info, + struct img_pixfmt_desc *desc); + +int pixel_get_fmt_desc(enum img_pixfmt pixfmt, + struct img_pixfmt_desc *fmt_desc); + +int pixel_gen_pixfmt(enum img_pixfmt *pix_fmt, struct img_pixfmt_desc *pix_desc); + +#endif From patchwork Wed Aug 18 14:10:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sidraya Jayagond X-Patchwork-Id: 499236 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AC8B5C432BE for ; Wed, 18 Aug 2021 14:14:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 91F5B610CD for ; Wed, 18 Aug 2021 14:14:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239491AbhHROPR (ORCPT ); Wed, 18 Aug 2021 10:15:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46508 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239816AbhHROOi (ORCPT ); Wed, 18 Aug 2021 10:14:38 -0400 Received: from mail-pf1-x42a.google.com (mail-pf1-x42a.google.com [IPv6:2607:f8b0:4864:20::42a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DDECEC061292 for ; Wed, 18 Aug 2021 07:13:00 -0700 (PDT) Received: by mail-pf1-x42a.google.com with SMTP id t13so2257187pfl.6 for ; Wed, 18 Aug 2021 
07:13:00 -0700 (PDT)
From: sidraya.bj@pathpartnertech.com
To: gregkh@linuxfoundation.org, linux-staging@lists.linux.dev, linux-kernel@vger.kernel.org
Cc: prashanth.ka@pathpartnertech.com, praneeth@ti.com, mchehab@kernel.org, linux-media@vger.kernel.org, praveen.ap@pathpartnertech.com, Sidraya
Subject: [PATCH 20/30] v4l:vxd-dec:Decoder resource component
Date: Wed, 18 Aug 2021 19:40:27 +0530
Message-Id: <20210818141037.19990-21-sidraya.bj@pathpartnertech.com>
In-Reply-To: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
References: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
Precedence: bulk
List-ID: X-Mailing-List: linux-media@vger.kernel.org

From: Sidraya

The decoder resource component is responsible for allocating all the
internal auxiliary resources required by the decoder and for maintaining
their life cycle.
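As a reader aid (not part of the patch): how the four entry points this
patch adds fit together. res_lifecycle_sketch() is hypothetical glue code;
mmu_handle, core_props, mem_heap_id and dec_pict are placeholders supplied
by the surrounding decoder, and the slot count of 2 is arbitrary.

	/* Hypothetical call sequence for the resource component below. */
	static int res_lifecycle_sketch(void *mmu_handle,
					struct vxd_coreprops *core_props,
					unsigned int mem_heap_id,
					struct dec_decpict *dec_pict)
	{
		void *res_ctx;
		int ret;

		/* Stream setup: pools sized for two in-flight pictures. */
		ret = dec_res_create(mmu_handle, core_props, 2, mem_heap_id,
				     &res_ctx);
		if (ret != IMG_SUCCESS)
			return ret;

		/* Per picture: borrow transaction, header and batch buffers. */
		ret = dec_res_picture_attach(&res_ctx, VDEC_STD_H264, dec_pict);
		if (ret == IMG_SUCCESS) {
			/* ... picture is submitted to the hardware here ... */

			/* Completion: return the borrowed buffers to the pools. */
			dec_res_picture_detach(&res_ctx, dec_pict);
		}

		/* Stream teardown. */
		dec_res_destroy(mmu_handle, res_ctx);
		return ret;
	}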
Signed-off-by: Sunita Nadampalli Signed-off-by: Sidraya --- MAINTAINERS | 2 + .../staging/media/vxd/decoder/dec_resources.c | 554 ++++++++++++++++++ .../staging/media/vxd/decoder/dec_resources.h | 46 ++ 3 files changed, 602 insertions(+) create mode 100644 drivers/staging/media/vxd/decoder/dec_resources.c create mode 100644 drivers/staging/media/vxd/decoder/dec_resources.h diff --git a/MAINTAINERS b/MAINTAINERS index c7edc60f4d5b..6dadec058ab3 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -19569,6 +19569,8 @@ F: drivers/staging/media/vxd/common/work_queue.h F: drivers/staging/media/vxd/decoder/bspp.c F: drivers/staging/media/vxd/decoder/bspp.h F: drivers/staging/media/vxd/decoder/bspp_int.h +F: drivers/staging/media/vxd/decoder/dec_resources.c +F: drivers/staging/media/vxd/decoder/dec_resources.h F: drivers/staging/media/vxd/decoder/fw_interface.h F: drivers/staging/media/vxd/decoder/h264_idx.h F: drivers/staging/media/vxd/decoder/h264_secure_parser.c diff --git a/drivers/staging/media/vxd/decoder/dec_resources.c b/drivers/staging/media/vxd/decoder/dec_resources.c new file mode 100644 index 000000000000..e993a45eb540 --- /dev/null +++ b/drivers/staging/media/vxd/decoder/dec_resources.c @@ -0,0 +1,554 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * VXD Decoder resource allocation and tracking function implementations + * + * Copyright (c) Imagination Technologies Ltd. + * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Sunita Nadampalli + * + * Re-written for upstream + * Sidraya Jayagond + * Prashanth Kumar Amai + */ + +#include +#include +#include +#include + +#include "decoder.h" +#include "dec_resources.h" +#include "hw_control.h" +#include "h264fw_data.h" +#include "h264_idx.h" +#include "h264_vlc.h" +#include "img_mem.h" +#include "pool_api.h" +#include "vdecdd_utils.h" +#include "vdec_mmu_wrapper.h" +#include "vid_buf.h" +#include "vxd_mmu_defs.h" + +#define DECODER_END_BYTES_SIZE 40 + +#define BATCH_MSG_BUFFER_SIZE (8 * 4096) +#define INTRA_BUF_SIZE (1024 * 32) +#define AUX_LINE_BUFFER_SIZE (512 * 1024) + +static void decres_pack_vlc_tables(unsigned short *packed, + unsigned short *unpacked, + unsigned short size) +{ + unsigned short i, j; + + for (i = 0; i < size; i++) { + j = i * 3; + /* + * opcode 14:12 + * width 11:9 + * symbol 8:0 + */ + packed[i] = 0 | ((unpacked[j]) << 12) | + ((unpacked[j + 1]) << 9) | (unpacked[j + 2]); + } +} + +struct dec_vlctable { + void *data; + unsigned int num_entries; + void *index_table; + unsigned int num_tables; +}; + +/* + * Union with sizes of firmware parser header structure sizes. Dec_resources + * uses the largest to allocate the header buffer. + */ +union decres_fw_hdrs { + struct h264fw_header_data h264_header; +}; + +/* + * This array contains the size of each resource allocation. + * @brief Resource Allocation Sizes + * NOTE: This should be kept in step with #DECODER_eResType. 
+ */ +static const unsigned int res_size[DECODER_RESTYPE_MAX] = { + sizeof(struct vdecfw_transaction), + sizeof(union decres_fw_hdrs), + BATCH_MSG_BUFFER_SIZE, +#ifdef HAS_HEVC + MEM_TO_REG_BUF_SIZE + SLICE_PARAMS_BUF_SIZE + ABOVE_PARAMS_BUF_SIZE, +#endif +}; + +static const unsigned char start_code[] = { + 0x00, 0x00, 0x01, 0x00, +}; + +static void decres_get_vlc_data(struct dec_vlctable *vlc_table, + enum vdec_vid_std vid_std) +{ + switch (vid_std) { + case VDEC_STD_H264: + vlc_table->data = h264_vlc_table_data; + vlc_table->num_entries = h264_vlc_table_size; + vlc_table->index_table = h264_vlc_index_data; + vlc_table->num_tables = h264_vlc_index_size; + break; + + default: + memset(vlc_table, 0x0, sizeof(*vlc_table)); + break; + } +} + +static void decres_fnbuf_info_destructor(void *param, void *cb_handle) +{ + struct vidio_ddbufinfo *dd_bufinfo = (struct vidio_ddbufinfo *)param; + int ret; + void *mmu_handle = cb_handle; + + VDEC_ASSERT(dd_bufinfo); + + ret = mmu_free_mem(mmu_handle, dd_bufinfo); + VDEC_ASSERT(ret == IMG_SUCCESS); + + kfree(dd_bufinfo); + dd_bufinfo = NULL; +} + +int dec_res_picture_detach(void **res_ctx, struct dec_decpict *dec_pict) +{ + struct dec_res_ctx *local_res_ctx; + + VDEC_ASSERT(res_ctx); + VDEC_ASSERT(res_ctx && *res_ctx); + VDEC_ASSERT(dec_pict); + VDEC_ASSERT(dec_pict && dec_pict->transaction_info); + + if (!res_ctx || !(*res_ctx) || !dec_pict || + !dec_pict->transaction_info) { + pr_err("Invalid parameters\n"); + return IMG_ERROR_INVALID_PARAMETERS; + } + + local_res_ctx = (struct dec_res_ctx *)*res_ctx; + + /* return transaction buffer */ + lst_add(&local_res_ctx->pool_data_list[DECODER_RESTYPE_TRANSACTION], + dec_pict->transaction_info); + pool_resfree(dec_pict->transaction_info->res); + + /* return picture header information buffer */ + lst_add(&local_res_ctx->pool_data_list[DECODER_RESTYPE_HDR], + dec_pict->hdr_info); + pool_resfree(dec_pict->hdr_info->res); + + /* return batch message buffer */ + lst_add(&local_res_ctx->pool_data_list[DECODER_RESTYPE_BATCH_MSG], + dec_pict->batch_msginfo); + pool_resfree(dec_pict->batch_msginfo->res); + +#ifdef HAS_HEVC + if (dec_pict->pvdec_info) { + lst_add(&local_res_ctx->pool_data_list[DECODER_RESTYPE_PVDEC_BUF], + dec_pict->pvdec_info); + pool_resfree(dec_pict->pvdec_info->res); + } +#endif + + return IMG_SUCCESS; +} + +static int decres_get_resource(struct dec_res_ctx *res_ctx, + enum dec_res_type res_type, + struct res_resinfo **res_info, + unsigned char fill_zeros) +{ + struct res_resinfo *local_res_info = NULL; + unsigned int ret = IMG_SUCCESS; + + VDEC_ASSERT(res_ctx); + VDEC_ASSERT(res_info); + + local_res_info = lst_removehead(&res_ctx->pool_data_list[res_type]); + VDEC_ASSERT(local_res_info); + if (local_res_info) { + VDEC_ASSERT(local_res_info->ddbuf_info); + if (local_res_info->ddbuf_info) { + ret = pool_resalloc(res_ctx->res_pool[res_type], local_res_info->res); + VDEC_ASSERT(ret == IMG_SUCCESS); + if (ret != IMG_SUCCESS) { + ret = IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE; + return ret; + } + + if (fill_zeros) + memset(local_res_info->ddbuf_info->cpu_virt, 0, + local_res_info->ddbuf_info->buf_size); + + *res_info = local_res_info; + } else { + ret = IMG_ERROR_FATAL; + return ret; + } + } else { + ret = IMG_ERROR_COULD_NOT_OBTAIN_RESOURCE; + return ret; + } + + return ret; +} + +int dec_res_picture_attach(void **res_ctx, enum vdec_vid_std vid_std, + struct dec_decpict *dec_pict) +{ + struct dec_res_ctx *local_res_ctx; + int ret; + + VDEC_ASSERT(res_ctx); + VDEC_ASSERT(res_ctx && *res_ctx); + 
VDEC_ASSERT(dec_pict); + if (!res_ctx || !(*res_ctx) || !dec_pict) { + pr_err("Invalid parameters"); + return IMG_ERROR_INVALID_PARAMETERS; + } + + local_res_ctx = (struct dec_res_ctx *)*res_ctx; + + /* Obtain transaction buffer. */ + ret = decres_get_resource(local_res_ctx, DECODER_RESTYPE_TRANSACTION, + &dec_pict->transaction_info, TRUE); + + VDEC_ASSERT(ret == IMG_SUCCESS); + if (ret != IMG_SUCCESS) + return ret; + + /* Obtain picture header information buffer */ + ret = decres_get_resource(local_res_ctx, DECODER_RESTYPE_HDR, + &dec_pict->hdr_info, TRUE); + VDEC_ASSERT(ret == IMG_SUCCESS); + if (ret != IMG_SUCCESS) + return ret; + +#ifdef HAS_HEVC + /* Obtain HEVC buffer */ + if (vid_std == VDEC_STD_HEVC) { + ret = decres_get_resource(local_res_ctx, DECODER_RESTYPE_PVDEC_BUF, + &dec_pict->pvdec_info, TRUE); + + VDEC_ASSERT(ret == IMG_SUCCESS); + if (ret != IMG_SUCCESS) + return ret; + } +#endif + /* Obtain picture batch message buffer */ + ret = decres_get_resource(local_res_ctx, DECODER_RESTYPE_BATCH_MSG, + &dec_pict->batch_msginfo, TRUE); + VDEC_ASSERT(ret == IMG_SUCCESS); + if (ret != IMG_SUCCESS) + return ret; + + dec_pict->intra_bufinfo = &local_res_ctx->intra_bufinfo; + dec_pict->auxline_bufinfo = &local_res_ctx->auxline_bufinfo; + dec_pict->vlc_tables_bufinfo = + &local_res_ctx->vlc_tables_bufinfo[vid_std]; + dec_pict->vlc_idx_tables_bufinfo = + &local_res_ctx->vlc_idxtables_bufinfo[vid_std]; + dec_pict->start_code_bufinfo = &local_res_ctx->start_code_bufinfo; + + return IMG_SUCCESS; +} + +int dec_res_create(void *mmu_handle, struct vxd_coreprops *core_props, + unsigned int num_dec_slots, + unsigned int mem_heap_id, void **resources) +{ + struct dec_res_ctx *local_res_ctx; + int ret; + unsigned int i = 0; + struct dec_vlctable vlc_table; + enum sys_emem_attrib mem_attrib; + + VDEC_ASSERT(core_props); + VDEC_ASSERT(resources); + if (!core_props || !resources) { + pr_err("Invalid parameters"); + return IMG_ERROR_INVALID_PARAMETERS; + } + + mem_attrib = (enum sys_emem_attrib)(SYS_MEMATTRIB_UNCACHED | SYS_MEMATTRIB_WRITECOMBINE); + mem_attrib |= (enum sys_emem_attrib)SYS_MEMATTRIB_INTERNAL; + + local_res_ctx = kzalloc(sizeof(*local_res_ctx), GFP_KERNEL); + VDEC_ASSERT(local_res_ctx); + if (!local_res_ctx) + return IMG_ERROR_OUT_OF_MEMORY; + + /* Allocate Intra buffer. */ +#ifdef DEBUG_DECODER_DRIVER + pr_info("%s:%d call MMU_StreamMalloc", __func__, __LINE__); +#endif + + ret = mmu_stream_alloc(mmu_handle, MMU_HEAP_STREAM_BUFFERS, mem_heap_id, + mem_attrib, + core_props->num_pixel_pipes * + INTRA_BUF_SIZE * 3, + DEV_MMU_PAGE_ALIGNMENT, + &local_res_ctx->intra_bufinfo); + VDEC_ASSERT(ret == IMG_SUCCESS); + if (ret != IMG_SUCCESS) + goto error; + + /* Allocate aux line buffer. */ +#ifdef DEBUG_DECODER_DRIVER + pr_info("%s:%d call MMU_StreamMalloc", __func__, __LINE__); +#endif + ret = mmu_stream_alloc(mmu_handle, MMU_HEAP_STREAM_BUFFERS, mem_heap_id, + mem_attrib, + AUX_LINE_BUFFER_SIZE * 3 * + core_props->num_pixel_pipes, + DEV_MMU_PAGE_ALIGNMENT, + &local_res_ctx->auxline_bufinfo); + VDEC_ASSERT(ret == IMG_SUCCESS); + if (ret != IMG_SUCCESS) + goto error; + + /* Allocate standard-specific buffers. */ + for (i = VDEC_STD_UNDEFINED + 1; i < VDEC_STD_MAX; i++) { + decres_get_vlc_data(&vlc_table, (enum vdec_vid_std)i); + + if (vlc_table.num_tables > 0) { + /* + * Size of VLC IDX table in bytes. Has to be aligned + * to 4, so transfer to MTX succeeds. 
+ * (VLC IDX is copied to local RAM of MTX) + */ + unsigned int vlc_idxtable_sz = + ALIGN((sizeof(unsigned short) * vlc_table.num_tables * 3), 4); + +#ifdef DEBUG_DECODER_DRIVER + pr_info(" %s:%d calling MMU_StreamMalloc", __func__, __LINE__); +#endif + + ret = mmu_stream_alloc(mmu_handle, + MMU_HEAP_STREAM_BUFFERS, + mem_heap_id, (enum sys_emem_attrib)(mem_attrib | + SYS_MEMATTRIB_CORE_READ_ONLY | + SYS_MEMATTRIB_CPU_WRITE), + sizeof(unsigned short) * vlc_table.num_entries, + DEV_MMU_PAGE_ALIGNMENT, + &local_res_ctx->vlc_tables_bufinfo[i]); + VDEC_ASSERT(ret == IMG_SUCCESS); + if (ret != IMG_SUCCESS) + goto error; + + if (vlc_table.data) + decres_pack_vlc_tables + (local_res_ctx->vlc_tables_bufinfo[i].cpu_virt, + vlc_table.data, + vlc_table.num_entries); + + /* VLC index table */ +#ifdef DEBUG_DECODER_DRIVER + pr_info("%s:%d calling MMU_StreamMalloc", + __func__, __LINE__); +#endif + ret = mmu_stream_alloc(mmu_handle, + MMU_HEAP_STREAM_BUFFERS, + mem_heap_id, (enum sys_emem_attrib)(mem_attrib | + SYS_MEMATTRIB_CORE_READ_ONLY | + SYS_MEMATTRIB_CPU_WRITE), + vlc_idxtable_sz, + DEV_MMU_PAGE_ALIGNMENT, + &local_res_ctx->vlc_idxtables_bufinfo[i]); + VDEC_ASSERT(ret == IMG_SUCCESS); + if (ret != IMG_SUCCESS) + goto error; + + if (vlc_table.index_table) + memcpy(local_res_ctx->vlc_idxtables_bufinfo[i].cpu_virt, + vlc_table.index_table, + local_res_ctx->vlc_idxtables_bufinfo[i].buf_size); + } + } + + /* Start code */ +#ifdef DEBUG_DECODER_DRIVER + pr_info("%s:%d calling MMU_StreamMalloc", __func__, __LINE__); +#endif + ret = mmu_stream_alloc(mmu_handle, MMU_HEAP_STREAM_BUFFERS, mem_heap_id, + (enum sys_emem_attrib)(mem_attrib | + SYS_MEMATTRIB_CORE_READ_ONLY | + SYS_MEMATTRIB_CPU_WRITE), + sizeof(start_code), + DEV_MMU_PAGE_ALIGNMENT, + &local_res_ctx->start_code_bufinfo); + VDEC_ASSERT(ret == IMG_SUCCESS); + if (ret != IMG_SUCCESS) + goto error; + + memcpy(local_res_ctx->start_code_bufinfo.cpu_virt, start_code, sizeof(start_code)); + + for (i = 0; i < DECODER_RESTYPE_MAX; i++) { + unsigned int j; + + ret = pool_api_create(&local_res_ctx->res_pool[i]); + VDEC_ASSERT(ret == IMG_SUCCESS); + if (ret != IMG_SUCCESS) + goto error; + + lst_init(&local_res_ctx->pool_data_list[i]); + + for (j = 0; j < num_dec_slots; j++) { + struct res_resinfo *local_res_info; + + local_res_info = kzalloc(sizeof(*local_res_info), GFP_KERNEL); + + VDEC_ASSERT(local_res_info); + if (!local_res_info) { + pr_err("Failed to allocate memory\n"); + ret = IMG_ERROR_OUT_OF_MEMORY; + goto error_local_res_info_alloc; + } + + local_res_info->ddbuf_info = kzalloc(sizeof(*local_res_info->ddbuf_info), + GFP_KERNEL); + VDEC_ASSERT(local_res_info->ddbuf_info); + if (!local_res_info->ddbuf_info) { + pr_err("Failed to allocate memory for resource buffer information structure"); + ret = IMG_ERROR_OUT_OF_MEMORY; + goto error_local_dd_buf_alloc; + } + +#ifdef DEBUG_DECODER_DRIVER + pr_info("%s:%d calling MMU_StreamMalloc", __func__, __LINE__); +#endif + ret = mmu_stream_alloc(mmu_handle, MMU_HEAP_STREAM_BUFFERS, + mem_heap_id, (enum sys_emem_attrib)(mem_attrib | + SYS_MEMATTRIB_CPU_READ | + SYS_MEMATTRIB_CPU_WRITE), + res_size[i], + DEV_MMU_PAGE_ALIGNMENT, + local_res_info->ddbuf_info); + VDEC_ASSERT(ret == IMG_SUCCESS); + if (ret != IMG_SUCCESS) + goto error_local_res_alloc; + + /* Register with the buffer pool */ + ret = pool_resreg(local_res_ctx->res_pool[i], + decres_fnbuf_info_destructor, + local_res_info->ddbuf_info, + sizeof(*local_res_info->ddbuf_info), + FALSE, NULL, + &local_res_info->res, mmu_handle); + VDEC_ASSERT(ret == 
IMG_SUCCESS); + if (ret != IMG_SUCCESS) + goto error_local_res_register; + + lst_add(&local_res_ctx->pool_data_list[i], + local_res_info); + continue; + +/* Roll back in case of local errors. */ +error_local_res_register: mmu_free_mem(mmu_handle, local_res_info->ddbuf_info); + +error_local_res_alloc: kfree(local_res_info->ddbuf_info); + +error_local_dd_buf_alloc: kfree(local_res_info); + +error_local_res_info_alloc: goto error; + } + } + + *resources = (void *)local_res_ctx; + + return IMG_SUCCESS; + +/* Roll back in case of errors. */ +error: dec_res_destroy(mmu_handle, (void *)local_res_ctx); + + return ret; +} + +/* + *@Function RESOURCES_Destroy + * + */ +int dec_res_destroy(void *mmudev_handle, void *res_ctx) +{ + int ret = IMG_SUCCESS; + int ret1 = IMG_SUCCESS; + unsigned int i = 0; + struct res_resinfo *local_res_info; + struct res_resinfo *next_res_info; + + struct dec_res_ctx *local_res_ctx = (struct dec_res_ctx *)res_ctx; + + if (!local_res_ctx) { + pr_err("Invalid parameters"); + return IMG_ERROR_INVALID_PARAMETERS; + } + + if (local_res_ctx->intra_bufinfo.hndl_memory) { + ret1 = mmu_free_mem(mmudev_handle, &local_res_ctx->intra_bufinfo); + VDEC_ASSERT(ret1 == IMG_SUCCESS); + if (ret1 != IMG_SUCCESS) + ret = ret1; + } + + if (local_res_ctx->auxline_bufinfo.hndl_memory) { + ret1 = mmu_free_mem(mmudev_handle, &local_res_ctx->auxline_bufinfo); + VDEC_ASSERT(ret1 == IMG_SUCCESS); + if (ret1 != IMG_SUCCESS) + ret = ret1; + } + + for (i = 0; i < VDEC_STD_MAX; i++) { + if (local_res_ctx->vlc_tables_bufinfo[i].hndl_memory) { + ret1 = mmu_free_mem(mmudev_handle, &local_res_ctx->vlc_tables_bufinfo[i]); + VDEC_ASSERT(ret1 == IMG_SUCCESS); + if (ret1 != IMG_SUCCESS) + ret = ret1; + } + + if (local_res_ctx->vlc_idxtables_bufinfo[i].hndl_memory) { + ret1 = mmu_free_mem(mmudev_handle, + &local_res_ctx->vlc_idxtables_bufinfo[i]); + VDEC_ASSERT(ret1 == IMG_SUCCESS); + if (ret1 != IMG_SUCCESS) + ret = ret1; + } + } + + if (local_res_ctx->start_code_bufinfo.hndl_memory) { + ret1 = mmu_free_mem(mmudev_handle, &local_res_ctx->start_code_bufinfo); + VDEC_ASSERT(ret1 == IMG_SUCCESS); + if (ret1 != IMG_SUCCESS) + ret = ret1; + } + + for (i = 0; i < DECODER_RESTYPE_MAX; i++) { + if (local_res_ctx->res_pool[i]) { + local_res_info = + lst_first(&local_res_ctx->pool_data_list[i]); + while (local_res_info) { + next_res_info = lst_next(local_res_info); + lst_remove(&local_res_ctx->pool_data_list[i], local_res_info); + ret1 = pool_resdestroy(local_res_info->res, TRUE); + VDEC_ASSERT(ret1 == IMG_SUCCESS); + if (ret1 != IMG_SUCCESS) + ret = ret1; + kfree(local_res_info); + local_res_info = next_res_info; + } + pool_destroy(local_res_ctx->res_pool[i]); + } + } + + kfree(local_res_ctx); + return ret; +} diff --git a/drivers/staging/media/vxd/decoder/dec_resources.h b/drivers/staging/media/vxd/decoder/dec_resources.h new file mode 100644 index 000000000000..d068ca57d147 --- /dev/null +++ b/drivers/staging/media/vxd/decoder/dec_resources.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * VXD Decoder resource allocation and destroy Interface header + * + * Copyright (c) Imagination Technologies Ltd. + * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Sunita Nadampalli + * + * Re-written for upstream + * Sidraya Jayagond + */ + +#ifndef _DEC_RESOURCES_H_ +#define _DEC_RESOURCES_H_ + +#include "decoder.h" +#include "lst.h" + +/* + * This structure contains the core resources. 
+ * @brief Decoder Core Resources
+ */
+struct dec_res_ctx {
+	struct vidio_ddbufinfo intra_bufinfo;
+	struct vidio_ddbufinfo auxline_bufinfo;
+	struct vidio_ddbufinfo start_code_bufinfo;
+	struct vidio_ddbufinfo vlc_tables_bufinfo[VDEC_STD_MAX];
+	struct vidio_ddbufinfo vlc_idxtables_bufinfo[VDEC_STD_MAX];
+	void *res_pool[DECODER_RESTYPE_MAX];
+	struct lst_t pool_data_list[DECODER_RESTYPE_MAX];
+};
+
+int dec_res_picture_detach(void **res_ctx, struct dec_decpict *dec_pict);
+
+int dec_res_picture_attach(void **res_ctx, enum vdec_vid_std vid_std,
+			   struct dec_decpict *dec_pict);
+
+int dec_res_create(void *mmudev_handle,
+		   struct vxd_coreprops *core_props, unsigned int num_dec_slots,
+		   unsigned int mem_heap_id, void **resources);
+
+int dec_res_destroy(void *mmudev_handle, void *res_ctx);
+
+#endif

From patchwork Wed Aug 18 14:10:29 2021
X-Patchwork-Submitter: Sidraya Jayagond
X-Patchwork-Id: 499235
From: sidraya.bj@pathpartnertech.com
To: gregkh@linuxfoundation.org, linux-staging@lists.linux.dev, linux-kernel@vger.kernel.org
Cc: prashanth.ka@pathpartnertech.com, praneeth@ti.com, mchehab@kernel.org, linux-media@vger.kernel.org, praveen.ap@pathpartnertech.com, Sidraya
Subject: [PATCH 22/30] v4l:vxd-dec:vdecdd headers added
Date: Wed, 18 Aug 2021 19:40:29 +0530
Message-Id: <20210818141037.19990-23-sidraya.bj@pathpartnertech.com>
In-Reply-To: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
References: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
Precedence: bulk
List-ID: X-Mailing-List: linux-media@vger.kernel.org

From: Sidraya

The header file defines all the high-level data structures required for the
decoder, hwcontrol, core and v4l2 decoder components.

Signed-off-by: Sunita Nadampalli
Signed-off-by: Sidraya
---
 MAINTAINERS                                   |   1 +
 .../staging/media/vxd/decoder/vdecdd_defs.h   | 446 ++++++++++++++++++
 2 files changed, 447 insertions(+)
 create mode 100644 drivers/staging/media/vxd/decoder/vdecdd_defs.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 41716f2916d1..fa5c69d71c3e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19618,6 +19618,7 @@ F: drivers/staging/media/vxd/decoder/translation_api.h
 F:	drivers/staging/media/vxd/decoder/vdec_defs.h
 F:	drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.c
 F:	drivers/staging/media/vxd/decoder/vdec_mmu_wrapper.h
+F:	drivers/staging/media/vxd/decoder/vdecdd_defs.h
 F:	drivers/staging/media/vxd/decoder/vdecdd_utils.c
 F:	drivers/staging/media/vxd/decoder/vdecdd_utils.h
 F:	drivers/staging/media/vxd/decoder/vdecdd_utils_buf.c
diff --git a/drivers/staging/media/vxd/decoder/vdecdd_defs.h b/drivers/staging/media/vxd/decoder/vdecdd_defs.h
new file mode 100644
index 000000000000..dc4c2695c390
--- /dev/null
+++ b/drivers/staging/media/vxd/decoder/vdecdd_defs.h
@@ -0,0 +1,446 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * VXD Decoder device driver header definitions
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Sunita Nadampalli
+ *
+ * Re-written for upstream
+ *	Sidraya Jayagond
+ *	Prashanth Kumar Amai
+ */

X-Patchwork-Id: 499234
From: sidraya.bj@pathpartnertech.com
To: gregkh@linuxfoundation.org, linux-staging@lists.linux.dev, linux-kernel@vger.kernel.org
Cc: prashanth.ka@pathpartnertech.com, praneeth@ti.com, mchehab@kernel.org, linux-media@vger.kernel.org, praveen.ap@pathpartnertech.com, Sidraya
Subject: [PATCH 25/30] v4l: videodev2: Add 10bit definitions for NV12 and NV16 color formats
Date: Wed, 18 Aug 2021 19:40:32 +0530
Message-Id: <20210818141037.19990-26-sidraya.bj@pathpartnertech.com>
In-Reply-To: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
References: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
Precedence: bulk
List-ID: X-Mailing-List: linux-media@vger.kernel.org

From: Sidraya

The default color formats support only 8-bit color depth. This patch adds
10-bit definitions for NV12 and NV16.

Signed-off-by: Sunita Nadampalli
Signed-off-by: Sidraya
---
 drivers/media/v4l2-core/v4l2-ioctl.c | 2 ++
 include/uapi/linux/videodev2.h       | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
index 05d5db3d85e5..445458c15168 100644
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -1367,6 +1367,8 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
 	case V4L2_META_FMT_VIVID: descr = "Vivid Metadata"; break;
 	case V4L2_META_FMT_RK_ISP1_PARAMS: descr = "Rockchip ISP1 3A Parameters"; break;
 	case V4L2_META_FMT_RK_ISP1_STAT_3A: descr = "Rockchip ISP1 3A Statistics"; break;
+	case V4L2_PIX_FMT_TI1210: descr = "10-bit YUV 4:2:0 (NV12)"; break;
+	case V4L2_PIX_FMT_TI1610: descr = "10-bit YUV 4:2:2 (NV16)"; break;

 	default:
 		/* Compressed formats */
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
index 9260791b8438..a71ffd686050 100644
--- a/include/uapi/linux/videodev2.h
+++ b/include/uapi/linux/videodev2.h
@@ -737,6 +737,8 @@ struct v4l2_pix_format {
 #define V4L2_PIX_FMT_SUNXI_TILED_NV12 v4l2_fourcc('S', 'T', '1', '2') /* Sunxi Tiled NV12 Format */
 #define V4L2_PIX_FMT_CNF4     v4l2_fourcc('C', 'N', 'F', '4') /* Intel 4-bit packed depth confidence information */
 #define V4L2_PIX_FMT_HI240    v4l2_fourcc('H', 'I', '2', '4') /* BTTV 8-bit dithered RGB */
+#define V4L2_PIX_FMT_TI1210   v4l2_fourcc('T', 'I', '1', '2') /* TI NV12 10-bit, two bytes per channel */
+#define V4L2_PIX_FMT_TI1610   v4l2_fourcc('T', 'I', '1', '6') /* TI NV16 10-bit, two bytes per channel */

 /* 10bit raw bayer packed, 32 bytes for every 25 pixels, last LSB 6 bits unused */
 #define V4L2_PIX_FMT_IPU3_SBGGR10 v4l2_fourcc('i', 'p', '3', 'b') /* IPU3 packed 10-bit BGGR bayer */
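For illustration only (not part of the patch): a userspace sketch of
requesting the new 10-bit NV12 fourcc on a V4L2 multiplanar queue. The
device node, resolution and function name are placeholders; whether a given
driver accepts the format is up to that driver.

	#include <fcntl.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/videodev2.h>

	int request_ti_nv12_10bit(const char *devnode)
	{
		struct v4l2_format fmt;
		int fd = open(devnode, O_RDWR);

		if (fd < 0)
			return -1;

		memset(&fmt, 0, sizeof(fmt));
		fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
		fmt.fmt.pix_mp.width = 1920;	/* placeholder resolution */
		fmt.fmt.pix_mp.height = 1080;
		fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_TI1210; /* 'TI12' */

		/* Drivers may adjust the request; re-check the fourcc. */
		if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0 ||
		    fmt.fmt.pix_mp.pixelformat != V4L2_PIX_FMT_TI1210) {
			close(fd);
			return -1;
		}

		return fd;
	}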
From patchwork Wed Aug 18 14:10:33 2021
X-Patchwork-Submitter: Sidraya Jayagond
X-Patchwork-Id: 499233
From: sidraya.bj@pathpartnertech.com
To: gregkh@linuxfoundation.org, linux-staging@lists.linux.dev, linux-kernel@vger.kernel.org
Cc: prashanth.ka@pathpartnertech.com, praneeth@ti.com, mchehab@kernel.org, linux-media@vger.kernel.org, praveen.ap@pathpartnertech.com, Sidraya
Subject: [PATCH 26/30] media: Kconfig: Add Video decoder kconfig and Makefile entries
Date: Wed, 18 Aug 2021 19:40:33 +0530
Message-Id: <20210818141037.19990-27-sidraya.bj@pathpartnertech.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
References: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
Precedence: bulk
List-ID: X-Mailing-List: linux-media@vger.kernel.org

From: Sidraya

Add video decoder to Makefile.
Add video decoder to Kconfig.

Signed-off-by: Angela Stegmaier
Signed-off-by: Sidraya
---
 drivers/staging/media/Kconfig  | 2 ++
 drivers/staging/media/Makefile | 1 +
 2 files changed, 3 insertions(+)

diff --git a/drivers/staging/media/Kconfig b/drivers/staging/media/Kconfig
index e3aaae920847..044763f8fe2e 100644
--- a/drivers/staging/media/Kconfig
+++ b/drivers/staging/media/Kconfig
@@ -44,4 +44,6 @@ source "drivers/staging/media/ipu3/Kconfig"
 
 source "drivers/staging/media/av7110/Kconfig"
 
+source "drivers/staging/media/vxd/decoder/Kconfig"
+
 endif

diff --git a/drivers/staging/media/Makefile b/drivers/staging/media/Makefile
index 5b5afc5b03a0..567aed1d2d43 100644
--- a/drivers/staging/media/Makefile
+++ b/drivers/staging/media/Makefile
@@ -11,3 +11,4 @@ obj-$(CONFIG_VIDEO_HANTRO)	+= hantro/
 obj-$(CONFIG_VIDEO_IPU3_IMGU)	+= ipu3/
 obj-$(CONFIG_VIDEO_ZORAN)	+= zoran/
 obj-$(CONFIG_DVB_AV7110)	+= av7110/
+obj-$(CONFIG_VIDEO_IMG_VXD_DEC)	+= vxd/decoder/
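[Editorial context, not part of the patch: the Makefile rule above keys off CONFIG_VIDEO_IMG_VXD_DEC, which the sourced decoder Kconfig (not shown in this excerpt) is expected to define. A minimal sketch of such an entry; the symbol name comes from the Makefile rule, the dependencies and selects are assumptions:]

config VIDEO_IMG_VXD_DEC
	tristate "IMG VXD video decoder driver"
	depends on VIDEO_DEV && VIDEO_V4L2	# assumed V4L2 core dependencies
	select V4L2_MEM2MEM_DEV			# the driver registers an M2M device
	select VIDEOBUF2_DMA_SG			# vb2_dma_sg_memops is used for the queues
	help
	  Driver for the Imagination D5520 (VXD) video decoder.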
From patchwork Wed Aug 18 14:10:35 2021
X-Patchwork-Submitter: Sidraya Jayagond
X-Patchwork-Id: 499229
From: sidraya.bj@pathpartnertech.com
To: gregkh@linuxfoundation.org, linux-staging@lists.linux.dev, linux-kernel@vger.kernel.org
Cc: prashanth.ka@pathpartnertech.com, praneeth@ti.com, mchehab@kernel.org, linux-media@vger.kernel.org, praveen.ap@pathpartnertech.com, Sidraya
Subject: [PATCH 28/30] IMG DEC V4L2 Interface function implementations
Date: Wed, 18 Aug 2021 19:40:35 +0530
Message-Id: <20210818141037.19990-29-sidraya.bj@pathpartnertech.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
References: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
Precedence: bulk
List-ID: X-Mailing-List: linux-media@vger.kernel.org

From: Sidraya

Add the low-level VXD interface component and the V4L2 wrapper
interface for the VXD driver.

Signed-off-by: Sunita Nadampalli
Signed-off-by: Sidraya
Reported-by: kernel test robot
---
 MAINTAINERS                                  |    2 +
 drivers/staging/media/vxd/common/vid_buf.h   |   42 +
 drivers/staging/media/vxd/decoder/vxd_v4l2.c | 2129 ++++++++++++++++++
 3 files changed, 2173 insertions(+)
 create mode 100644 drivers/staging/media/vxd/common/vid_buf.h
 create mode 100644 drivers/staging/media/vxd/decoder/vxd_v4l2.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 0616ab620135..c7b4c860f8a7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19566,6 +19566,7 @@ F: drivers/staging/media/vxd/common/rman_api.c
 F: drivers/staging/media/vxd/common/rman_api.h
 F: drivers/staging/media/vxd/common/talmmu_api.c
 F: drivers/staging/media/vxd/common/talmmu_api.h
+F: drivers/staging/media/vxd/common/vid_buf.h
 F: drivers/staging/media/vxd/common/work_queue.c
 F: drivers/staging/media/vxd/common/work_queue.h
 F: drivers/staging/media/vxd/decoder/Kconfig
@@ -19641,6 +19642,7 @@ F: drivers/staging/media/vxd/decoder/vxd_props.h
 F: drivers/staging/media/vxd/decoder/vxd_pvdec.c
 F: drivers/staging/media/vxd/decoder/vxd_pvdec_priv.h
 F: drivers/staging/media/vxd/decoder/vxd_pvdec_regs.h
+F: drivers/staging/media/vxd/decoder/vxd_v4l2.c
 
 VIDEO I2C POLLING DRIVER
 M: Matt Ranostay

diff --git a/drivers/staging/media/vxd/common/vid_buf.h b/drivers/staging/media/vxd/common/vid_buf.h
new file mode 100644
index 000000000000..ac0e4f9b4894
--- /dev/null
+++ b/drivers/staging/media/vxd/common/vid_buf.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Low-level VXD interface component
+ *
+ * Copyright (c) Imagination Technologies Ltd.
+ * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * Authors:
+ *	Angela Stegmaier
+ *
+ * Re-written for upstream
+ *	Sidraya Jayagond
+ */
+
+#ifndef _VID_BUF_H
+#define _VID_BUF_H
+
+/*
+ * struct vidio_ddbufinfo - contains information about a buffer's addresses
+ * @buf_size: the size of the buffer (in bytes).
+ * @cpu_virt: the cpu virtual address (mapped into the local cpu mmu) + * @dev_virt: device virtual address (pages mapped into IMG H/W mmu) + * @hndl_memory: handle to device mmu mapping + * @buff_id: buffer id used in communication with interface + * @is_internal: true, if the buffer is allocated internally + * @ref_count: reference count (number of users) + * @kmstr_id: stream id + * @core_id: core id + */ +struct vidio_ddbufinfo { + unsigned int buf_size; + void *cpu_virt; + unsigned int dev_virt; + void *hndl_memory; + unsigned int buff_id; + unsigned int is_internal; + unsigned int ref_count; + unsigned int kmstr_id; + unsigned int core_id; +}; + +#endif /* _VID_BUF_H */ diff --git a/drivers/staging/media/vxd/decoder/vxd_v4l2.c b/drivers/staging/media/vxd/decoder/vxd_v4l2.c new file mode 100644 index 000000000000..292e2de0c5e0 --- /dev/null +++ b/drivers/staging/media/vxd/decoder/vxd_v4l2.c @@ -0,0 +1,2129 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * IMG DEC V4L2 Interface function implementations + * + * Copyright (c) Imagination Technologies Ltd. + * Copyright (c) 2021 Texas Instruments Incorporated - http://www.ti.com/ + * + * Authors: + * Angela Stegmaier + * + * Re-written for upstreming + * Prashanth Kumar Amai + * Sidraya Jayagond + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#ifdef ERROR_RECOVERY_SIMULATION +#include +#include +#include +#endif + +#include +#include +#include +#include +#include +#include +#include +#include +#ifdef CAPTURE_CONTIG_ALLOC +#include +#endif + +#include "core.h" +#include "h264fw_data.h" +#include "hevcfw_data.h" +#include "img_dec_common.h" +#include "vxd_pvdec_priv.h" +#include "vxd_dec.h" + +#define VXD_DEC_SPIN_LOCK_NAME "vxd-dec" +#define IMG_VXD_DEC_MODULE_NAME "vxd-dec" + +#ifdef ERROR_RECOVERY_SIMULATION +/* This code should be execute only in debug flag */ +/* + * vxd decoder kernel object to create sysfs to debug error recovery and firmware + * watchdog timer. This kernel object will create a directory under /sys/kernel, + * containing two files fw_error_value and disable_fw_irq. + */ +struct kobject *vxd_dec_kobject; + +/* fw_error_value is the variable used to handle fw_error_attr */ +int fw_error_value = VDEC_ERROR_MAX; + +/* irq for the module, stored globally so can be accessed from sysfs */ +int g_module_irq; + +/* + * fw_error_attr. Application can set the value of this attribute, based on the + * firmware error that needs to be reproduced. + */ +struct kobj_attribute fw_error_attr = + __ATTR(fw_error_value, 0660, vxd_sysfs_show, vxd_sysfs_store); + +/* disable_fw_irq_value is variable to handle disable_fw_irq_attr */ +int disable_fw_irq_value; + +/* + * disable_fw_irq_attr. Application can set the value of this attribute. 1 to + * disable irq. 0 to enable irq. + */ +struct kobj_attribute disable_fw_irq_attr = + __ATTR(disable_fw_irq_value, 0660, vxd_sysfs_show, vxd_sysfs_store); + +/* + * Group attribute so that we can create and destroy all of them at once. + */ +struct attribute *attrs[] = { + &fw_error_attr.attr, + &disable_fw_irq_attr.attr, + NULL, /* Terminate list of attributes with NULL */ +}; + +/* + * An unnamed attribute group will put all of the attributes directly in + * the kobject directory. 
If we specify a name, a sub directory will be + * created for the attributes with the directory being the name of the + * attribute group + */ +struct attribute_group attr_group = { + .attrs = attrs, +}; + +#endif + +static struct heap_config vxd_dec_heap_configs[] = { + { + .type = MEM_HEAP_TYPE_UNIFIED, + .options.unified = { + .gfp_type = __GFP_DMA32 | __GFP_ZERO, + }, + .to_dev_addr = NULL, + }, +}; + +static struct vxd_dec_fmt vxd_dec_formats[] = { + { + .fourcc = V4L2_PIX_FMT_NV12, + .num_planes = 1, + .type = IMG_DEC_FMT_TYPE_CAPTURE, + .std = VDEC_STD_UNDEFINED, + .pixfmt = IMG_PIXFMT_420PL12YUV8, + .interleave = PIXEL_UV_ORDER, + .idc = PIXEL_FORMAT_420, + .size_num = 3, + .size_den = 2, + .bytes_pp = 1, + }, + { + .fourcc = V4L2_PIX_FMT_NV16, + .num_planes = 1, + .type = IMG_DEC_FMT_TYPE_CAPTURE, + .std = VDEC_STD_UNDEFINED, + .pixfmt = IMG_PIXFMT_422PL12YUV8, + .interleave = PIXEL_UV_ORDER, + .idc = PIXEL_FORMAT_422, + .size_num = 2, + .size_den = 1, + .bytes_pp = 1, + }, + { + .fourcc = V4L2_PIX_FMT_TI1210, + .num_planes = 1, + .type = IMG_DEC_FMT_TYPE_CAPTURE, + .std = VDEC_STD_UNDEFINED, + .pixfmt = IMG_PIXFMT_420PL12YUV10_MSB, + .interleave = PIXEL_UV_ORDER, + .idc = PIXEL_FORMAT_420, + .size_num = 3, + .size_den = 2, + .bytes_pp = 2, + }, + { + .fourcc = V4L2_PIX_FMT_TI1610, + .num_planes = 1, + .type = IMG_DEC_FMT_TYPE_CAPTURE, + .std = VDEC_STD_UNDEFINED, + .pixfmt = IMG_PIXFMT_422PL12YUV10_MSB, + .interleave = PIXEL_UV_ORDER, + .idc = PIXEL_FORMAT_422, + .size_num = 2, + .size_den = 1, + .bytes_pp = 2, + }, + { + .fourcc = V4L2_PIX_FMT_H264, + .num_planes = 1, + .type = IMG_DEC_FMT_TYPE_OUTPUT, + .std = VDEC_STD_H264, + .pixfmt = IMG_PIXFMT_UNDEFINED, + .interleave = PIXEL_INVALID_CI, + .idc = PIXEL_FORMAT_INVALID, + .size_num = 1, + .size_den = 1, + .bytes_pp = 1, + }, + { + .fourcc = V4L2_PIX_FMT_HEVC, + .num_planes = 1, + .type = IMG_DEC_FMT_TYPE_OUTPUT, + .std = VDEC_STD_HEVC, + .pixfmt = IMG_PIXFMT_UNDEFINED, + .interleave = PIXEL_INVALID_CI, + .idc = PIXEL_FORMAT_INVALID, + .size_num = 1, + .size_den = 1, + .bytes_pp = 1, + }, + { + .fourcc = V4L2_PIX_FMT_MJPEG, + .num_planes = 1, + .type = IMG_DEC_FMT_TYPE_OUTPUT, + .std = VDEC_STD_JPEG, + .pixfmt = IMG_PIXFMT_UNDEFINED, + .interleave = PIXEL_INVALID_CI, + .idc = PIXEL_FORMAT_INVALID, + .size_num = 1, + .size_den = 1, + .bytes_pp = 1, + }, + { + .fourcc = V4L2_PIX_FMT_YUV420M, + .num_planes = 3, + .type = IMG_DEC_FMT_TYPE_CAPTURE, + .std = VDEC_STD_UNDEFINED, + .pixfmt = 86031, + .interleave = PIXEL_UV_ORDER, + .idc = PIXEL_FORMAT_420, + .size_num = 2, + .size_den = 1, + .bytes_pp = 1, + }, + { + .fourcc = V4L2_PIX_FMT_YUV422M, + .num_planes = 3, + .type = IMG_DEC_FMT_TYPE_CAPTURE, + .std = VDEC_STD_UNDEFINED, + .pixfmt = 81935, + .interleave = PIXEL_UV_ORDER, + .idc = PIXEL_FORMAT_422, + .size_num = 3, + .size_den = 1, + .bytes_pp = 1, + }, +}; + +#ifdef ERROR_RECOVERY_SIMULATION +ssize_t vxd_sysfs_show(struct kobject *vxd_dec_kobject, + struct kobj_attribute *attr, char *buf) + +{ + int var = 0; + + if (strcmp(attr->attr.name, "fw_error_value") == 0) + var = fw_error_value; + + else + var = disable_fw_irq_value; + + return sprintf(buf, "%d\n", var); +} + +ssize_t vxd_sysfs_store(struct kobject *vxd_dec_kobject, + struct kobj_attribute *attr, + const char *buf, unsigned long count) +{ + int var = 0, rv = 0; + + rv = sscanf(buf, "%du", &var); + + if (strcmp(attr->attr.name, "fw_error_value") == 0) { + fw_error_value = var; + } else { + disable_fw_irq_value = var; + /* + * if disable_fw_irq_value is not zero, 
disable the irq to reproduce + * firmware non responsiveness in vxd_worker. + */ + if (disable_fw_irq_value != 0) { + /* just ignore the irq */ + disable_irq(g_module_irq); + } + } + return sprintf((char *)buf, "%d\n", var); +} +#endif + +static struct vxd_dec_ctx *file2ctx(struct file *file) +{ + return container_of(file->private_data, struct vxd_dec_ctx, fh); +} + +static irqreturn_t soft_thread_irq(int irq, void *dev_id) +{ + struct platform_device *pdev = (struct platform_device *)dev_id; + + if (!pdev) + return IRQ_NONE; + + return vxd_handle_thread_irq(&pdev->dev); +} + +static irqreturn_t hard_isrcb(int irq, void *dev_id) +{ + struct platform_device *pdev = (struct platform_device *)dev_id; + + if (!pdev) + return IRQ_NONE; + + return vxd_handle_irq(&pdev->dev); +} + +static struct vxd_buffer *find_buffer(unsigned int buf_map_id, struct list_head *head) +{ + struct list_head *list; + struct vxd_buffer *buf = NULL; + + list_for_each(list, head) { + buf = list_entry(list, struct vxd_buffer, list); + if (buf->buf_map_id == buf_map_id) + break; + buf = NULL; + } + return buf; +} + +static void return_worker(void *work) +{ + struct vxd_dec_ctx *ctx; + struct vxd_return *res; + struct device *dev; + struct timespec64 time; + int loop; + + work = get_work_buff(work, TRUE); + + res = container_of(work, struct vxd_return, work); + ctx = res->ctx; + dev = ctx->dev->dev; + switch (res->type) { + case VXD_CB_PICT_DECODED: + v4l2_m2m_job_finish(ctx->dev->m2m_dev, ctx->fh.m2m_ctx); + ktime_get_real_ts64(&time); + for (loop = 0; loop < ARRAY_SIZE(ctx->dev->time_drv); loop++) { + if (ctx->dev->time_drv[loop].id == res->buf_map_id) { + ctx->dev->time_drv[loop].end_time = + timespec64_to_ns(&time); +#ifdef DEBUG_DECODER_DRIVER + dev_info(dev, "picture buf decode time is %llu us for buf_map_id 0x%x\n", + div_s64(ctx->dev->time_drv[loop].end_time - + ctx->dev->time_drv[loop].start_time, 1000), + res->buf_map_id); +#endif + break; + } + } + + if (loop == ARRAY_SIZE(ctx->dev->time_drv)) + dev_err(dev, "picture buf decode for buf_map_id x%0x is not measured\n", + res->buf_map_id); + break; + + default: + break; + } + kfree(res->work); + kfree(res); +} + +static void vxd_error_recovery(struct vxd_dec_ctx *ctx) +{ + int ret = -1; + + /* + * In the previous frame decoding fatal error has been detected + * so we need to reload the firmware to make it alive. 
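+	 * The reload is a full teardown of the firmware resources
+	 * (vxd_clean_fw_resources()) followed by vxd_prepare_fw().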
+ */ + pr_debug("Reloading the firmware because of previous error\n"); + vxd_clean_fw_resources(ctx->dev); + ret = vxd_prepare_fw(ctx->dev); + if (ret) + pr_err("Reloading the firmware failed!!"); +} + +static struct vxd_dec_q_data *get_q_data(struct vxd_dec_ctx *ctx, + enum v4l2_buf_type type) +{ + switch (type) { + case V4L2_BUF_TYPE_VIDEO_OUTPUT: + case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE: + return &ctx->q_data[Q_DATA_SRC]; + case V4L2_BUF_TYPE_VIDEO_CAPTURE: + case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE: + return &ctx->q_data[Q_DATA_DST]; + default: + return NULL; + } + return NULL; +} + +static void vxd_return_resource(void *ctx_handle, enum vxd_cb_type type, + unsigned int buf_map_id) +{ + struct vxd_return *res; + struct vxd_buffer *buf = NULL; + struct vb2_v4l2_buffer *vb; + struct vxd_dec_ctx *ctx = (struct vxd_dec_ctx *)ctx_handle; + struct v4l2_event event = {}; + struct device *dev = ctx->dev->dev; + int i; + struct vxd_dec_q_data *q_data; + + if (ctx->aborting) { + v4l2_m2m_job_finish(ctx->dev->m2m_dev, ctx->fh.m2m_ctx); + ctx->aborting = 0; + return; + } + + switch (type) { + case VXD_CB_STRUNIT_PROCESSED: + + buf = find_buffer(buf_map_id, &ctx->out_buffers); + if (!buf) { + dev_err(dev, "Could not locate buf_map_id=0x%x in OUTPUT buffers list\n", + buf_map_id); + break; + } + buf->buffer.vb.field = V4L2_FIELD_NONE; + q_data = get_q_data(ctx, buf->buffer.vb.vb2_buf.vb2_queue->type); + if (!q_data) + return; + + for (i = 0; i < q_data->fmt->num_planes; i++) + vb2_set_plane_payload(&buf->buffer.vb.vb2_buf, i, + ctx->pict_bufcfg.plane_size[i]); + + v4l2_m2m_buf_done(&buf->buffer.vb, VB2_BUF_STATE_DONE); + break; + case VXD_CB_SPS_RELEASE: + break; + case VXD_CB_PPS_RELEASE: + break; + case VXD_CB_PICT_DECODED: + res = kzalloc(sizeof(*res), GFP_KERNEL); + if (!res) + return; + res->ctx = ctx; + res->type = type; + res->buf_map_id = buf_map_id; + + init_work(&res->work, return_worker, HWA_DECODER); + if (!res->work) + return; + + schedule_work(res->work); + + break; + case VXD_CB_PICT_DISPLAY: + buf = find_buffer(buf_map_id, &ctx->cap_buffers); + if (!buf) { + dev_err(dev, "Could not locate buf_map_id=0x%x in CAPTURE buffers list\n", + buf_map_id); + break; + } + buf->reuse = FALSE; + buf->buffer.vb.field = V4L2_FIELD_NONE; + q_data = get_q_data(ctx, buf->buffer.vb.vb2_buf.vb2_queue->type); + if (!q_data) + return; + + for (i = 0; i < q_data->fmt->num_planes; i++) + vb2_set_plane_payload(&buf->buffer.vb.vb2_buf, i, + ctx->pict_bufcfg.plane_size[i]); + + v4l2_m2m_buf_done(&buf->buffer.vb, VB2_BUF_STATE_DONE); + break; + case VXD_CB_PICT_RELEASE: + buf = find_buffer(buf_map_id, &ctx->reuse_queue); + if (buf) { + buf->reuse = TRUE; + list_move_tail(&buf->list, &ctx->cap_buffers); + + v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, &buf->buffer.vb); + break; + } + buf = find_buffer(buf_map_id, &ctx->cap_buffers); + if (!buf) { + dev_err(dev, "Could not locate buf_map_id=0x%x in CAPTURE buffers list\n", + buf_map_id); + + break; + } + buf->reuse = TRUE; + + break; + case VXD_CB_PICT_END: + break; + case VXD_CB_STR_END: + event.type = V4L2_EVENT_EOS; + v4l2_event_queue_fh(&ctx->fh, &event); + if (v4l2_m2m_num_dst_bufs_ready(ctx->fh.m2m_ctx) > 0) { + vb = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); + vb->flags |= V4L2_BUF_FLAG_LAST; + + q_data = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE); + if (!q_data) + break; + + for (i = 0; i < q_data->fmt->num_planes; i++) + vb2_set_plane_payload(&vb->vb2_buf, i, 0); + + v4l2_m2m_buf_done(vb, VB2_BUF_STATE_DONE); + } else { + ctx->flag_last = TRUE; + } + break; + 
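+	/*
+	 * Both the stream-end case above and the fatal-error case below
+	 * fall back to ctx->flag_last when no CAPTURE buffer is ready;
+	 * V4L2_BUF_FLAG_LAST is then raised from vxd_dec_buf_queue() on
+	 * the next queued CAPTURE buffer.
+	 */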
case VXD_CB_ERROR_FATAL: + /* + * There has been FW error, so we need to reload the firmware. + */ + vxd_error_recovery(ctx); + + /* + * Just send zero size buffer to v4l2 application, + * informing the error condition. + */ + if (v4l2_m2m_num_dst_bufs_ready(ctx->fh.m2m_ctx) > 0) { + vb = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); + + q_data = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE); + if (!q_data) + break; + + for (i = 0; i < q_data->fmt->num_planes; i++) + vb2_set_plane_payload(&vb->vb2_buf, i, 0); + + v4l2_m2m_buf_done(vb, VB2_BUF_STATE_DONE); + } else { + ctx->flag_last = TRUE; + } + break; + default: + break; + } +} + +static int vxd_dec_submit_opconfig(struct vxd_dec_ctx *ctx) +{ + int ret = 0; + + if (ctx->stream_created) { + ret = core_stream_set_output_config(ctx->res_str_id, + &ctx->str_opcfg, + &ctx->pict_bufcfg); + if (ret) { + dev_err(ctx->dev->dev, "core_stream_set_output_config failed\n"); + ctx->opconfig_pending = TRUE; + return ret; + } + ctx->opconfig_pending = FALSE; + ctx->stream_configured = TRUE; + } else { + ctx->opconfig_pending = TRUE; + } + return ret; +} + +static int vxd_dec_queue_setup(struct vb2_queue *vq, + unsigned int *nbuffers, + unsigned int *nplanes, + unsigned int sizes[], + struct device *alloc_devs[]) +{ + struct vxd_dec_ctx *ctx = vb2_get_drv_priv(vq); + struct vxd_dec_q_data *q_data; + struct vxd_dec_q_data *src_q_data; + int i; + unsigned int hw_nbuffers = 0; + + q_data = get_q_data(ctx, vq->type); + if (!q_data) + return -EINVAL; + + if (*nplanes) { + /* This is being called from CREATEBUFS, perform validation */ + if (*nplanes != q_data->fmt->num_planes) + return -EINVAL; + + for (i = 0; i < *nplanes; i++) { + if (sizes[i] != q_data->size_image[i]) + return -EINVAL; + } + + return 0; + } + + *nplanes = q_data->fmt->num_planes; + + if (!V4L2_TYPE_IS_OUTPUT(vq->type)) { + src_q_data = &ctx->q_data[Q_DATA_SRC]; + if (src_q_data) + hw_nbuffers = get_nbuffers(src_q_data->fmt->std, + q_data->width, + q_data->height, + ctx->max_num_ref_frames); + } + + *nbuffers = max(*nbuffers, hw_nbuffers); + + for (i = 0; i < *nplanes; i++) + sizes[i] = q_data->size_image[i]; + + return 0; +} + +static int vxd_dec_buf_prepare(struct vb2_buffer *vb) +{ + struct vxd_dec_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue); + struct device *dev = ctx->dev->dev; + struct vxd_dec_q_data *q_data; + void *sgt; +#ifdef CAPTURE_CONTIG_ALLOC + struct page *new_page; +#else + void *sgl; +#endif + struct sg_table *sgt_new; + void *sgl_new; + int pages; + int nents = 0; + int size = 0; + int plane, num_planes, ret = 0; + struct vxd_buffer *buf = + container_of(vb, struct vxd_buffer, buffer.vb.vb2_buf); + + q_data = get_q_data(ctx, vb->vb2_queue->type); + if (!q_data) + return -EINVAL; + + num_planes = q_data->fmt->num_planes; + + for (plane = 0; plane < num_planes; plane++) { + if (vb2_plane_size(vb, plane) < q_data->size_image[plane]) { + dev_err(dev, "data will not fit into plane (%lu < %lu)\n", + vb2_plane_size(vb, plane), + (long)q_data->size_image[plane]); + return -EINVAL; + } + } + + if (buf->mapped) + return 0; + + buf->buf_info.cpu_linear_addr = vb2_plane_vaddr(vb, 0); + buf->buf_info.buf_size = vb2_plane_size(vb, 0); + buf->buf_info.fd = -1; + sgt = vb2_dma_sg_plane_desc(vb, 0); + if (!sgt) { + dev_err(dev, "Could not get sg_table from plane 0\n"); + return -EINVAL; + } + + if (V4L2_TYPE_IS_OUTPUT(vb->type)) { + ret = core_stream_map_buf_sg(ctx->res_str_id, + VDEC_BUFTYPE_BITSTREAM, + &buf->buf_info, sgt, + &buf->buf_map_id); + if (ret) { + dev_err(dev, "OUTPUT 
core_stream_map_buf_sg failed\n"); + return ret; + } + + buf->bstr_info.buf_size = q_data->size_image[0]; + buf->bstr_info.cpu_virt_addr = buf->buf_info.cpu_linear_addr; + buf->bstr_info.mem_attrib = + SYS_MEMATTRIB_UNCACHED | SYS_MEMATTRIB_WRITECOMBINE | + SYS_MEMATTRIB_INPUT | SYS_MEMATTRIB_CPU_WRITE; + buf->bstr_info.bufmap_id = buf->buf_map_id; + lst_init(&buf->seq_unit.bstr_seg_list); + lst_init(&buf->pic_unit.bstr_seg_list); + lst_init(&buf->end_unit.bstr_seg_list); + + list_add_tail(&buf->list, &ctx->out_buffers); + } else { + /* Create a single sgt from the plane(s) */ + sgt_new = kmalloc(sizeof(*sgt_new), GFP_KERNEL); + if (!sgt_new) + return -EINVAL; + + for (plane = 0; plane < num_planes; plane++) { + size += ALIGN(vb2_plane_size(vb, plane), PAGE_SIZE); + sgt = vb2_dma_sg_plane_desc(vb, plane); + if (!sgt) { + dev_err(dev, "Could not get sg_table from plane %d\n", plane); + kfree(sgt_new); + return -EINVAL; + } +#ifdef CAPTURE_CONTIG_ALLOC + nents += 1; +#else + nents += sg_nents(img_mmu_get_sgl(sgt)); +#endif + } + buf->buf_info.buf_size = size; + + pages = (size + PAGE_SIZE - 1) / PAGE_SIZE; + ret = sg_alloc_table(sgt_new, nents, GFP_KERNEL); + if (ret) { + kfree(sgt_new); + return -EINVAL; + } + sgl_new = img_mmu_get_sgl(sgt_new); + + for (plane = 0; plane < num_planes; plane++) { + sgt = vb2_dma_sg_plane_desc(vb, plane); + if (!sgt) { + dev_err(dev, "Could not get sg_table from plane %d\n", plane); + sg_free_table(sgt_new); + kfree(sgt_new); + return -EINVAL; + } +#ifdef CAPTURE_CONTIG_ALLOC + new_page = phys_to_page(vb2_dma_contig_plane_dma_addr(vb, plane)); + sg_set_page(sgl_new, new_page, ALIGN(vb2_plane_size(vb, plane), + PAGE_SIZE), 0); + sgl_new = sg_next(sgl_new); +#else + sgl = img_mmu_get_sgl(sgt); + + while (sgl) { + sg_set_page(sgl_new, sg_page(sgl), img_mmu_get_sgl_length(sgl), 0); + sgl = sg_next(sgl); + sgl_new = sg_next(sgl_new); + } +#endif + } + + buf->buf_info.pictbuf_cfg = ctx->pict_bufcfg; + ret = core_stream_map_buf_sg(ctx->res_str_id, + VDEC_BUFTYPE_PICTURE, + &buf->buf_info, sgt_new, + &buf->buf_map_id); + sg_free_table(sgt_new); + kfree(sgt_new); + if (ret) { + dev_err(dev, "CAPTURE core_stream_map_buf_sg failed\n"); + return ret; + } + list_add_tail(&buf->list, &ctx->cap_buffers); + } + buf->mapped = TRUE; + buf->reuse = TRUE; + + return 0; +} + +static void vxd_dec_buf_queue(struct vb2_buffer *vb) +{ + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + struct vxd_dec_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue); + struct vxd_buffer *buf = + container_of(vb, struct vxd_buffer, buffer.vb.vb2_buf); + struct vxd_dec_q_data *q_data; + int i; + + if (V4L2_TYPE_IS_OUTPUT(vb->type)) { + v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vbuf); + } else { + mutex_lock_nested(ctx->mutex, SUBCLASS_VXD_V4L2); + if (buf->reuse) { + mutex_unlock(ctx->mutex); + if (ctx->flag_last) { + q_data = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE); + vbuf->flags |= V4L2_BUF_FLAG_LAST; + + for (i = 0; i < q_data->fmt->num_planes; i++) + vb2_set_plane_payload(&vbuf->vb2_buf, i, 0); + + v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE); + } else { + v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vbuf); + } + } else { + list_move_tail(&buf->list, &ctx->reuse_queue); + mutex_unlock(ctx->mutex); + } + } +} + +static void vxd_dec_return_all_buffers(struct vxd_dec_ctx *ctx, + struct vb2_queue *q, + enum vb2_buffer_state state) +{ + struct vb2_v4l2_buffer *vb; + unsigned long flags; + + for (;;) { + if (V4L2_TYPE_IS_OUTPUT(q->type)) + vb = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx); + else + vb = 
v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); + + if (!vb) + break; + + spin_lock_irqsave(ctx->dev->lock, flags); + v4l2_m2m_buf_done(vb, state); + spin_unlock_irqrestore(ctx->dev->lock, (unsigned long)flags); + } +} + +static int vxd_dec_start_streaming(struct vb2_queue *vq, unsigned int count) +{ + int ret = 0; + struct vxd_dec_ctx *ctx = vb2_get_drv_priv(vq); + + if (V4L2_TYPE_IS_OUTPUT(vq->type)) + ctx->dst_streaming = TRUE; + else + ctx->src_streaming = TRUE; + + if (ctx->dst_streaming && ctx->src_streaming && !ctx->core_streaming) { + if (!ctx->stream_configured) { + vxd_dec_return_all_buffers(ctx, vq, VB2_BUF_STATE_ERROR); + return -EINVAL; + } + ctx->eos = FALSE; + ctx->stop_initiated = FALSE; + ctx->flag_last = FALSE; + ret = core_stream_play(ctx->res_str_id); + if (ret) { + vxd_dec_return_all_buffers(ctx, vq, VB2_BUF_STATE_ERROR); + return ret; + } + ctx->core_streaming = TRUE; + } + + return 0; +} + +static void vxd_dec_stop_streaming(struct vb2_queue *vq) +{ + struct vxd_dec_ctx *ctx = vb2_get_drv_priv(vq); + struct list_head *list; + struct list_head *temp; + struct vxd_buffer *buf = NULL; + + if (V4L2_TYPE_IS_OUTPUT(vq->type)) + ctx->dst_streaming = FALSE; + else + ctx->src_streaming = FALSE; + + if (ctx->core_streaming) { + core_stream_stop(ctx->res_str_id); + ctx->core_streaming = FALSE; + + core_stream_flush(ctx->res_str_id, TRUE); + } + + /* unmap all the output and capture plane buffers */ + if (V4L2_TYPE_IS_OUTPUT(vq->type)) { + list_for_each(list, &ctx->out_buffers) { + buf = list_entry(list, struct vxd_buffer, list); + core_stream_unmap_buf_sg(buf->buf_map_id); + buf->mapped = FALSE; + __list_del_entry(&buf->list); + } + } else { + list_for_each_safe(list, temp, &ctx->reuse_queue) { + buf = list_entry(list, struct vxd_buffer, list); + list_move_tail(&buf->list, &ctx->cap_buffers); + v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, &buf->buffer.vb); + } + + list_for_each(list, &ctx->cap_buffers) { + buf = list_entry(list, struct vxd_buffer, list); + core_stream_unmap_buf_sg(buf->buf_map_id); + buf->mapped = FALSE; + __list_del_entry(&buf->list); + } + } + + vxd_dec_return_all_buffers(ctx, vq, VB2_BUF_STATE_ERROR); +} + +static struct vb2_ops vxd_dec_video_ops = { + .queue_setup = vxd_dec_queue_setup, + .buf_prepare = vxd_dec_buf_prepare, + .buf_queue = vxd_dec_buf_queue, + .wait_prepare = vb2_ops_wait_prepare, + .wait_finish = vb2_ops_wait_finish, + .start_streaming = vxd_dec_start_streaming, + .stop_streaming = vxd_dec_stop_streaming, +}; + +static int queue_init(void *priv, struct vb2_queue *src_vq, struct vb2_queue *dst_vq) +{ + struct vxd_dec_ctx *ctx = priv; + struct vxd_dev *vxd = ctx->dev; + int ret = 0; + + /* src_vq */ + memset(src_vq, 0, sizeof(*src_vq)); + src_vq->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE; + src_vq->io_modes = VB2_MMAP | VB2_DMABUF; + src_vq->drv_priv = ctx; + src_vq->buf_struct_size = sizeof(struct vxd_buffer); + src_vq->ops = &vxd_dec_video_ops; + src_vq->mem_ops = &vb2_dma_sg_memops; + src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY; + src_vq->lock = vxd->mutex; + src_vq->dev = vxd->v4l2_dev.dev; + ret = vb2_queue_init(src_vq); + if (ret) + return ret; + + /* dst_vq */ + memset(dst_vq, 0, sizeof(*dst_vq)); + dst_vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE; + dst_vq->io_modes = VB2_MMAP | VB2_DMABUF; + dst_vq->drv_priv = ctx; + dst_vq->buf_struct_size = sizeof(struct vxd_buffer); + dst_vq->ops = &vxd_dec_video_ops; +#ifdef CAPTURE_CONTIG_ALLOC + dst_vq->mem_ops = &vb2_dma_contig_memops; +#else + dst_vq->mem_ops = &vb2_dma_sg_memops; +#endif + 
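+	/*
+	 * With CAPTURE_CONTIG_ALLOC the decoded pictures come from the
+	 * dma-contig allocator; otherwise vxd_dec_buf_prepare() builds a
+	 * scatter-gather table per buffer and maps it into the device MMU
+	 * via core_stream_map_buf_sg().
+	 */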
dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY; + dst_vq->lock = vxd->mutex; + dst_vq->dev = vxd->v4l2_dev.dev; + ret = vb2_queue_init(dst_vq); + if (ret) { + vb2_queue_release(src_vq); + return ret; + } + + return ret; +} + +static int vxd_dec_open(struct file *file) +{ + struct vxd_dev *vxd = video_drvdata(file); + struct vxd_dec_ctx *ctx; + struct vxd_dec_q_data *s_q_data; + int i, ret = 0; + + dev_dbg(vxd->dev, "%s:%d vxd %p\n", __func__, __LINE__, vxd); + + if (vxd->no_fw) { + dev_err(vxd->dev, "Error!! fw binary is not present"); + return -1; + } + + mutex_lock_nested(vxd->mutex, SUBCLASS_BASE); + + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) { + mutex_unlock(vxd->mutex); + return -ENOMEM; + } + ctx->dev = vxd; + + v4l2_fh_init(&ctx->fh, video_devdata(file)); + file->private_data = &ctx->fh; + + s_q_data = &ctx->q_data[Q_DATA_SRC]; + s_q_data->fmt = &vxd_dec_formats[0]; + s_q_data->width = 1920; + s_q_data->height = 1080; + for (i = 0; i < s_q_data->fmt->num_planes; i++) { + s_q_data->bytesperline[i] = s_q_data->width; + s_q_data->size_image[i] = s_q_data->bytesperline[i] * s_q_data->height; + } + + ctx->q_data[Q_DATA_DST] = *s_q_data; + + ctx->fh.m2m_ctx = v4l2_m2m_ctx_init(vxd->m2m_dev, ctx, &queue_init); + if (IS_ERR_VALUE((unsigned long)ctx->fh.m2m_ctx)) { + ret = (long)(ctx->fh.m2m_ctx); + goto exit; + } + + v4l2_fh_add(&ctx->fh); + + ret = idr_alloc_cyclic(vxd->streams, &ctx->stream, VXD_MIN_STREAM_ID, VXD_MAX_STREAM_ID, + GFP_KERNEL); + if (ret < VXD_MIN_STREAM_ID || ret > VXD_MAX_STREAM_ID) { + dev_err(vxd->dev, "%s: stream id creation failed!\n", + __func__); + ret = -EFAULT; + goto exit; + } + + ctx->stream.id = ret; + ctx->stream.ctx = ctx; + + ctx->stream_created = FALSE; + ctx->stream_configured = FALSE; + ctx->src_streaming = FALSE; + ctx->dst_streaming = FALSE; + ctx->core_streaming = FALSE; + ctx->eos = FALSE; + ctx->stop_initiated = FALSE; + ctx->flag_last = FALSE; + + lst_init(&ctx->seg_list); + for (i = 0; i < MAX_SEGMENTS; i++) + lst_add(&ctx->seg_list, &ctx->bstr_segments[i]); + + if (vxd_create_ctx(vxd, ctx)) + goto out_idr_remove; + + ctx->stream.mmu_ctx = ctx->mmu_ctx; + ctx->stream.ptd = ctx->ptd; + + ctx->mutex = kzalloc(sizeof(*ctx->mutex), GFP_KERNEL); + if (!ctx->mutex) { + ret = -ENOMEM; + goto out_idr_remove; + } + mutex_init(ctx->mutex); + + INIT_LIST_HEAD(&ctx->items_done); + INIT_LIST_HEAD(&ctx->reuse_queue); + INIT_LIST_HEAD(&ctx->return_queue); + INIT_LIST_HEAD(&ctx->out_buffers); + INIT_LIST_HEAD(&ctx->cap_buffers); + + mutex_unlock(vxd->mutex); + + return 0; + +out_idr_remove: + idr_remove(vxd->streams, ctx->stream.id); + +exit: + v4l2_fh_exit(&ctx->fh); + get_work_buff(ctx->work, TRUE); + kfree(ctx->work); + kfree(ctx); + mutex_unlock(vxd->mutex); + return ret; +} + +static int vxd_dec_release(struct file *file) +{ + struct vxd_dev *vxd = video_drvdata(file); + struct vxd_dec_ctx *ctx = file2ctx(file); + struct bspp_ddbuf_array_info *fw_sequ = ctx->fw_sequ; + struct bspp_ddbuf_array_info *fw_pps = ctx->fw_pps; + int i, ret = 0; + struct vxd_dec_q_data *s_q_data; + + s_q_data = &ctx->q_data[Q_DATA_SRC]; + + if (ctx->stream_created) { + bspp_stream_destroy(ctx->bspp_context); + + for (i = 0; i < MAX_SEQUENCES; i++) { + core_stream_unmap_buf(fw_sequ[i].ddbuf_info.bufmap_id); + img_mem_free(ctx->mem_ctx, fw_sequ[i].ddbuf_info.buf_id); + } + + if (s_q_data->fmt->std != VDEC_STD_JPEG) { + for (i = 0; i < MAX_PPSS; i++) { + core_stream_unmap_buf(fw_pps[i].ddbuf_info.bufmap_id); + img_mem_free(ctx->mem_ctx, fw_pps[i].ddbuf_info.buf_id); 
+ } + } + core_stream_destroy(ctx->res_str_id); + ctx->stream_created = FALSE; + } + + mutex_lock_nested(vxd->mutex, SUBCLASS_BASE); + + vxd_destroy_ctx(vxd, ctx); + + idr_remove(vxd->streams, ctx->stream.id); + + v4l2_fh_del(&ctx->fh); + + v4l2_fh_exit(&ctx->fh); + + v4l2_m2m_ctx_release(ctx->fh.m2m_ctx); + + mutex_destroy(ctx->mutex); + kfree(ctx->mutex); + ctx->mutex = NULL; + + get_work_buff(ctx->work, TRUE); + kfree(ctx->work); + kfree(ctx); + + mutex_unlock(vxd->mutex); + + return ret; +} + +static int vxd_dec_querycap(struct file *file, void *priv, struct v4l2_capability *cap) +{ + strncpy(cap->driver, IMG_VXD_DEC_MODULE_NAME, sizeof(cap->driver) - 1); + strncpy(cap->card, IMG_VXD_DEC_MODULE_NAME, sizeof(cap->card) - 1); + snprintf(cap->bus_info, sizeof(cap->bus_info), "platform:%s", IMG_VXD_DEC_MODULE_NAME); + cap->device_caps = V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING; + cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS; + return 0; +} + +static int __enum_fmt(struct v4l2_fmtdesc *f, unsigned int type) +{ + int i, index; + struct vxd_dec_fmt *fmt = NULL; + + index = 0; + for (i = 0; i < ARRAY_SIZE(vxd_dec_formats); ++i) { + if (vxd_dec_formats[i].type & type) { + if (index == f->index) { + fmt = &vxd_dec_formats[i]; + break; + } + index++; + } + } + + if (!fmt) + return -EINVAL; + + f->pixelformat = fmt->fourcc; + return 0; +} + +static int vxd_dec_enum_fmt(struct file *file, void *priv, struct v4l2_fmtdesc *f) +{ + if (V4L2_TYPE_IS_OUTPUT(f->type)) + return __enum_fmt(f, IMG_DEC_FMT_TYPE_OUTPUT); + + return __enum_fmt(f, IMG_DEC_FMT_TYPE_CAPTURE); +} + +static struct vxd_dec_fmt *find_format(struct v4l2_format *f, unsigned int type) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(vxd_dec_formats); ++i) { + if (vxd_dec_formats[i].fourcc == f->fmt.pix_mp.pixelformat && + vxd_dec_formats[i].type == type) + return &vxd_dec_formats[i]; + } + return NULL; +} + +static unsigned int get_sizeimage(int w, int h, struct vxd_dec_fmt *fmt, int plane) +{ + switch (fmt->fourcc) { + case V4L2_PIX_FMT_YUV420M: + return ((plane == 0) ? (w * h) : (w * h / 2)); + case V4L2_PIX_FMT_YUV422M: + return (w * h); + default: + return (w * h * fmt->size_num / fmt->size_den); + } + + return 0; +} + +static unsigned int get_stride(int w, struct vxd_dec_fmt *fmt) +{ + return (ALIGN(w, HW_ALIGN) * fmt->bytes_pp); +} + +/* + * @ Function vxd_get_header_info + * Run bspp stream submit and preparse once before device_run + * To retrieve header information + */ +static int vxd_get_header_info(void *priv) +{ + struct vxd_dec_ctx *ctx = priv; + struct vxd_dev *vxd_dev = ctx->dev; + struct device *dev = vxd_dev->v4l2_dev.dev; + struct vb2_v4l2_buffer *src_vb; + struct vxd_buffer *src_vxdb; + struct vxd_buffer *dst_vxdb; + struct bspp_preparsed_data *preparsed_data; + unsigned int data_size; + int ret; + + /* + * Checking for queued buffer. + * If no next buffer present, do not get information from header. + * Else, get header information and store for later use. 
+ */ + src_vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx); + if (!src_vb) { + dev_warn(dev, "get_header_info Next src buffer is null\n"); + return IMG_ERROR_INVALID_PARAMETERS; + } + mutex_lock_nested(ctx->mutex, SUBCLASS_VXD_V4L2); + + src_vxdb = container_of(src_vb, struct vxd_buffer, buffer.vb); + /* Setting dst_vxdb to arbitrary value (using src_vb) for now */ + dst_vxdb = container_of(src_vb, struct vxd_buffer, buffer.vb); + + preparsed_data = &dst_vxdb->preparsed_data; + + data_size = vb2_get_plane_payload(&src_vxdb->buffer.vb.vb2_buf, 0); + + ret = bspp_stream_submit_buffer(ctx->bspp_context, + &src_vxdb->bstr_info, + src_vxdb->buf_map_id, + data_size, NULL, + VDEC_BSTRELEMENT_UNSPECIFIED); + if (ret) { + dev_err(dev, "get_header_info bspp_stream_submit_buffer failed %d\n", ret); + return ret; + } + mutex_unlock(ctx->mutex); + + ret = bspp_stream_preparse_buffers(ctx->bspp_context, NULL, 0, + &ctx->seg_list, + preparsed_data, ctx->eos); + if (ret) { + dev_err(dev, "get_header_info bspp_stream_preparse_buffers failed %d\n", ret); + return ret; + } + + if (preparsed_data->sequ_hdr_info.com_sequ_hdr_info.max_frame_size.height && + preparsed_data->sequ_hdr_info.com_sequ_hdr_info.max_ref_frame_num) { + ctx->height = preparsed_data->sequ_hdr_info.com_sequ_hdr_info.max_frame_size.height; + ctx->max_num_ref_frames = + preparsed_data->sequ_hdr_info.com_sequ_hdr_info.max_ref_frame_num; + } else { + dev_err(dev, "get_header_info preparsed data is null %d\n", ret); + return ret; + } + + return 0; +} + +static int vxd_dec_g_fmt(struct file *file, void *priv, struct v4l2_format *f) +{ + struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp; + struct vxd_dec_ctx *ctx = file2ctx(file); + struct vxd_dec_q_data *q_data; + struct vxd_dev *vxd_dev = ctx->dev; + unsigned int i = 0; + int ret = 0; + + q_data = get_q_data(ctx, f->type); + if (!q_data) + return -EINVAL; + + pix_mp->field = V4L2_FIELD_NONE; + pix_mp->pixelformat = q_data->fmt->fourcc; + pix_mp->num_planes = q_data->fmt->num_planes; + + if (f->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) { + /* The buffer contains compressed image. */ + pix_mp->width = ctx->width; + pix_mp->height = ctx->height; + pix_mp->plane_fmt[0].bytesperline = 0; + pix_mp->plane_fmt[0].sizeimage = q_data->size_image[0]; + } else if (f->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) { + /* The buffer contains decoded YUV image. 
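+	 * Stride and per-plane size are derived below from the negotiated
+	 * format via get_stride() and get_sizeimage().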
*/ + pix_mp->width = ctx->width; + pix_mp->height = ctx->height; + for (i = 0; i < q_data->fmt->num_planes; i++) { + pix_mp->plane_fmt[i].bytesperline = get_stride(pix_mp->width, q_data->fmt); + pix_mp->plane_fmt[i].sizeimage = get_sizeimage + (pix_mp->plane_fmt[i].bytesperline, + ctx->height, q_data->fmt, i); + } + } else { + dev_err(vxd_dev->v4l2_dev.dev, "Wrong V4L2_format type\n"); + return -EINVAL; + } + + return ret; +} + +static int vxd_dec_try_fmt(struct file *file, void *priv, struct v4l2_format *f) +{ + struct vxd_dec_ctx *ctx = file2ctx(file); + struct vxd_dev *vxd_dev = ctx->dev; + struct vxd_dec_fmt *fmt; + struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp; + struct v4l2_plane_pix_format *plane_fmt = pix_mp->plane_fmt; + unsigned int i = 0; + int ret = 0; + + if (V4L2_TYPE_IS_OUTPUT(f->type)) { + fmt = find_format(f, IMG_DEC_FMT_TYPE_OUTPUT); + if (!fmt) { + dev_err(vxd_dev->v4l2_dev.dev, "Unsupported format for source.\n"); + return -EINVAL; + } + /* + * Allocation for worst case input frame size: + * I frame with full YUV size (YUV422) + */ + plane_fmt[0].sizeimage = ALIGN(pix_mp->width, HW_ALIGN) * + ALIGN(pix_mp->height, HW_ALIGN) * 2; + } else { + fmt = find_format(f, IMG_DEC_FMT_TYPE_CAPTURE); + if (!fmt) { + dev_err(vxd_dev->v4l2_dev.dev, "Unsupported format for dest.\n"); + return -EINVAL; + } + for (i = 0; i < fmt->num_planes; i++) { + plane_fmt[i].bytesperline = get_stride(pix_mp->width, fmt); + plane_fmt[i].sizeimage = get_sizeimage(plane_fmt[i].bytesperline, + pix_mp->height, fmt, i); + } + pix_mp->num_planes = fmt->num_planes; + pix_mp->flags = 0; + } + + if (pix_mp->field == V4L2_FIELD_ANY) + pix_mp->field = V4L2_FIELD_NONE; + + return ret; +} + +static int vxd_dec_s_fmt(struct file *file, void *priv, struct v4l2_format *f) +{ + struct v4l2_pix_format_mplane *pix_mp; + struct vxd_dec_ctx *ctx = file2ctx(file); + struct vxd_dev *vxd_dev = ctx->dev; + struct device *dev = vxd_dev->v4l2_dev.dev; + struct vxd_dec_q_data *q_data; + struct vb2_queue *vq; + struct vdec_str_configdata strcfgdata; + int ret = 0; + unsigned char i = 0, j = 0; + + pix_mp = &f->fmt.pix_mp; + + if (!V4L2_TYPE_IS_OUTPUT(f->type)) { + int res = vxd_get_header_info(ctx); + + if (res == 0) + pix_mp->height = ctx->height; + } + + ret = vxd_dec_try_fmt(file, priv, f); + if (ret) + return ret; + + vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type); + if (!vq) + return -EINVAL; + + if (vb2_is_busy(vq)) { + dev_err(dev, "Queue is busy\n"); + return -EBUSY; + } + + q_data = get_q_data(ctx, f->type); + + if (!q_data) + return -EINVAL; + + /* + * saving the original dimensions to pass to gstreamer (to remove the green + * padding on kmsink) + */ + ctx->width_orig = pix_mp->width; + ctx->height_orig = pix_mp->height; + + ctx->width = pix_mp->width; + ctx->height = pix_mp->height; + + q_data->width = pix_mp->width; + q_data->height = pix_mp->height; + + if (V4L2_TYPE_IS_OUTPUT(f->type)) { + q_data->fmt = find_format(f, IMG_DEC_FMT_TYPE_OUTPUT); + q_data->size_image[0] = pix_mp->plane_fmt[0].sizeimage; + + if (!ctx->stream_created) { + strcfgdata.vid_std = q_data->fmt->std; + + if (strcfgdata.vid_std == VDEC_STD_UNDEFINED) { + dev_err(dev, "Invalid input format\n"); + return -EINVAL; + } + strcfgdata.bstr_format = VDEC_BSTRFORMAT_ELEMENTARY; + strcfgdata.user_str_id = ctx->stream.id; + strcfgdata.update_yuv = FALSE; + strcfgdata.bandwidth_efficient = FALSE; + strcfgdata.disable_mvc = FALSE; + strcfgdata.full_scan = FALSE; + strcfgdata.immediate_decode = TRUE; + strcfgdata.intra_frame_closed_gop = TRUE; + + 
ret = core_stream_create(ctx, &strcfgdata, &ctx->res_str_id); + if (ret) { + dev_err(dev, "Core stream create failed\n"); + return -EINVAL; + } + ctx->stream_created = TRUE; + if (ctx->opconfig_pending) { + ret = vxd_dec_submit_opconfig(ctx); + if (ret) { + dev_err(dev, "Output config failed\n"); + return -EINVAL; + } + } + + vxd_dec_alloc_bspp_resource(ctx, strcfgdata.vid_std); + ret = bspp_stream_create(&strcfgdata, + &ctx->bspp_context, + ctx->fw_sequ, + ctx->fw_pps); + if (ret) { + dev_err(dev, "BSPP stream create failed %d\n", ret); + return ret; + } + } else if (q_data->fmt != + find_format(f, IMG_DEC_FMT_TYPE_OUTPUT)) { + dev_err(dev, "Input format already set\n"); + return -EBUSY; + } + } else { + q_data->fmt = find_format(f, IMG_DEC_FMT_TYPE_CAPTURE); + for (i = 0; i < q_data->fmt->num_planes; i++) { + q_data->size_image[i] = + get_sizeimage(get_stride(pix_mp->width, q_data->fmt), + ctx->height, q_data->fmt, i); + } + + ctx->str_opcfg.pixel_info.pixfmt = q_data->fmt->pixfmt; + ctx->str_opcfg.pixel_info.chroma_interleave = q_data->fmt->interleave; + ctx->str_opcfg.pixel_info.chroma_fmt = TRUE; + ctx->str_opcfg.pixel_info.chroma_fmt_idc = q_data->fmt->idc; + + if (q_data->fmt->pixfmt == IMG_PIXFMT_420PL12YUV10_MSB || + q_data->fmt->pixfmt == IMG_PIXFMT_422PL12YUV10_MSB) { + ctx->str_opcfg.pixel_info.mem_pkg = PIXEL_BIT10_MSB_MP; + ctx->str_opcfg.pixel_info.bitdepth_y = 10; + ctx->str_opcfg.pixel_info.bitdepth_c = 10; + } else { + ctx->str_opcfg.pixel_info.mem_pkg = PIXEL_BIT8_MP; + ctx->str_opcfg.pixel_info.bitdepth_y = 8; + ctx->str_opcfg.pixel_info.bitdepth_c = 8; + } + + ctx->str_opcfg.force_oold = FALSE; + + ctx->pict_bufcfg.coded_width = pix_mp->width; + ctx->pict_bufcfg.coded_height = pix_mp->height; + ctx->pict_bufcfg.pixel_fmt = q_data->fmt->pixfmt; + for (i = 0; i < pix_mp->num_planes; i++) { + q_data->bytesperline[i] = get_stride(q_data->width, q_data->fmt); + if (q_data->bytesperline[i] < + pix_mp->plane_fmt[0].bytesperline) + q_data->bytesperline[i] = + ALIGN(pix_mp->plane_fmt[0].bytesperline, HW_ALIGN); + pix_mp->plane_fmt[0].bytesperline = + q_data->bytesperline[i]; + ctx->pict_bufcfg.stride[i] = q_data->bytesperline[i]; + } + for (j = i; j < IMG_MAX_NUM_PLANES; j++) { + if ((i - 1) < 0) + i++; + ctx->pict_bufcfg.stride[j] = + q_data->bytesperline[i - 1]; + } + ctx->pict_bufcfg.stride_alignment = HW_ALIGN; + ctx->pict_bufcfg.byte_interleave = FALSE; + for (i = 0; i < pix_mp->num_planes; i++) { + unsigned int plane_size = + get_sizeimage(ctx->pict_bufcfg.stride[i], + ctx->pict_bufcfg.coded_height, + q_data->fmt, i); + ctx->pict_bufcfg.buf_size += ALIGN(plane_size, PAGE_SIZE); + ctx->pict_bufcfg.plane_size[i] = plane_size; + pix_mp->plane_fmt[i].sizeimage = plane_size; + } + if (q_data->fmt->pixfmt == 86031 || + q_data->fmt->pixfmt == 81935) { + /* Handle the v4l2 multi-planar formats */ + ctx->str_opcfg.pixel_info.num_planes = 3; + ctx->pict_bufcfg.packed = FALSE; + for (i = 0; i < pix_mp->num_planes; i++) { + ctx->pict_bufcfg.chroma_offset[i] = + ALIGN(pix_mp->plane_fmt[i].sizeimage, PAGE_SIZE); + ctx->pict_bufcfg.chroma_offset[i] += + (i ? 
ctx->pict_bufcfg.chroma_offset[i - 1] : 0); + } + } else { + /* IMG Decoders support only multi-planar formats */ + ctx->str_opcfg.pixel_info.num_planes = 2; + ctx->pict_bufcfg.packed = TRUE; + ctx->pict_bufcfg.chroma_offset[0] = 0; + ctx->pict_bufcfg.chroma_offset[1] = 0; + } + + vxd_dec_submit_opconfig(ctx); + } + + return ret; +} + +static int vxd_dec_subscribe_event(struct v4l2_fh *fh, const struct v4l2_event_subscription *sub) +{ + if (sub->type != V4L2_EVENT_EOS) + return -EINVAL; + + v4l2_event_subscribe(fh, sub, 0, NULL); + return 0; +} + +static int vxd_dec_try_cmd(struct file *file, void *fh, struct v4l2_decoder_cmd *cmd) +{ + if (cmd->cmd != V4L2_DEC_CMD_STOP) + return -EINVAL; + + return 0; +} + +static int vxd_dec_cmd(struct file *file, void *fh, struct v4l2_decoder_cmd *cmd) +{ + struct vxd_dec_ctx *ctx = file2ctx(file); + + if (cmd->cmd != V4L2_DEC_CMD_STOP) + return -EINVAL; + +#ifdef DEBUG_DECODER_DRIVER + pr_info("%s CMD_STOP\n", __func__); +#endif + /* + * When stop command is received, notify device_run if it is + * scheduled to run, or tell the decoder that eos has + * happened. + */ + mutex_lock_nested(ctx->mutex, SUBCLASS_VXD_V4L2); + if (v4l2_m2m_num_src_bufs_ready(ctx->fh.m2m_ctx) > 0) { +#ifdef DEBUG_DECODER_DRIVER + pr_info("V4L2 src bufs not empty, set a flag to notify device_run\n"); +#endif + ctx->stop_initiated = TRUE; + mutex_unlock(ctx->mutex); + } else { + if (ctx->num_decoding) { +#ifdef DEBUG_DECODER_DRIVER + pr_info("buffers are still being decoded, so just set eos flag\n"); +#endif + ctx->eos = TRUE; + mutex_unlock(ctx->mutex); + } else { + mutex_unlock(ctx->mutex); +#ifdef DEBUG_DECODER_DRIVER + pr_info("All buffers are decoded, so issue dummy stream end\n"); +#endif + vxd_return_resource((void *)ctx, VXD_CB_STR_END, 0); + } + } + + return 0; +} + +static int vxd_g_selection(struct file *file, void *fh, struct v4l2_selection *s) +{ + struct vxd_dec_ctx *ctx = file2ctx(file); + bool def_bounds = true; + + if (s->type != V4L2_BUF_TYPE_VIDEO_CAPTURE && + s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT) + return -EINVAL; + + switch (s->target) { + case V4L2_SEL_TGT_COMPOSE_DEFAULT: + case V4L2_SEL_TGT_COMPOSE_BOUNDS: + if (s->type == V4L2_BUF_TYPE_VIDEO_OUTPUT) + return -EINVAL; + break; + case V4L2_SEL_TGT_CROP_BOUNDS: + case V4L2_SEL_TGT_CROP_DEFAULT: + if (s->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) + return -EINVAL; + break; + case V4L2_SEL_TGT_COMPOSE: + if (s->type == V4L2_BUF_TYPE_VIDEO_OUTPUT) + return -EINVAL; + def_bounds = false; + break; + case V4L2_SEL_TGT_CROP: + if (s->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) + return -EINVAL; + def_bounds = false; + break; + default: + return -EINVAL; + } + + if (def_bounds) { + s->r.left = 0; + s->r.top = 0; + s->r.width = ctx->width_orig; + s->r.height = ctx->height_orig; + } + + return 0; +} + +static const struct v4l2_ioctl_ops vxd_dec_ioctl_ops = { + .vidioc_querycap = vxd_dec_querycap, + + .vidioc_enum_fmt_vid_cap = vxd_dec_enum_fmt, + .vidioc_g_fmt_vid_cap_mplane = vxd_dec_g_fmt, + .vidioc_try_fmt_vid_cap_mplane = vxd_dec_try_fmt, + .vidioc_s_fmt_vid_cap_mplane = vxd_dec_s_fmt, + + .vidioc_enum_fmt_vid_out = vxd_dec_enum_fmt, + .vidioc_g_fmt_vid_out_mplane = vxd_dec_g_fmt, + .vidioc_try_fmt_vid_out_mplane = vxd_dec_try_fmt, + .vidioc_s_fmt_vid_out_mplane = vxd_dec_s_fmt, + + .vidioc_reqbufs = v4l2_m2m_ioctl_reqbufs, + .vidioc_querybuf = v4l2_m2m_ioctl_querybuf, + .vidioc_qbuf = v4l2_m2m_ioctl_qbuf, + .vidioc_dqbuf = v4l2_m2m_ioctl_dqbuf, + .vidioc_expbuf = v4l2_m2m_ioctl_expbuf, + + .vidioc_streamon = 
v4l2_m2m_ioctl_streamon, + .vidioc_streamoff = v4l2_m2m_ioctl_streamoff, + .vidioc_log_status = v4l2_ctrl_log_status, + .vidioc_subscribe_event = vxd_dec_subscribe_event, + .vidioc_unsubscribe_event = v4l2_event_unsubscribe, + .vidioc_try_decoder_cmd = vxd_dec_try_cmd, + .vidioc_decoder_cmd = vxd_dec_cmd, + + .vidioc_g_selection = vxd_g_selection, +}; + +static const struct v4l2_file_operations vxd_dec_fops = { + .owner = THIS_MODULE, + .open = vxd_dec_open, + .release = vxd_dec_release, + .poll = v4l2_m2m_fop_poll, + .unlocked_ioctl = video_ioctl2, + .mmap = v4l2_m2m_fop_mmap, +}; + +static struct video_device vxd_dec_videodev = { + .name = IMG_VXD_DEC_MODULE_NAME, + .fops = &vxd_dec_fops, + .ioctl_ops = &vxd_dec_ioctl_ops, + .minor = -1, + .release = video_device_release, + .vfl_dir = VFL_DIR_M2M, +}; + +static void device_run(void *priv) +{ + struct vxd_dec_ctx *ctx = priv; + struct vxd_dev *vxd_dev = ctx->dev; + struct device *dev = vxd_dev->v4l2_dev.dev; + struct vb2_v4l2_buffer *src_vb; + struct vb2_v4l2_buffer *dst_vb; + struct vxd_buffer *src_vxdb; + struct vxd_buffer *dst_vxdb; + struct bspp_bitstr_seg *item = NULL, *next = NULL; + struct bspp_preparsed_data *preparsed_data; + unsigned int data_size; + int ret; + struct timespec64 time; + static int cnt; + int i; + + mutex_lock_nested(ctx->mutex, SUBCLASS_VXD_V4L2); + ctx->num_decoding++; + + src_vb = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx); + if (!src_vb) + dev_err(dev, "Next src buffer is null\n"); + + dst_vb = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); + if (!dst_vb) + dev_err(dev, "Next dst buffer is null\n"); + + src_vxdb = container_of(src_vb, struct vxd_buffer, buffer.vb); + dst_vxdb = container_of(dst_vb, struct vxd_buffer, buffer.vb); + + preparsed_data = &dst_vxdb->preparsed_data; + + data_size = vb2_get_plane_payload(&src_vxdb->buffer.vb.vb2_buf, 0); + + ret = bspp_stream_submit_buffer(ctx->bspp_context, + &src_vxdb->bstr_info, + src_vxdb->buf_map_id, + data_size, NULL, + VDEC_BSTRELEMENT_UNSPECIFIED); + if (ret) + dev_err(dev, "bspp_stream_submit_buffer failed %d\n", ret); + + if (ctx->stop_initiated && + (v4l2_m2m_num_src_bufs_ready(ctx->fh.m2m_ctx) == 0)) + ctx->eos = TRUE; + + mutex_unlock(ctx->mutex); + + ret = bspp_stream_preparse_buffers(ctx->bspp_context, NULL, 0, &ctx->seg_list, + preparsed_data, ctx->eos); + if (ret) + dev_err(dev, "bspp_stream_preparse_buffers failed %d\n", ret); + + ktime_get_real_ts64(&time); + vxd_dev->time_drv[cnt].start_time = timespec64_to_ns(&time); + vxd_dev->time_drv[cnt].id = dst_vxdb->buf_map_id; + cnt++; + + if (cnt >= ARRAY_SIZE(vxd_dev->time_drv)) + cnt = 0; + + core_stream_fill_pictbuf(dst_vxdb->buf_map_id); + + if (preparsed_data->new_sequence) { + src_vxdb->seq_unit.str_unit_type = + VDECDD_STRUNIT_SEQUENCE_START; + src_vxdb->seq_unit.str_unit_handle = ctx; + src_vxdb->seq_unit.err_flags = 0; + src_vxdb->seq_unit.dd_data = NULL; + src_vxdb->seq_unit.seq_hdr_info = + &preparsed_data->sequ_hdr_info; + src_vxdb->seq_unit.seq_hdr_id = 0; + src_vxdb->seq_unit.closed_gop = TRUE; + src_vxdb->seq_unit.eop = FALSE; + src_vxdb->seq_unit.pict_hdr_info = NULL; + src_vxdb->seq_unit.dd_pict_data = NULL; + src_vxdb->seq_unit.last_pict_in_seq = FALSE; + src_vxdb->seq_unit.str_unit_tag = NULL; + src_vxdb->seq_unit.decode = FALSE; + src_vxdb->seq_unit.features = 0; + core_stream_submit_unit(ctx->res_str_id, &src_vxdb->seq_unit); + } + + src_vxdb->pic_unit.str_unit_type = VDECDD_STRUNIT_PICTURE_START; + src_vxdb->pic_unit.str_unit_handle = ctx; + src_vxdb->pic_unit.err_flags = 0; + /* Move the 
processed segments to the submission buffer */ + for (i = 0; i < BSPP_MAX_PICTURES_PER_BUFFER; i++) { + item = lst_first(&preparsed_data->picture_data.pre_pict_seg_list[i]); + while (item) { + next = lst_next(item); + lst_remove(&preparsed_data->picture_data.pre_pict_seg_list[i], item); + lst_add(&src_vxdb->pic_unit.bstr_seg_list, item); + item = next; + } + /* Move the processed segments to the submission buffer */ + item = lst_first(&preparsed_data->picture_data.pict_seg_list[i]); + while (item) { + next = lst_next(item); + lst_remove(&preparsed_data->picture_data.pict_seg_list[i], item); + lst_add(&src_vxdb->pic_unit.bstr_seg_list, item); + item = next; + } + } + + src_vxdb->pic_unit.dd_data = NULL; + src_vxdb->pic_unit.seq_hdr_info = NULL; + src_vxdb->pic_unit.seq_hdr_id = 0; + if (preparsed_data->new_sequence) + src_vxdb->pic_unit.closed_gop = TRUE; + else + src_vxdb->pic_unit.closed_gop = FALSE; + src_vxdb->pic_unit.eop = TRUE; + src_vxdb->pic_unit.eos = ctx->eos; + src_vxdb->pic_unit.pict_hdr_info = + &preparsed_data->picture_data.pict_hdr_info; + src_vxdb->pic_unit.dd_pict_data = NULL; + src_vxdb->pic_unit.last_pict_in_seq = FALSE; + src_vxdb->pic_unit.str_unit_tag = NULL; + src_vxdb->pic_unit.decode = FALSE; + src_vxdb->pic_unit.features = 0; + core_stream_submit_unit(ctx->res_str_id, &src_vxdb->pic_unit); + + src_vxdb->end_unit.str_unit_type = VDECDD_STRUNIT_PICTURE_END; + src_vxdb->end_unit.str_unit_handle = ctx; + src_vxdb->end_unit.err_flags = 0; + src_vxdb->end_unit.dd_data = NULL; + src_vxdb->end_unit.seq_hdr_info = NULL; + src_vxdb->end_unit.seq_hdr_id = 0; + src_vxdb->end_unit.closed_gop = FALSE; + src_vxdb->end_unit.eop = FALSE; + src_vxdb->end_unit.eos = ctx->eos; + src_vxdb->end_unit.pict_hdr_info = NULL; + src_vxdb->end_unit.dd_pict_data = NULL; + src_vxdb->end_unit.last_pict_in_seq = FALSE; + src_vxdb->end_unit.str_unit_tag = NULL; + src_vxdb->end_unit.decode = FALSE; + src_vxdb->end_unit.features = 0; + core_stream_submit_unit(ctx->res_str_id, &src_vxdb->end_unit); +} + +static int job_ready(void *priv) +{ + struct vxd_dec_ctx *ctx = priv; + + if (v4l2_m2m_num_src_bufs_ready(ctx->fh.m2m_ctx) < 1 || + v4l2_m2m_num_dst_bufs_ready(ctx->fh.m2m_ctx) < 1 || + !ctx->core_streaming) + return 0; + + return 1; +} + +static void job_abort(void *priv) +{ + struct vxd_dec_ctx *ctx = priv; + + /* Cancel the transaction at next callback */ + ctx->aborting = 1; +} + +static const struct v4l2_m2m_ops m2m_ops = { + .device_run = device_run, + .job_ready = job_ready, + .job_abort = job_abort, +}; + +static const struct of_device_id vxd_dec_of_match[] = { + {.compatible = "img,d5500-vxd"}, + { /* end */}, +}; +MODULE_DEVICE_TABLE(of, vxd_dec_of_match); + +static int vxd_dec_probe(struct platform_device *pdev) +{ + struct vxd_dev *vxd; + struct resource *res; + const struct of_device_id *of_dev_id; + int ret; + int module_irq; + struct video_device *vfd; + + struct heap_config *heap_configs; + int num_heaps; + unsigned int i_heap_id; + /* Protect structure fields */ + spinlock_t **lock; + + of_dev_id = of_match_device(vxd_dec_of_match, &pdev->dev); + if (!of_dev_id) { + dev_err(&pdev->dev, "%s: Unable to match device\n", __func__); + return -ENODEV; + } + + dma_set_mask(&pdev->dev, DMA_BIT_MASK(40)); + + vxd = devm_kzalloc(&pdev->dev, sizeof(*vxd), GFP_KERNEL); + if (!vxd) + return -ENOMEM; + + vxd->dev = &pdev->dev; + vxd->plat_dev = pdev; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + vxd->reg_base = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR_VALUE((unsigned 
+		return PTR_ERR(vxd->reg_base);
+
+	module_irq = platform_get_irq(pdev, 0);
+	if (module_irq < 0)
+		return -ENXIO;
+	vxd->module_irq = module_irq;
+#ifdef ERROR_RECOVERY_SIMULATION
+	g_module_irq = module_irq;
+#endif
+
+	heap_configs = vxd_dec_heap_configs;
+	num_heaps = ARRAY_SIZE(vxd_dec_heap_configs);
+
+	vxd->mutex = kzalloc(sizeof(*vxd->mutex), GFP_KERNEL);
+	if (!vxd->mutex)
+		return -ENOMEM;
+
+	mutex_init(vxd->mutex);
+	platform_set_drvdata(pdev, vxd);
+
+	pm_runtime_enable(&pdev->dev);
+	ret = pm_runtime_get_sync(&pdev->dev);
+	if (ret < 0) {
+		dev_err(&pdev->dev, "%s: failed to enable clock, status = %d\n", __func__, ret);
+		pm_runtime_put_noidle(&pdev->dev);
+		goto exit;
+	}
+
+	/* Read HW properties */
+	ret = vxd_pvdec_get_props(vxd->dev, vxd->reg_base, &vxd->props);
+	if (ret) {
+		dev_err(&pdev->dev, "%s: failed to fetch core properties!\n", __func__);
+		ret = -ENXIO;
+		goto out_put_sync;
+	}
+	vxd->mmu_config_addr_width = VXD_EXTRN_ADDR_WIDTH(vxd->props);
+#ifdef DEBUG_DECODER_DRIVER
+	dev_info(&pdev->dev, "hw:%u.%u.%u, num_pix: %d, num_ent: %d, mmu: %d, MTX RAM: %d\n",
+		 VXD_MAJ_REV(vxd->props),
+		 VXD_MIN_REV(vxd->props),
+		 VXD_MAINT_REV(vxd->props),
+		 VXD_NUM_PIX_PIPES(vxd->props),
+		 VXD_NUM_ENT_PIPES(vxd->props),
+		 VXD_EXTRN_ADDR_WIDTH(vxd->props),
+		 vxd->props.mtx_ram_size);
+#endif
+
+	INIT_LIST_HEAD(&vxd->msgs);
+	INIT_LIST_HEAD(&vxd->pend);
+
+	/* Initialize the memory manager */
+	ret = img_mem_init(&pdev->dev);
+	if (ret) {
+		dev_err(&pdev->dev, "Failed to initialize memory\n");
+		ret = -ENOMEM;
+		goto out_put_sync;
+	}
+	vxd->streams = kzalloc(sizeof(*vxd->streams), GFP_KERNEL);
+	if (!vxd->streams) {
+		ret = -ENOMEM;
+		goto out_init;
+	}
+
+	idr_init(vxd->streams);
+
+	ret = vxd_init(&pdev->dev, vxd, heap_configs, num_heaps);
+	if (ret) {
+		dev_err(&pdev->dev, "%s: main component initialisation failed!\n", __func__);
+		goto out_idr_init;
+	}
+
+	/* Initialize the core */
+	i_heap_id = vxd_g_internal_heap_id();
+	if (i_heap_id < 0) {
+		dev_err(&pdev->dev, "%s: Invalid internal heap id\n", __func__);
+		ret = -EINVAL;
+		goto out_vxd_init;
+	}
+	ret = core_initialise(vxd, i_heap_id, vxd_return_resource);
+	if (ret) {
+		dev_err(&pdev->dev, "%s: core initialization failed!\n", __func__);
+		goto out_vxd_init;
+	}
+
+	vxd->fw_refcnt = 0;
+	vxd->hw_on = 0;
+
+#ifdef DEBUG_DECODER_DRIVER
+	vxd->hw_pm_delay = 10000;
+	vxd->hw_dwr_period = 10000;
+#else
+	vxd->hw_pm_delay = 1000;
+	vxd->hw_dwr_period = 1000;
+#endif
+	ret = vxd_prepare_fw(vxd);
+	if (ret) {
+		dev_err(&pdev->dev, "%s fw acquire failed!\n", __func__);
+		goto out_core_init;
+	}
+
+	if (vxd->no_fw) {
+		dev_err(&pdev->dev, "%s fw acquire failed!\n", __func__);
+		ret = -ENOENT;
+		goto out_core_init;
+	}
+
+	lock = (spinlock_t **)&vxd->lock;
+	*lock = kzalloc(sizeof(spinlock_t), GFP_KERNEL);
+
+	if (!(*lock)) {
+		pr_err("Memory allocation failed for spin-lock\n");
+		ret = -ENOMEM;
+		goto out_core_init;
+	}
+	spin_lock_init(*lock);
+
+	ret = v4l2_device_register(&pdev->dev, &vxd->v4l2_dev);
+	if (ret)
+		goto out_clean_fw;
+
+#ifdef ERROR_RECOVERY_SIMULATION
+	/*
+	 * Create a sysfs entry here to debug firmware error recovery.
+	 */
+	vxd_dec_kobject = kobject_create_and_add("vxd_decoder", kernel_kobj);
+	if (!vxd_dec_kobject) {
+		dev_err(&pdev->dev, "Failed to create kernel object\n");
+		ret = -ENOMEM;
+		goto out_v4l2_device;
+	}
+
+	ret = sysfs_create_group(vxd_dec_kobject, &attr_group);
+	if (ret) {
+		dev_err(&pdev->dev, "Failed to create sysfs files\n");
+		kobject_put(vxd_dec_kobject);
+	}
+#endif
+
+	vfd = video_device_alloc();
+	if (!vfd) {
+		dev_err(&pdev->dev, "Failed to allocate video device\n");
+		ret = -ENOMEM;
+		goto out_v4l2_device;
+	}
+
+	vxd->vfd_dec = vfd;
+	*vfd = vxd_dec_videodev;
+	vfd->v4l2_dev = &vxd->v4l2_dev;
+	vfd->device_caps = V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING;
+	vfd->lock = vxd->mutex;
+
+	video_set_drvdata(vfd, vxd);
+
+	snprintf(vfd->name, sizeof(vfd->name), "%s", vxd_dec_videodev.name);
+	ret = devm_request_threaded_irq(&pdev->dev, module_irq, (irq_handler_t)hard_isrcb,
+					(irq_handler_t)soft_thread_irq, IRQF_SHARED,
+					IMG_VXD_DEC_MODULE_NAME, pdev);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to request irq\n");
+		goto out_vid_dev;
+	}
+
+	vxd->m2m_dev = v4l2_m2m_init(&m2m_ops);
+	if (IS_ERR(vxd->m2m_dev)) {
+		dev_err(&pdev->dev, "Failed to init mem2mem device\n");
+		ret = PTR_ERR(vxd->m2m_dev);
+		goto out_vid_dev;
+	}
+
+	ret = video_register_device(vfd, VFL_TYPE_VIDEO, 0);
+	if (ret) {
+		dev_err(&pdev->dev, "Failed to register video device\n");
+		goto out_vid_reg;
+	}
+	v4l2_info(&vxd->v4l2_dev, "decoder registered as /dev/video%d\n", vfd->num);
+
+	return 0;
+
+out_vid_reg:
+	v4l2_m2m_release(vxd->m2m_dev);
+
+out_vid_dev:
+	video_device_release(vfd);
+
+out_v4l2_device:
+	v4l2_device_unregister(&vxd->v4l2_dev);
+
+out_clean_fw:
+	vxd_clean_fw_resources(vxd);
+
+out_core_init:
+	core_deinitialise();
+
+out_vxd_init:
+	vxd_deinit(vxd);
+
+out_idr_init:
+	idr_destroy(vxd->streams);
+	kfree(vxd->streams);
+
+out_init:
+	img_mem_exit();
+
+out_put_sync:
+	pm_runtime_put_sync(&pdev->dev);
+
+exit:
+	pm_runtime_disable(&pdev->dev);
+	mutex_destroy(vxd->mutex);
+	kfree(vxd->mutex);
+	vxd->mutex = NULL;
+
+	return ret;
+}
+
+static int vxd_dec_remove(struct platform_device *pdev)
+{
+	struct vxd_dev *vxd = platform_get_drvdata(pdev);
+
+	core_deinitialise();
+
+	vxd_clean_fw_resources(vxd);
+	vxd_deinit(vxd);
+	idr_destroy(vxd->streams);
+	kfree(vxd->streams);
+	get_delayed_work_buff(&vxd->dwork, TRUE);
+	kfree(vxd->lock);
+	img_mem_exit();
+
+	pm_runtime_put_sync(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+	kfree(vxd->dwork);
+	mutex_destroy(vxd->mutex);
+	kfree(vxd->mutex);
+	vxd->mutex = NULL;
+
+	video_unregister_device(vxd->vfd_dec);
+	v4l2_m2m_release(vxd->m2m_dev);
+	v4l2_device_unregister(&vxd->v4l2_dev);
+
+	return 0;
+}
+
+static int __maybe_unused vxd_dec_suspend(struct device *dev)
+{
+	int ret = 0;
+
+	ret = vxd_suspend_dev(dev);
+	if (ret)
+		dev_err(dev, "failed to suspend core hw!\n");
+
+	return ret;
+}
+
+static int __maybe_unused vxd_dec_resume(struct device *dev)
+{
+	int ret = 0;
+
+	ret = vxd_resume_dev(dev);
+	if (ret)
+		dev_err(dev, "failed to resume core hw!\n");
+
+	return ret;
+}
+
+static UNIVERSAL_DEV_PM_OPS(vxd_dec_pm_ops,
+			    vxd_dec_suspend, vxd_dec_resume, NULL);
+
+static struct platform_driver vxd_dec_driver = {
+	.probe = vxd_dec_probe,
+	.remove = vxd_dec_remove,
+	.driver = {
+		.name = "img_dec",
+		.pm = &vxd_dec_pm_ops,
+		.of_match_table = vxd_dec_of_match,
+	},
+};
+module_platform_driver(vxd_dec_driver);
+
+MODULE_AUTHOR("Prashanth Kumar Amai");
+MODULE_AUTHOR("Sidraya Jayagond");
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("IMG D5520 video decoder driver");
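For context on the m2m_ops wiring above: the V4L2 mem2mem core calls job_ready() to
decide whether this context can be scheduled and, when it returns 1, invokes
device_run(). The job is not considered complete until the driver calls
v4l2_m2m_job_finish(), which this hunk does not show; presumably that happens on the
decode-done path fed by vxd_return_resource(). A minimal sketch of that completion
step, with a hypothetical helper name (illustration only, not code from this patch):

	/* Hypothetical helper -- illustrates the mem2mem completion contract */
	static void vxd_dec_job_done(struct vxd_dec_ctx *ctx,
				     struct vb2_v4l2_buffer *src_vb,
				     struct vb2_v4l2_buffer *dst_vb,
				     enum vb2_buffer_state state)
	{
		/* Return the buffers claimed in device_run() to vb2 */
		v4l2_m2m_buf_done(src_vb, state);
		v4l2_m2m_buf_done(dst_vb, state);

		/*
		 * Mark the context idle; the framework then re-evaluates
		 * job_ready() and schedules the next device_run() if possible.
		 */
		v4l2_m2m_job_finish(ctx->dev->m2m_dev, ctx->fh.m2m_ctx);
	}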
From patchwork Wed Aug 18 14:10:36 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sidraya Jayagond
X-Patchwork-Id: 499232
From: sidraya.bj@pathpartnertech.com
To: gregkh@linuxfoundation.org, linux-staging@lists.linux.dev,
 linux-kernel@vger.kernel.org
Cc: prashanth.ka@pathpartnertech.com, praneeth@ti.com, mchehab@kernel.org,
 linux-media@vger.kernel.org, praveen.ap@pathpartnertech.com, Sidraya
Subject: [PATCH 29/30] arm64: dts: dra82: Add v4l2 vxd_dec device node
Date: Wed, 18 Aug 2021 19:40:36 +0530
Message-Id: <20210818141037.19990-30-sidraya.bj@pathpartnertech.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
References: <20210818141037.19990-1-sidraya.bj@pathpartnertech.com>
Precedence: bulk
List-ID:
X-Mailing-List: linux-media@vger.kernel.org

From: Sidraya

Enable v4l2 vxd_dec on dra82.

Signed-off-by: Angela Stegmaier
Signed-off-by: Sidraya
---
 arch/arm64/boot/dts/ti/k3-j721e-main.dtsi | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
index cf3482376c1e..a10eb7bcce74 100644
--- a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+++ b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
@@ -1242,6 +1242,15 @@
 		power-domains = <&k3_pds 193 TI_SCI_PD_EXCLUSIVE>;
 	};
 
+	d5520: video-decoder@4300000 {
+		/* IMG D5520 driver configuration */
+		compatible = "img,d5500-vxd";
+		reg = <0x00 0x04300000>,
+		      <0x00 0x100000>;
+		power-domains = <&k3_pds 144 TI_SCI_PD_EXCLUSIVE>;
+		interrupts = ;
+	};
+
 	ufs_wrapper: ufs-wrapper@4e80000 {
 		compatible = "ti,j721e-ufs";
 		reg = <0x0 0x4e80000 0x0 0x100>;
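Once this node probes and the driver logs "decoder registered as /dev/videoN" (see
vxd_dec_probe() in patch 28), the registration can be sanity-checked from userspace.
A minimal sketch, assuming the decoder came up as /dev/video0 (the actual node number
may differ on a real system):

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/videodev2.h>

	int main(void)
	{
		struct v4l2_capability cap;
		int fd = open("/dev/video0", O_RDWR);	/* assumed node */

		if (fd < 0 || ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0)
			return 1;
		/* vxd_dec_probe() advertises V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING */
		printf("driver: %s, m2m-mplane: %s\n", cap.driver,
		       (cap.device_caps & V4L2_CAP_VIDEO_M2M_MPLANE) ? "yes" : "no");
		close(fd);
		return 0;
	}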