From patchwork Mon Aug 15 18:42:23 2022
X-Patchwork-Submitter: Jeffrey Hugo
X-Patchwork-Id: 597580
From: Jeffrey Hugo
Subject: [RFC PATCH 01/14] drm/qaic: Add documentation for AIC100 accelerator driver
Date: Mon, 15 Aug 2022 12:42:23 -0600
Message-ID: <1660588956-24027-2-git-send-email-quic_jhugo@quicinc.com>
In-Reply-To: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com>
References: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

Add documentation covering both the QAIC
driver, and the device that it drives.

Change-Id: Iee519cc0a276249c4e8684507d27ae2c33e29aeb
Signed-off-by: Jeffrey Hugo
---
 Documentation/gpu/drivers.rst |   1 +
 Documentation/gpu/qaic.rst    | 567 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 568 insertions(+)
 create mode 100644 Documentation/gpu/qaic.rst

diff --git a/Documentation/gpu/drivers.rst b/Documentation/gpu/drivers.rst
index 3a52f48..433dac5 100644
--- a/Documentation/gpu/drivers.rst
+++ b/Documentation/gpu/drivers.rst
@@ -18,6 +18,7 @@ GPU Driver Documentation
    xen-front
    afbc
    komeda-kms
+   qaic
 
 .. only:: subproject and html
diff --git a/Documentation/gpu/qaic.rst b/Documentation/gpu/qaic.rst
new file mode 100644
index 0000000..3414f98
--- /dev/null
+++ b/Documentation/gpu/qaic.rst
@@ -0,0 +1,567 @@
+Overview
+--------
+QAIC is the driver for the Qualcomm Cloud AI 100/AIC100 and SA9000P (part of
+Snapdragon Ride) products. Qualcomm Cloud AI 100 is a PCIe adapter card which
+contains a dedicated SoC ASIC for the purpose of efficiently running Artificial
+Intelligence (AI) Deep Learning inference workloads.
+
+The PCIe interface of Qualcomm Cloud AI 100 is capable of Gen4 x8. An
+individual SoC on a card can have up to 16 NSPs for running workloads. Each SoC
+has an A53 management CPU. On card, there can be up to 32 GB of DDR.
+
+Multiple Qualcomm Cloud AI 100 cards can be hosted in a single system to scale
+overall performance.
+
+
+Hardware Description
+--------------------
+An AIC100 card consists of an AIC100 SoC, on-card DDR, and a set of misc
+peripherals (PMICs, etc).
+
+An AIC100 card can either be a PCIe HHHL form factor (a traditional PCIe card),
+or a Dual M.2 card. Both use PCIe to connect to the host system.
+
+As a PCIe endpoint/adapter, AIC100 uses the standard VendorID(VID)/
+ProductID(PID) combination to uniquely identify itself to the host. AIC100
+uses the standard Qualcomm VID (0x17cb). All AIC100 instances use the same
+AIC100 PID (0xa100).
+
+AIC100 does not implement FLR (function level reset).
+
+AIC100 implements MSI but does not implement MSI-X. AIC100 requires 17 MSIs to
+operate (1 for MHI, 16 for the DMA Bridge).
+
+As a PCIe device, AIC100 utilizes BARs to provide host interfaces to the device
+hardware. AIC100 provides 3, 64-bit BARs.
+
+-The first BAR is 4K in size, and exposes the MHI interface to the host.
+
+-The second BAR is 2M in size, and exposes the DMA Bridge interface to the host.
+
+-The third BAR is variable in size based on an individual AIC100's
+ configuration, but defaults to 64K. This BAR currently has no purpose.
+
+From the host perspective, AIC100 has several key hardware components-
+QSM (QAIC Service Manager)
+NSPs (Neural Signal Processor)
+DMA Bridge
+DDR
+MHI (Modem Host Interface)
+
+QSM - QAIC Service Manager. This is an ARM A53 CPU that runs the primary
+firmware of the card and performs on-card management tasks. It also
+communicates with the host (QAIC/userspace) via MHI. Each AIC100 has one of
+these.
+
+NSP - Neural Signal Processor. Each AIC100 has up to 16 of these. These are
+the processors that run the workloads on AIC100. Each NSP is a Qualcomm Hexagon
+(Q6) DSP with HVX and HMX. Each NSP can only run one workload at a time, but
+multiple NSPs may be assigned to a single workload. Since each NSP can only run
+one workload, AIC100 is limited to 16 concurrent workloads. Workload
+"scheduling" is under the purview of the host. AIC100 does not automatically
+timeslice.
+
+DMA Bridge - The DMA Bridge is a custom DMA engine that manages the flow of
+data in and out of workloads. AIC100 has one of these. The DMA Bridge has 16
+channels, each consisting of a set of request/response FIFOs. Each active
+workload is assigned a single DMA Bridge channel. The DMA Bridge exposes
+hardware registers to manage the FIFOs (head/tail pointers), but requires host
+memory to store the FIFOs.
+
+DDR - AIC100 has on-card DDR. In total, an AIC100 can have up to 32 GB of DDR.
+This DDR is used to store workloads, data for the workloads, and is used by the
+QSM for managing the device. NSPs are granted access to sections of the DDR by
+the QSM. The host does not have direct access to the DDR, and must make
+requests to the QSM to transfer data to the DDR.
+
+MHI - AIC100 has one MHI interface over PCIe. MHI itself is documented at
+Documentation/mhi/index.rst. MHI is the mechanism the host (QAIC/userspace)
+uses to communicate with the QSM. Except for workload data via the DMA Bridge,
+all interaction with the device occurs via MHI.
+
+
+High-level Use Flow
+-------------------
+AIC100 is a programmable accelerator. AIC100 is typically used for running
+neural networks in inferencing mode to efficiently perform AI operations.
+AIC100 is not intended for training neural networks. AIC100 can be utilized
+for generic compute workloads.
+
+Assuming a user wants to utilize AIC100, they would follow these steps:
+
+1. Compile the workload into an ELF targeting the NSP(s)
+2. Make requests to the QSM to load the workload and related artifacts into the
+   device DDR
+3. Make a request to the QSM to activate the workload onto a set of idle NSPs
+4. Make requests to the DMA Bridge to send input data to the workload to be
+   processed, and other requests to receive processed output data from the
+   workload.
+5. Once the workload is no longer required, make a request to the QSM to
+   deactivate the workload, thus putting the NSPs back into an idle state.
+6. Once the workload and related artifacts are no longer needed for future
+   sessions, make requests to the QSM to unload the data from DDR. This frees
+   the DDR to be used by other users.
+
+
+Boot Flow
+---------
+AIC100 uses a flashless boot flow, derived from Qualcomm MSMs.
+
+When AIC100 is first powered on, it begins executing PBL (Primary Bootloader)
+from ROM. PBL enumerates the PCIe link, and initializes the BHI (Boot Host
+Interface) component of MHI.
+
+Using BHI, the host points PBL to the location of the SBL (Secondary Bootloader)
+image. The PBL pulls the image from the host, validates it, and begins
+execution of SBL.
+
+SBL initializes MHI, and uses MHI to notify the host that the device has entered
+the SBL stage. SBL performs a number of operations:
+-SBL initializes the majority of hardware (anything PBL left uninitialized),
+ including DDR.
+-SBL offloads the bootlog to the host.
+-SBL synchronizes timestamps with the host for future logging.
+-SBL uses the Sahara protocol to obtain the runtime firmware images from the
+ host.
+
+Once SBL has obtained and validated the runtime firmware, it brings the NSPs out
+of reset, and jumps into the QSM.
+
+The QSM uses MHI to notify the host that the device has entered the QSM stage
+(AMSS in MHI terms). At this point, the AIC100 device is fully functional, and
+ready to process workloads.
+
+
+MHI Channels
+------------
+AIC100 defines a number of MHI channels for different purposes. This is a list
+of the defined channels, and their uses.
+
+QAIC_LOOPBACK
+Channels 0/1
+Valid for AMSS
+Any data sent to the device on this channel is sent back to the host.
+
+QAIC_SAHARA
+Channels 2/3
+Valid for SBL
+Used by SBL to obtain the runtime firmware from the host.
+
+QAIC_DIAG
+Channels 4/5
+Valid for AMSS
+Used to communicate with QSM via the Diag protocol.
+
+QAIC_SSR
+Channels 6/7
+Valid for AMSS
+Used to notify the host of subsystem restart events, and to offload SSR
+crashdumps.
+
+QAIC_QDSS
+Channels 8/9
+Valid for AMSS
+Used for the Qualcomm Debug Subsystem.
+
+QAIC_CONTROL
+Channels 10/11
+Valid for AMSS
+Used for the Neural Network Control (NNC) protocol. This is the primary channel
+between host and QSM for managing workloads.
+
+QAIC_LOGGING
+Channels 12/13
+Valid for SBL
+Used by the SBL to send the bootlog to the host.
+
+QAIC_STATUS
+Channels 14/15
+Valid for AMSS
+Used to notify the host of Reliability, Accessibility, Serviceability (RAS)
+events.
+
+QAIC_TELEMETRY
+Channels 16/17
+Valid for AMSS
+Used to get/set power/thermal/etc attributes.
+
+QAIC_DEBUG
+Channels 18/19
+Valid for AMSS
+Not used.
+
+QAIC_TIMESYNC
+Channels 20/21
+Valid for SBL/AMSS
+Used to synchronize timestamps in the device side logs with the host time
+source.
+
+
+DMA Bridge
+----------
+The DMA Bridge is one of the main interfaces to the host (QAIC) from the device
+(the other being MHI). As part of activating a workload to run on NSPs, the QSM
+assigns that network a DMA Bridge channel. A workload's DMA Bridge channel
+(DBC for short) is solely for the use of that workload and is not shared with
+other workloads.
+
+Each DBC is a pair of FIFOs that manage data in and out of the workload. One
+FIFO is the request FIFO. The other FIFO is the response FIFO.
+
+Each DBC contains 4 registers in hardware:
+-Request FIFO head pointer (offset 0x0). Read only to the host. Indicates the
+ latest item in the FIFO the device has consumed.
+-Request FIFO tail pointer (offset 0x4). Read/write by the host. Host
+ increments this register to add new items to the FIFO.
+-Response FIFO head pointer (offset 0x8). Read/write by the host. Indicates
+ the latest item in the FIFO the host has consumed.
+-Response FIFO tail pointer (offset 0xc). Read only to the host. Device
+ increments this register to add new items to the FIFO.
+
+The values in each register are indexes in the FIFO. To get the location of the
+FIFO element pointed to by the register -
+FIFO base address + register * element size.
+
+DBC registers are exposed to the host via the second BAR. Each DBC consumes
+0x1000 of space in the BAR.
+
+The actual FIFOs are backed by host memory. When sending a request to the QSM
+to activate a network, the host must donate memory to be used for the FIFOs.
+Due to internal mapping limitations of the device, a single contiguous chunk of
+memory must be provided per DBC, which hosts both FIFOs.
The request FIFO will
+consume the beginning of the memory chunk, and the response FIFO will consume
+the end of the memory chunk.
+
+A request FIFO element has the following structure:
+
+{
+        u16 req_id;
+        u8  seq_id;
+        u8  pcie_dma_cmd;
+        u32 reserved;
+        u64 pcie_dma_source_addr;
+        u64 pcie_dma_dest_addr;
+        u32 pcie_dma_len;
+        u32 reserved;
+        u64 doorbell_addr;
+        u8  doorbell_attr;
+        u8  reserved;
+        u16 reserved;
+        u32 doorbell_data;
+        u32 sem_cmd0;
+        u32 sem_cmd1;
+        u32 sem_cmd2;
+        u32 sem_cmd3;
+}
+
+Request field descriptions:
+
+req_id- request ID. A request FIFO element and a response FIFO element with the
+        same request ID refer to the same command.
+
+seq_id- sequence ID within a request. Ignored by the DMA Bridge.
+
+pcie_dma_cmd- describes the DMA element of this request.
+        Bit(7) is the force msi flag, which overrides the DMA Bridge MSI logic
+                and generates an MSI when this request is complete, if QSM
+                configures the DMA Bridge to look at this bit.
+        Bits(6:5) are reserved.
+        Bit(4) is the completion code flag, and indicates that the DMA Bridge
+                shall generate a response FIFO element when this request is
+                complete.
+        Bit(3) indicates if this request is a linked list transfer(0) or a bulk
+                transfer(1).
+        Bit(2) is reserved.
+        Bits(1:0) indicate the type of transfer. No transfer(0), to device(1),
+                from device(2). Value 3 is illegal.
+
+pcie_dma_source_addr- source address for a bulk transfer, or the address of the
+        linked list.
+
+pcie_dma_dest_addr- destination address for a bulk transfer.
+
+pcie_dma_len- length of the bulk transfer. Note that the size of this field
+        limits transfers to 4G in size.
+
+doorbell_addr- address of the doorbell to ring when this request is complete.
+
+doorbell_attr- doorbell attributes.
+        Bit(7) indicates if a write to a doorbell is to occur.
+        Bits(6:2) are reserved.
+        Bits(1:0) contain the encoding of the doorbell length. 0 is 32-bit,
+                1 is 16-bit, 2 is 8-bit, 3 is reserved.
The doorbell address
+                must be naturally aligned to the specified length.
+
+doorbell_data- data to write to the doorbell. Only the bits corresponding to
+        the doorbell length are valid.
+
+sem_cmdN- semaphore command.
+        Bit(31) indicates this semaphore command is enabled.
+        Bit(30) is the to-device DMA fence. Block this request until all
+                to-device DMA transfers are complete.
+        Bit(29) is the from-device DMA fence. Block this request until all
+                from-device DMA transfers are complete.
+        Bits(28:27) are reserved.
+        Bits(26:24) are the semaphore command. 0 is NOP. 1 is init with the
+                specified value. 2 is increment. 3 is decrement. 4 is wait
+                until the semaphore is equal to the specified value. 5 is wait
+                until the semaphore is greater or equal to the specified value.
+                6 is "P", wait until semaphore is greater than 0, then
+                decrement by 1. 7 is reserved.
+        Bit(23) is reserved.
+        Bit(22) is the semaphore sync. 0 is post sync, which means that the
+                semaphore operation is done after the DMA transfer. 1 is
+                presync, which gates the DMA transfer. Only one presync is
+                allowed per request.
+        Bit(21) is reserved.
+        Bits(20:16) are the index of the semaphore to operate on.
+        Bits(15:12) are reserved.
+        Bits(11:0) are the semaphore value to use in operations.
+
+Overall, a request is processed in 4 steps:
+1. If specified, the presync semaphore condition must be true
+2. If enabled, the DMA transfer occurs
+3. If specified, the postsync semaphore conditions must be true
+4. If enabled, the doorbell is written
+
+By using the semaphores in conjunction with the workload running on the NSPs,
+the data pipeline can be synchronized such that the host can queue multiple
+requests of data for the workload to process, but the DMA Bridge will only copy
+the data into the memory of the workload when the workload is ready to process
+the next input.
+
+Once a request is fully processed, a response FIFO element is generated if
+specified in pcie_dma_cmd.
The structure of a response FIFO element:
+
+{
+        u16 req_id;
+        u16 completion_code;
+}
+
+req_id- matches the req_id of the request that generated this element.
+
+completion_code- status of this request. 0 is success. Non-zero is an error.
+
+The DMA Bridge will generate an MSI to the host as a reaction to activity in the
+response FIFO of a DBC. The DMA Bridge hardware has an IRQ storm mitigation
+algorithm, where it will only generate an MSI when the response FIFO transitions
+from empty to non-empty (unless force MSI is enabled and triggered). In
+response to this MSI, the host is expected to drain the response FIFO, and must
+take care to handle any race conditions between draining the FIFO, and the
+device inserting elements into the FIFO.
+
+It is still possible for an IRQ storm to occur, if the workload is particularly
+quick, and the host is responsive. If the host can drain the response FIFO as
+quickly as the device can insert elements into it, then the device will
+frequently transition the response FIFO from empty to non-empty and generate
+MSIs at a rate equivalent to the speed of the workload's ability to process
+inputs. The lprnet (license plate reader network) workload is known to trigger
+this condition, and can generate in excess of 100k MSIs per second. It has been
+observed that most systems cannot tolerate this for long, and will crash due to
+some form of watchdog due to the overhead of the interrupt controller
+interrupting the host CPU.
+
+To mitigate this issue, the QAIC driver implements specific IRQ handling. When
+QAIC receives an IRQ, it disables that line. This prevents the interrupt
+controller from interrupting the CPU. Then QAIC drains the FIFO. Once the FIFO
+is drained, QAIC implements a "last chance" polling algorithm where QAIC will
+sleep for a time to see if the workload will generate more activity. The IRQ
+line remains disabled during this time.
If no activity is detected, QAIC exits
+polling mode and reenables the IRQ line.
+
+This mitigation in QAIC is very effective. The same lprnet use case that
+generates 100k IRQs per second (per /proc/interrupts) is reduced to roughly 64
+IRQs over 5 minutes while keeping the host system stable, and having the same
+workload throughput performance (within run to run noise variation).
+
+
+Neural Network Control (NNC) Protocol
+-------------------------------------
+The NNC protocol is how the host makes requests to the QSM to manage workloads.
+It uses the QAIC_CONTROL MHI channel.
+
+Each NNC request is packaged into a message. Each message is a series of
+transactions. A passthrough type transaction can contain elements known as
+commands. QAIC understands the structure of a message, and all of the
+transactions. QAIC does not understand commands (the payload of a passthrough
+transaction).
+
+QSM requires NNC messages be little endian encoded and the fields be naturally
+aligned. Since there are 64-bit elements in some NNC messages, 64-bit alignment
+must be maintained.
+
+A message contains a header and then a series of transactions. A message may be
+at most 4K in size from QSM to the host. From the host to the QSM, a message
+can be at most 64K (maximum size of a single MHI packet), but there is a
+continuation feature where message N+1 can be marked as a continuation of
+message N. This is used for exceedingly large DMA xfer transactions.
+
+Transaction descriptions:
+
+passthrough- Allows userspace to send an opaque payload directly to the QSM.
+        This is used for NNC commands. Userspace is responsible for managing
+        the QSM message requirements in the payload.
+
+dma_xfer- DMA transfer. Describes an object that the QSM should DMA into the
+        device via address and size tuples. QAIC ensures the data is mapped to
+        device accessible addresses.
+
+activate- Activate a workload onto NSPs. QAIC uses this transaction to assign
+        host memory to be used by the DBC.
QAIC uses the response, which
+        contains the assigned DBC, to ensure only the requesting user is
+        allowed to access the assigned DBC.
+
+deactivate- Deactivate an active workload and return the NSPs to idle. QAIC
+        uses the transaction to remove access to a DBC from the requesting
+        user.
+
+status- Query the QSM about its NNC implementation. Returns the NNC version,
+        and if CRC is used.
+
+terminate- Release a user's resources. Used by QAIC to indicate to QSM that a
+        particular user has gone away, and all of their resources can be
+        cleaned up.
+
+dma_xfer_cont- Continuation of a previous DMA transfer. If a DMA transfer
+        cannot be specified in a single message (highly fragmented), this
+        transaction can be used to specify more ranges.
+
+validate_partition- Query to QSM to determine if a partition identifier is
+        valid.
+
+
+Each message is tagged with a user id, and a partition id. The user id allows
+QSM to track resources, and release them when the user goes away (e.g. the
+process crashes). A partition id identifies the resource partition that QSM
+manages, which this message applies to.
+
+Messages may have CRCs. Messages should have CRCs applied until the QSM
+reports via the status transaction that CRCs are not needed. The QSM on the
+SA9000P requires CRCs for black channel safing.
+
+
+Subsystem Restart (SSR)
+-----------------------
+SSR is the concept of limiting the impact of an error. An AIC100 device may
+have multiple users, each with their own workload running. If the workload of
+one user crashes, the fallout of that should be limited to that workload and not
+impact other workloads. SSR accomplishes this.
+
+If a particular workload crashes, QSM notifies the host via the QAIC_SSR MHI
+channel. This notification identifies the workload by its assigned DBC. A
+multi-stage recovery process is then used to cleanup both sides, and get the
+DBC/NSPs into a working state.
At each stage, QAIC sends uevents on the DBC to
+userspace so that userspace is aware of the SSR event, and the event's progress.
+
+When SSR occurs, any state in the workload is lost. Any inputs that were in
+process, or queued but not yet serviced, are lost. The loaded artifacts will
+remain in on-card DDR, but userspace will need to re-activate the workload if
+it desires to recover the workload.
+
+
+Reliability, Accessibility, Serviceability (RAS)
+------------------------------------------------
+AIC100 is expected to be deployed in server systems where RAS ideology is
+applied. Simply put, RAS is the concept of detecting, classifying, and
+reporting errors. While PCIe has AER (Advanced Error Reporting) which factors
+into RAS, AER does not allow for a device to report details about internal
+errors. Therefore, AIC100 implements a custom RAS mechanism. When a RAS event
+occurs, QSM will report the event with appropriate details via the QAIC_STATUS
+MHI channel. QAIC will receive these reports, decode them, and print the event
+to the kernel log (much like AER handling). A sysadmin may determine that a
+particular device needs additional service based on RAS reports.
+
+
+Telemetry
+---------
+QSM has the ability to report various physical attributes of the device, and in
+some cases, to allow the host to control them. Examples include thermal limits,
+thermal readings, and power readings. These items are communicated via the
+QAIC_TELEMETRY MHI channel.
+
+Many of these attributes apply to multiple components of the device. The
+scheme QAIC uses is that attribute0 refers to that attribute at the board level.
+Attribute1 refers to that attribute at the SoC level. Attribute2 refers to that
+attribute at the DDR level.
+
+
+Versioning
+----------
+QAIC provides a module/DRM version in the scheme Major.Minor.Patch.
+
+The Major number is incremented when a code change results in a breaking change
+to the uAPI. This should never happen.
+
+The Minor number is incremented when a code change results in a backwards
+compatible extension (new feature) to the uAPI. This is expected to be rare.
+
+The Patch number is incremented when a code change results in an internal change
+to QAIC, such as a bug fix. This can be used to determine if the current
+version of the driver contains some known update.
+
+An update to the Major number will reset the Minor number and Patch number.
+An update to the Minor number will reset the Patch number. Examples:
+1.2.3 -> 2.0.0
+1.2.3 -> 1.3.0
+
+Versions:
+1.0.X - initial version of the DRM driver, accepted into upstream Linux
+
+
+QSM can report a version number of the NNC protocol it supports. This is in the
+form of a Major number and a Minor number.
+
+Major number updates indicate changes to the NNC protocol which impact the
+message format, or transactions (impacts QAIC).
+
+Minor number updates indicate changes to the NNC protocol which impact the
+commands (does not impact QAIC).
+
+
+uAPI
+----------
+QAIC defines a number of driver specific IOCTLs as part of the userspace API.
+This section describes those APIs.
+
+DRM_IOCTL_QAIC_MANAGE:
+This IOCTL allows userspace to send an NNC request to the QSM. The call will
+block until a response is received, or the request has timed out.
+
+DRM_IOCTL_QAIC_CREATE_BO:
+This IOCTL allows userspace to allocate a buffer object (BO) which can send or
+receive data from a workload. The call will return a GEM handle that
+represents the allocated buffer. The BO is not usable until it has been sliced
+(see DRM_IOCTL_QAIC_ATTACH_SLICE_BO).
+
+DRM_IOCTL_QAIC_MMAP_BO:
+This IOCTL allows userspace to prepare an allocated BO to be mmap'd into the
+userspace process.
+
+DRM_IOCTL_QAIC_ATTACH_SLICE_BO:
+This IOCTL allows userspace to slice a BO in preparation for sending the BO to
+the device. Slicing is the operation of describing what portions of a BO get
+sent where to a workload.
This requires a set of DMA transfers for the DMA
+Bridge, and as such, locks the BO to a specific DBC.
+
+DRM_IOCTL_QAIC_EXECUTE_BO:
+This IOCTL allows userspace to submit a set of sliced BOs to the device. The
+call is non-blocking. Success only indicates that the BOs have been queued
+to the device, but does not guarantee they have been executed.
+
+DRM_IOCTL_QAIC_PARTIAL_EXECUTE_BO:
+This IOCTL operates like DRM_IOCTL_QAIC_EXECUTE_BO, but it allows userspace to
+shrink the BOs sent to the device for this specific call. If a BO typically has
+N inputs, but only a subset of those is available, this IOCTL allows userspace
+to indicate that only the first M bytes of the BO should be sent to the device
+to minimize data transfer overhead. This IOCTL dynamically recomputes the
+slicing, and therefore has some processing overhead before the BOs can be queued
+to the device.
+
+DRM_IOCTL_QAIC_WAIT_BO:
+This IOCTL allows userspace to determine when a particular BO has been processed
+by the device. The call will block until either the BO has been processed and
+can be re-queued to the device, or a timeout occurs.
+
+DRM_IOCTL_QAIC_PERF_STATS_BO:
+This IOCTL allows userspace to collect performance statistics on the most
+recent execution of a BO. This allows userspace to construct an end to end
+timeline of the BO processing for a performance analysis.
From patchwork Mon Aug 15 18:42:24 2022
X-Patchwork-Submitter: Jeffrey Hugo
X-Patchwork-Id: 597289
From: Jeffrey Hugo
Subject: [RFC PATCH 02/14] drm/qaic: Add uapi and core driver file
Date: Mon, 15 Aug 2022 12:42:24 -0600
Message-ID: <1660588956-24027-3-git-send-email-quic_jhugo@quicinc.com>
In-Reply-To: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com>
References: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

Add the QAIC driver uapi file and core driver file that binds to
the PCIe device. The core driver file also creates the drm device and manages all the interconnections between the different parts of the driver. Change-Id: I28854e8a5dacda217439be2f65a4ab67d4dccd1e Signed-off-by: Jeffrey Hugo --- drivers/gpu/drm/qaic/qaic_drv.c | 825 ++++++++++++++++++++++++++++++++++++++++ include/uapi/drm/qaic_drm.h | 283 ++++++++++++++ 2 files changed, 1108 insertions(+) create mode 100644 drivers/gpu/drm/qaic/qaic_drv.c create mode 100644 include/uapi/drm/qaic_drm.h diff --git a/drivers/gpu/drm/qaic/qaic_drv.c b/drivers/gpu/drm/qaic/qaic_drv.c new file mode 100644 index 0000000..0e139e6 --- /dev/null +++ b/drivers/gpu/drm/qaic/qaic_drv.c @@ -0,0 +1,825 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* Copyright (c) 2019-2021, The Linux Foundation. All rights reserved. */ +/* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved. */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "mhi_controller.h" +#include "qaic.h" +#include "qaic_debugfs.h" +#include "qaic_ras.h" +#include "qaic_ssr.h" +#include "qaic_telemetry.h" +#define CREATE_TRACE_POINTS +#include "qaic_trace.h" + +MODULE_IMPORT_NS(DMA_BUF); + +#define PCI_DEV_AIC100 0xa100 +#define QAIC_NAME "qaic" +#define STR2(s) #s +#define STR(s) STR2(s) +#define MAJOR_VER 1 +#define MINOR_VER 0 +#define PATCH_VER 0 +#define QAIC_DESC "Qualcomm Cloud AI Accelerators" + +static unsigned int datapath_polling; +module_param(datapath_polling, uint, 0400); +bool poll_datapath; + +static u16 cntl_major = 5; +static u16 cntl_minor;/* 0 */ +static bool link_up; + +static int qaic_create_drm_device(struct qaic_device *qdev, s32 partition_id, + struct qaic_user *owner); +static void qaic_destroy_drm_device(struct qaic_device *qdev, s32 partition_id, + struct qaic_user *owner); + +static void free_usr(struct kref *kref) +{ 
+ struct qaic_user *usr = container_of(kref, struct qaic_user, ref_count); + + cleanup_srcu_struct(&usr->qddev_lock); + kfree(usr); +} + +static int qaic_open(struct drm_device *dev, struct drm_file *file) +{ + struct qaic_drm_device *qddev = dev->dev_private; + struct qaic_device *qdev = qddev->qdev; + struct qaic_user *usr; + int rcu_id; + int ret; + + rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + srcu_read_unlock(&qdev->dev_lock, rcu_id); + return -ENODEV; + } + + usr = kmalloc(sizeof(*usr), GFP_KERNEL); + if (!usr) { + srcu_read_unlock(&qdev->dev_lock, rcu_id); + return -ENOMEM; + } + + usr->handle = current->pid; + usr->qddev = qddev; + atomic_set(&usr->chunk_id, 0); + init_srcu_struct(&usr->qddev_lock); + kref_init(&usr->ref_count); + + ret = mutex_lock_interruptible(&qddev->users_mutex); + if (ret) { + cleanup_srcu_struct(&usr->qddev_lock); + kfree(usr); + srcu_read_unlock(&qdev->dev_lock, rcu_id); + return ret; + } + + list_add(&usr->node, &qddev->users); + mutex_unlock(&qddev->users_mutex); + + file->driver_priv = usr; + + srcu_read_unlock(&qdev->dev_lock, rcu_id); + return 0; +} + +static void qaic_postclose(struct drm_device *dev, struct drm_file *file) +{ + struct qaic_user *usr = file->driver_priv; + struct qaic_drm_device *qddev; + struct qaic_device *qdev; + int qdev_rcu_id; + int usr_rcu_id; + int i; + + qddev = usr->qddev; + usr_rcu_id = srcu_read_lock(&usr->qddev_lock); + if (qddev) { + qdev = qddev->qdev; + qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); + if (!qdev->in_reset) { + qaic_release_usr(qdev, usr); + for (i = 0; i < qdev->num_dbc; ++i) + if (qdev->dbc[i].usr && + qdev->dbc[i].usr->handle == usr->handle) + release_dbc(qdev, i, true); + + /* Remove child devices */ + if (qddev->partition_id == QAIC_NO_PARTITION) + qaic_destroy_drm_device(qdev, QAIC_NO_PARTITION, usr); + } + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); + + mutex_lock(&qddev->users_mutex); + if (!list_empty(&usr->node)) + list_del_init(&usr->node); 
+ mutex_unlock(&qddev->users_mutex); + } + + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + kref_put(&usr->ref_count, free_usr); + + file->driver_priv = NULL; +} + +static int qaic_part_dev_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv) +{ + struct qaic_device *qdev; + struct qaic_user *usr; + u32 partition_id; + int qdev_rcu_id; + int usr_rcu_id; + int ret = 0; + u16 remove; + + usr = file_priv->driver_priv; + if (!usr) + return -EINVAL; + + usr_rcu_id = srcu_read_lock(&usr->qddev_lock); + if (!usr->qddev) { + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return -ENODEV; + } + + qdev = usr->qddev->qdev; + if (!qdev) { + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return -ENODEV; + } + + qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + ret = -ENODEV; + goto out; + } + + /* This IOCTL is only supported for base devices. */ + if (usr->qddev->partition_id != QAIC_NO_PARTITION) { + ret = -ENOTTY; + goto out; + } + + ret = qaic_data_get_reservation(qdev, usr, data, &partition_id, + &remove); + if (ret) + goto out; + + if (remove == 1) + qaic_destroy_drm_device(qdev, partition_id, usr); + else + ret = qaic_create_drm_device(qdev, partition_id, usr); + +out: + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + + return ret; +} + +DEFINE_DRM_GEM_FOPS(qaic_drm_fops); + +static const struct drm_ioctl_desc qaic_drm_ioctls[] = { + DRM_IOCTL_DEF_DRV(QAIC_MANAGE, qaic_manage_ioctl, DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(QAIC_CREATE_BO, qaic_create_bo_ioctl, DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(QAIC_MMAP_BO, qaic_mmap_bo_ioctl, DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(QAIC_ATTACH_SLICE_BO, qaic_attach_slice_bo_ioctl, DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(QAIC_EXECUTE_BO, qaic_execute_bo_ioctl, DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(QAIC_PARTIAL_EXECUTE_BO, qaic_partial_execute_bo_ioctl, DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(QAIC_WAIT_BO, qaic_wait_bo_ioctl, 
DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(QAIC_PERF_STATS_BO, qaic_perf_stats_bo_ioctl, DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(QAIC_PART_DEV, qaic_part_dev_ioctl, DRM_RENDER_ALLOW), +}; + +static const struct drm_driver qaic_drm_driver = { + .driver_features = DRIVER_GEM | DRIVER_RENDER, + + .name = QAIC_NAME, + .desc = QAIC_DESC, + .date = "20190618", + .major = MAJOR_VER, + .minor = MINOR_VER, + .patchlevel = PATCH_VER, + + .fops = &qaic_drm_fops, + .open = qaic_open, + .postclose = qaic_postclose, + +#if defined(CONFIG_DEBUG_FS) + .debugfs_init = qaic_debugfs_init, +#endif + + .ioctls = qaic_drm_ioctls, + .num_ioctls = ARRAY_SIZE(qaic_drm_ioctls), + .prime_fd_to_handle = drm_gem_prime_fd_to_handle, + .gem_prime_import = qaic_gem_prime_import, +}; + +static int qaic_create_drm_device(struct qaic_device *qdev, s32 partition_id, + struct qaic_user *owner) +{ + struct qaic_drm_device *qddev; + struct drm_device *ddev; + struct device *pdev; + int ret; + + /* + * Partition id QAIC_NO_PARTITION indicates that the device was created + * on mhi_probe and id > QAIC_NO_PARTITION indicates a partition + * created using IOCTL. So, pdev for primary device is the pci dev and + * the parent for partition dev is the primary device. 
+ */ + if (partition_id == QAIC_NO_PARTITION) + pdev = &qdev->pdev->dev; + else + pdev = qdev->base_dev->ddev->dev; + + qddev = kzalloc(sizeof(*qddev), GFP_KERNEL); + if (!qddev) { + ret = -ENOMEM; + goto qddev_fail; + } + + ddev = drm_dev_alloc(&qaic_drm_driver, pdev); + if (IS_ERR(ddev)) { + ret = PTR_ERR(ddev); + goto ddev_fail; + } + + ddev->dev_private = qddev; + qddev->ddev = ddev; + + if (partition_id == QAIC_NO_PARTITION) + qdev->base_dev = qddev; + qddev->qdev = qdev; + qddev->partition_id = partition_id; + qddev->owner = owner; + INIT_LIST_HEAD(&qddev->users); + mutex_init(&qddev->users_mutex); + + mutex_lock(&qdev->qaic_drm_devices_mutex); + list_add(&qddev->node, &qdev->qaic_drm_devices); + mutex_unlock(&qdev->qaic_drm_devices_mutex); + + ret = qaic_sysfs_init(qddev); + if (ret) { + pci_dbg(qdev->pdev, "%s: sysfs_init failed %d\n", __func__, ret); + goto sysfs_init_fail; + } + + ret = drm_dev_register(ddev, 0); + if (ret) { + pci_dbg(qdev->pdev, "%s: drm_dev_register failed %d\n", __func__, ret); + goto drm_reg_fail; + } + + return 0; + +drm_reg_fail: + qaic_sysfs_remove(qddev); +sysfs_init_fail: + mutex_destroy(&qddev->users_mutex); + mutex_lock(&qdev->qaic_drm_devices_mutex); + list_del(&qddev->node); + mutex_unlock(&qdev->qaic_drm_devices_mutex); + if (partition_id == QAIC_NO_PARTITION) + qdev->base_dev = NULL; + drm_dev_put(ddev); +ddev_fail: + kfree(qddev); +qddev_fail: + return ret; +} + +static void qaic_destroy_drm_device(struct qaic_device *qdev, s32 partition_id, + struct qaic_user *owner) +{ + struct qaic_drm_device *qddev; + struct qaic_drm_device *q; + struct qaic_user *usr; + + list_for_each_entry_safe(qddev, q, &qdev->qaic_drm_devices, node) { + /* + * Skip devices in case we just want to remove devices + * specific to an owner or reservation id.
+ * + * owner partition_id notes + * ---------------------------------- + * NULL NO_PARTITION delete base + all derived (qdev + * reset) + * !NULL NO_PARTITION delete derived devs created by + * owner. + * !NULL >NO_PARTITION delete derived dev identified by + * the partition id and created by + * owner + * NULL >NO_PARTITION invalid (no-op) + * + * if partition_id is any value < QAIC_NO_PARTITION this will be + * a no-op. + */ + if (owner && owner != qddev->owner) + continue; + + if (partition_id != QAIC_NO_PARTITION && + partition_id != qddev->partition_id && !owner) + continue; + + /* + * Existing users get unresolvable errors until they close FDs. + * Need to sync carefully with users calling close(). The + * list of users can be modified elsewhere when the lock isn't + * held here, but sync'ing the SRCU with the mutex held + * could deadlock. Grab the mutex so that the list will be + * unmodified. The user we get will exist as long as the + * lock is held. Signal that the qddev is going away, and + * grab a reference to the user so they don't go away for + * synchronize_srcu(). Then release the mutex to avoid + * deadlock and make sure the user has observed the signal. + * With the lock released, we cannot maintain any state of the + * user list.
+ */ + mutex_lock(&qddev->users_mutex); + while (!list_empty(&qddev->users)) { + usr = list_first_entry(&qddev->users, struct qaic_user, + node); + list_del_init(&usr->node); + kref_get(&usr->ref_count); + usr->qddev = NULL; + mutex_unlock(&qddev->users_mutex); + synchronize_srcu(&usr->qddev_lock); + kref_put(&usr->ref_count, free_usr); + mutex_lock(&qddev->users_mutex); + } + mutex_unlock(&qddev->users_mutex); + + if (qddev->ddev) { + qaic_sysfs_remove(qddev); + drm_dev_unregister(qddev->ddev); + drm_dev_put(qddev->ddev); + } + + list_del(&qddev->node); + kfree(qddev); + } +} + +static int qaic_mhi_probe(struct mhi_device *mhi_dev, + const struct mhi_device_id *id) +{ + struct qaic_device *qdev; + u16 major, minor; + int ret; + + /* + * Invoking this function indicates that the control channel to the + * device is available. We use that as a signal to indicate that + * the device side firmware has booted. The device side firmware + * manages the device resources, so we need to communicate with it + * via the control channel in order to utilize the device. Therefore + * we wait until this signal to create the drm dev that userspace will + * use to control the device, because without the device side firmware, + * userspace can't do anything useful. + */ + + qdev = pci_get_drvdata(to_pci_dev(mhi_dev->mhi_cntrl->cntrl_dev)); + + qdev->in_reset = false; + + dev_set_drvdata(&mhi_dev->dev, qdev); + qdev->cntl_ch = mhi_dev; + + ret = qaic_control_open(qdev); + if (ret) { + pci_dbg(qdev->pdev, "%s: control_open failed %d\n", __func__, ret); + goto err; + } + + ret = get_cntl_version(qdev, NULL, &major, &minor); + if (ret || major != cntl_major || minor > cntl_minor) { + pci_err(qdev->pdev, "%s: Control protocol version (%d.%d) not supported. Supported version is (%d.%d). 
Ret: %d\n", + __func__, major, minor, cntl_major, cntl_minor, ret); + ret = -EINVAL; + goto close_control; + } + + ret = qaic_create_drm_device(qdev, QAIC_NO_PARTITION, NULL); + + return ret; + +close_control: + qaic_control_close(qdev); +err: + return ret; +} + +static void qaic_mhi_remove(struct mhi_device *mhi_dev) +{ +} + +static void qaic_notify_reset(struct qaic_device *qdev) +{ + int i; + + qdev->in_reset = true; + /* wake up any waiters to avoid waiting for timeouts at sync */ + wake_all_cntl(qdev); + wake_all_telemetry(qdev); + for (i = 0; i < qdev->num_dbc; ++i) + wakeup_dbc(qdev, i); + synchronize_srcu(&qdev->dev_lock); +} + +void qaic_dev_reset_clean_local_state(struct qaic_device *qdev, bool exit_reset) +{ + int i; + + qaic_notify_reset(qdev); + + /* remove drmdevs to prevent new users from coming in */ + if (qdev->base_dev) + qaic_destroy_drm_device(qdev, QAIC_NO_PARTITION, NULL); + + /* start tearing things down */ + for (i = 0; i < qdev->num_dbc; ++i) { + release_dbc(qdev, i, false); + clean_up_ssr(qdev, i); + } + + if (exit_reset) + qdev->in_reset = false; +} + +static int qaic_pci_probe(struct pci_dev *pdev, + const struct pci_device_id *id) +{ + int ret; + int i; + int mhi_irq; + struct qaic_device *qdev; + + qdev = kzalloc(sizeof(*qdev), GFP_KERNEL); + if (!qdev) { + ret = -ENOMEM; + goto qdev_fail; + } + + if (id->device == PCI_DEV_AIC100) { + qdev->num_dbc = 16; + qdev->dbc = kcalloc(qdev->num_dbc, sizeof(*qdev->dbc), + GFP_KERNEL); + if (!qdev->dbc) { + ret = -ENOMEM; + goto device_id_fail; + } + } else { + pci_dbg(pdev, "%s: No matching device found for device id %d\n", + __func__, id->device); + ret = -EINVAL; + goto device_id_fail; + } + + qdev->cntl_wq = alloc_workqueue("qaic_cntl", WQ_UNBOUND, 0); + if (!qdev->cntl_wq) { + ret = -ENOMEM; + goto wq_fail; + } + qdev->tele_wq = alloc_workqueue("qaic_tele", WQ_UNBOUND, 0); + if (!qdev->tele_wq) { + ret = -ENOMEM; + goto tele_wq_fail; + } + qdev->ssr_wq = alloc_workqueue("qaic_ssr", 
WQ_UNBOUND, 0); + if (!qdev->ssr_wq) { + ret = -ENOMEM; + goto ssr_wq_fail; + } + pci_set_drvdata(pdev, qdev); + qdev->pdev = pdev; + mutex_init(&qdev->cntl_mutex); + INIT_LIST_HEAD(&qdev->cntl_xfer_list); + init_srcu_struct(&qdev->dev_lock); + mutex_init(&qdev->tele_mutex); + INIT_LIST_HEAD(&qdev->tele_xfer_list); + INIT_LIST_HEAD(&qdev->bootlog); + mutex_init(&qdev->bootlog_mutex); + INIT_LIST_HEAD(&qdev->qaic_drm_devices); + mutex_init(&qdev->qaic_drm_devices_mutex); + for (i = 0; i < qdev->num_dbc; ++i) { + mutex_init(&qdev->dbc[i].handle_lock); + spin_lock_init(&qdev->dbc[i].xfer_lock); + idr_init(&qdev->dbc[i].buf_handles); + qdev->dbc[i].qdev = qdev; + qdev->dbc[i].id = i; + INIT_LIST_HEAD(&qdev->dbc[i].xfer_list); + init_srcu_struct(&qdev->dbc[i].ch_lock); + init_waitqueue_head(&qdev->dbc[i].dbc_release); + INIT_LIST_HEAD(&qdev->dbc[i].bo_lists); + } + + qdev->bars = pci_select_bars(pdev, IORESOURCE_MEM); + + /* make sure the device has the expected BARs */ + if (qdev->bars != (BIT(0) | BIT(2) | BIT(4))) { + pci_dbg(pdev, "%s: expected BARs 0, 2, and 4 not found in device. 
Found 0x%x\n", + __func__, qdev->bars); + ret = -EINVAL; + goto bar_fail; + } + + ret = pci_enable_device(pdev); + if (ret) + goto enable_fail; + + ret = pci_request_selected_regions(pdev, qdev->bars, "aic100"); + if (ret) + goto request_regions_fail; + + pci_set_master(pdev); + + ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)); + if (ret) + goto dma_mask_fail; + ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64)); + if (ret) + goto dma_mask_fail; + ret = dma_set_max_seg_size(&pdev->dev, UINT_MAX); + if (ret) + goto dma_mask_fail; + + qdev->bar_0 = pci_ioremap_bar(pdev, 0); + if (!qdev->bar_0) { + ret = -ENOMEM; + goto ioremap_0_fail; + } + + qdev->bar_2 = pci_ioremap_bar(pdev, 2); + if (!qdev->bar_2) { + ret = -ENOMEM; + goto ioremap_2_fail; + } + + for (i = 0; i < qdev->num_dbc; ++i) + qdev->dbc[i].dbc_base = qdev->bar_2 + QAIC_DBC_OFF(i); + + ret = pci_alloc_irq_vectors(pdev, 1, 32, PCI_IRQ_MSI); + if (ret < 0) + goto alloc_irq_fail; + + if (ret < 32) { + pci_err(pdev, "%s: Requested 32 MSIs. 
Obtained %d MSIs which is less than the 32 required.\n", + __func__, ret); + ret = -ENODEV; + goto invalid_msi_config; + } + + mhi_irq = pci_irq_vector(pdev, 0); + if (mhi_irq < 0) { + ret = mhi_irq; + goto get_mhi_irq_fail; + } + + for (i = 0; i < qdev->num_dbc; ++i) { + ret = devm_request_threaded_irq(&pdev->dev, + pci_irq_vector(pdev, i + 1), + dbc_irq_handler, + dbc_irq_threaded_fn, + IRQF_SHARED, + "qaic_dbc", + &qdev->dbc[i]); + if (ret) + goto get_dbc_irq_failed; + + if (poll_datapath) { + qdev->dbc[i].irq = pci_irq_vector(pdev, i + 1); + disable_irq_nosync(qdev->dbc[i].irq); + INIT_WORK(&qdev->dbc[i].poll_work, irq_polling_work); + } + } + + qdev->mhi_cntl = qaic_mhi_register_controller(pdev, qdev->bar_0, mhi_irq); + if (IS_ERR(qdev->mhi_cntl)) { + ret = PTR_ERR(qdev->mhi_cntl); + goto mhi_register_fail; + } + + return 0; + +mhi_register_fail: +get_dbc_irq_failed: + for (i = 0; i < qdev->num_dbc; ++i) + devm_free_irq(&pdev->dev, pci_irq_vector(pdev, i + 1), + &qdev->dbc[i]); +get_mhi_irq_fail: +invalid_msi_config: + pci_free_irq_vectors(pdev); +alloc_irq_fail: + iounmap(qdev->bar_2); +ioremap_2_fail: + iounmap(qdev->bar_0); +ioremap_0_fail: +dma_mask_fail: + pci_clear_master(pdev); + pci_release_selected_regions(pdev, qdev->bars); +request_regions_fail: + pci_disable_device(pdev); +enable_fail: + pci_set_drvdata(pdev, NULL); +bar_fail: + for (i = 0; i < qdev->num_dbc; ++i) { + cleanup_srcu_struct(&qdev->dbc[i].ch_lock); + idr_destroy(&qdev->dbc[i].buf_handles); + } + cleanup_srcu_struct(&qdev->dev_lock); + destroy_workqueue(qdev->ssr_wq); +ssr_wq_fail: + destroy_workqueue(qdev->tele_wq); +tele_wq_fail: + destroy_workqueue(qdev->cntl_wq); +wq_fail: + kfree(qdev->dbc); +device_id_fail: + kfree(qdev); +qdev_fail: + return ret; +} + +static void qaic_pci_remove(struct pci_dev *pdev) +{ + struct qaic_device *qdev = pci_get_drvdata(pdev); + int i; + + if (!qdev) + return; + + qaic_dev_reset_clean_local_state(qdev, false); + 
qaic_mhi_free_controller(qdev->mhi_cntl, link_up); + for (i = 0; i < qdev->num_dbc; ++i) { + devm_free_irq(&pdev->dev, pci_irq_vector(pdev, i + 1), + &qdev->dbc[i]); + cleanup_srcu_struct(&qdev->dbc[i].ch_lock); + idr_destroy(&qdev->dbc[i].buf_handles); + } + destroy_workqueue(qdev->cntl_wq); + destroy_workqueue(qdev->tele_wq); + destroy_workqueue(qdev->ssr_wq); + pci_free_irq_vectors(pdev); + iounmap(qdev->bar_0); + pci_clear_master(pdev); + pci_release_selected_regions(pdev, qdev->bars); + pci_disable_device(pdev); + pci_set_drvdata(pdev, NULL); + kfree(qdev->dbc); + kfree(qdev); +} + +static void qaic_pci_shutdown(struct pci_dev *pdev) +{ + link_up = true; + qaic_pci_remove(pdev); +} + +static pci_ers_result_t qaic_pci_error_detected(struct pci_dev *pdev, + pci_channel_state_t error) +{ + return PCI_ERS_RESULT_NEED_RESET; +} + +static void qaic_pci_reset_prepare(struct pci_dev *pdev) +{ + struct qaic_device *qdev = pci_get_drvdata(pdev); + + qaic_notify_reset(qdev); + qaic_mhi_start_reset(qdev->mhi_cntl); + qaic_dev_reset_clean_local_state(qdev, false); +} + +static void qaic_pci_reset_done(struct pci_dev *pdev) +{ + struct qaic_device *qdev = pci_get_drvdata(pdev); + + qdev->in_reset = false; + qaic_mhi_reset_done(qdev->mhi_cntl); +} + +static const struct mhi_device_id qaic_mhi_match_table[] = { + { .chan = "QAIC_CONTROL", }, + {}, +}; + +static struct mhi_driver qaic_mhi_driver = { + .id_table = qaic_mhi_match_table, + .remove = qaic_mhi_remove, + .probe = qaic_mhi_probe, + .ul_xfer_cb = qaic_mhi_ul_xfer_cb, + .dl_xfer_cb = qaic_mhi_dl_xfer_cb, + .driver = { + .name = "qaic_mhi", + .owner = THIS_MODULE, + }, +}; + +static const struct pci_device_id ids[] = { + { PCI_DEVICE(PCI_VENDOR_ID_QCOM, PCI_DEV_AIC100), }, + { 0, } +}; +MODULE_DEVICE_TABLE(pci, ids); + +static const struct pci_error_handlers qaic_pci_err_handler = { + .error_detected = qaic_pci_error_detected, + .reset_prepare = qaic_pci_reset_prepare, + .reset_done = qaic_pci_reset_done, +}; + +static 
struct pci_driver qaic_pci_driver = { + .name = QAIC_NAME, + .id_table = ids, + .probe = qaic_pci_probe, + .remove = qaic_pci_remove, + .shutdown = qaic_pci_shutdown, + .err_handler = &qaic_pci_err_handler, +}; + +static int __init qaic_init(void) +{ + int ret; + + if (datapath_polling) { + poll_datapath = true; + pr_info("qaic: driver initializing in datapath polling mode\n"); + } + + qaic_logging_register(); + + ret = mhi_driver_register(&qaic_mhi_driver); + if (ret) { + pr_debug("qaic: mhi_driver_register failed %d\n", ret); + goto free_class; + } + + ret = pci_register_driver(&qaic_pci_driver); + + if (ret) { + pr_debug("qaic: pci_register_driver failed %d\n", ret); + goto free_mhi; + } + + qaic_telemetry_register(); + qaic_ras_register(); + qaic_ssr_register(); + goto out; + +free_mhi: + mhi_driver_unregister(&qaic_mhi_driver); +free_class: +out: + if (ret) + qaic_logging_unregister(); + + return ret; +} + +static void __exit qaic_exit(void) +{ + pr_debug("qaic: exit\n"); + link_up = true; + pci_unregister_driver(&qaic_pci_driver); + mhi_driver_unregister(&qaic_mhi_driver); + qaic_telemetry_unregister(); + qaic_ras_unregister(); + qaic_ssr_unregister(); + qaic_logging_unregister(); +} + +module_init(qaic_init); +module_exit(qaic_exit); + +MODULE_AUTHOR(QAIC_DESC " Kernel Driver Team"); +MODULE_DESCRIPTION(QAIC_DESC " DRM driver"); +MODULE_LICENSE("GPL v2"); +MODULE_VERSION(STR(MAJOR_VER) "." STR(MINOR_VER) "." STR(PATCH_VER)); diff --git a/include/uapi/drm/qaic_drm.h b/include/uapi/drm/qaic_drm.h new file mode 100644 index 0000000..5fb3981 --- /dev/null +++ b/include/uapi/drm/qaic_drm.h @@ -0,0 +1,283 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note + * + * Copyright (c) 2019-2020, The Linux Foundation. All rights reserved. + * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#ifndef QAIC_DRM_H_ +#define QAIC_DRM_H_ + +#include +#include +#include "drm.h" + +#if defined(__cplusplus) +extern "C" { +#endif + +#define QAIC_MANAGE_MAX_MSG_LENGTH 0x1000 /**< + * The length (4K) includes len and + * count fields of qaic_manage_msg + */ + +enum qaic_sem_flags { + SEM_INSYNCFENCE = 0x1, + SEM_OUTSYNCFENCE = 0x2, +}; + +enum qaic_sem_cmd { + SEM_NOP = 0, + SEM_INIT = 1, + SEM_INC = 2, + SEM_DEC = 3, + SEM_WAIT_EQUAL = 4, + SEM_WAIT_GT_EQ = 5, /**< Greater than or equal */ + SEM_WAIT_GT_0 = 6, /**< Greater than 0 */ +}; + +enum qaic_manage_transaction_type { + TRANS_UNDEFINED = 0, + TRANS_PASSTHROUGH_FROM_USR = 1, + TRANS_PASSTHROUGH_TO_USR = 2, + TRANS_PASSTHROUGH_FROM_DEV = 3, + TRANS_PASSTHROUGH_TO_DEV = 4, + TRANS_DMA_XFER_FROM_USR = 5, + TRANS_DMA_XFER_TO_DEV = 6, + TRANS_ACTIVATE_FROM_USR = 7, + TRANS_ACTIVATE_FROM_DEV = 8, + TRANS_ACTIVATE_TO_DEV = 9, + TRANS_DEACTIVATE_FROM_USR = 10, + TRANS_DEACTIVATE_FROM_DEV = 11, + TRANS_STATUS_FROM_USR = 12, + TRANS_STATUS_TO_USR = 13, + TRANS_STATUS_FROM_DEV = 14, + TRANS_STATUS_TO_DEV = 15, + TRANS_TERMINATE_FROM_DEV = 16, + TRANS_TERMINATE_TO_DEV = 17, + TRANS_DMA_XFER_CONT = 18, + TRANS_VALIDATE_PARTITION_FROM_DEV = 19, + TRANS_VALIDATE_PARTITION_TO_DEV = 20, + TRANS_MAX = 21 +}; + +struct qaic_manage_trans_hdr { + __u32 type; /**< in, value from enum qaic_manage_transaction_type */ + __u32 len; /**< in, length of this transaction, including the header */ +}; + +struct qaic_manage_trans_passthrough { + struct qaic_manage_trans_hdr hdr; + __u8 data[]; /**< in, userspace must encode in little endian */ +}; + +struct qaic_manage_trans_dma_xfer { + struct qaic_manage_trans_hdr hdr; + __u32 tag; /**< in, device specific */ + __u32 count; /**< in */ + __u64 addr; /**< in, address of the data to be transferred via DMA */ + __u64 size; /**< in, length of the data to be transferred via DMA */ +}; + +struct qaic_manage_trans_activate_to_dev { + struct qaic_manage_trans_hdr hdr; + __u32 queue_size; /**< + * in,
number of elements in DBC request + * and response queue + */ + __u32 eventfd; /**< in */ + __u32 options; /**< in, device specific */ + __u32 pad; /**< pad must be 0 */ +}; + +struct qaic_manage_trans_activate_from_dev { + struct qaic_manage_trans_hdr hdr; + __u32 status; /**< out, status of activate transaction */ + __u32 dbc_id; /**< out, Identifier of assigned DMA Bridge channel */ + __u64 options; /**< out */ +}; + +struct qaic_manage_trans_deactivate { + struct qaic_manage_trans_hdr hdr; + __u32 dbc_id; /**< in, Identifier of assigned DMA Bridge channel */ + __u32 pad; /**< pad must be 0 */ +}; + +struct qaic_manage_trans_status_to_dev { + struct qaic_manage_trans_hdr hdr; +}; + +struct qaic_manage_trans_status_from_dev { + struct qaic_manage_trans_hdr hdr; + __u16 major; /**< out, major version of NNC protocol used by device */ + __u16 minor; /**< out, minor version of NNC protocol used by device */ + __u32 status; /**< out, status of query transaction */ + __u64 status_flags; /**< + * out + * 0 : If set then device has CRC check enabled + * 1:63 : Unused + */ +}; + +struct qaic_manage_msg { + __u32 len; /**< in, Length of valid data - i.e. sum of all transactions */ + __u32 count; /**< in, Number of transactions in message */ + __u64 data; /**< in, Pointer to array of transactions */ +}; + +struct qaic_create_bo { + __u64 size; /**< in, Size of BO in bytes */ + __u32 handle; /**< out, Returned GEM handle for the BO */ + __u32 pad; /**< pad must be 0 */ +}; + +struct qaic_mmap_bo { + __u32 handle; /**< in, Handle for the BO being mapped.
*/ + __u32 pad; /**< pad must be 0 */ + __u64 offset; /**< + * out, offset into the drm node to use for + * subsequent mmap call + */ +}; + +/** + * @brief semaphore command + */ +struct qaic_sem { + __u16 val; /**< in, Only lower 12 bits are valid */ + __u8 index; /**< in, Only lower 5 bits are valid */ + __u8 presync; /**< in, 1 if presync operation, 0 if postsync */ + __u8 cmd; /**< in, See enum qaic_sem_cmd */ + __u8 flags; /**< in, See enum qaic_sem_flags for valid bits. All others must be 0 */ + __u16 pad; /**< pad must be 0 */ +}; + +struct qaic_attach_slice_entry { + __u64 size; /**< in, Size of memory to allocate for this BO slice */ + struct qaic_sem sem0; /**< in, Must be zero if not valid */ + struct qaic_sem sem1; /**< in, Must be zero if not valid */ + struct qaic_sem sem2; /**< in, Must be zero if not valid */ + struct qaic_sem sem3; /**< in, Must be zero if not valid */ + __u64 dev_addr; /**< in, Address in device to/from which data is copied */ + __u64 db_addr; /**< in, Doorbell address */ + __u32 db_data; /**< in, Data to write to doorbell */ + __u32 db_len; /**< + * in, Doorbell length - 32, 16, or 8 bits. + * 0 means doorbell is inactive + */ + __u64 offset; /**< in, Offset from start of buffer */ +}; + +struct qaic_attach_slice_hdr { + __u32 count; /**< in, Number of slices for this BO */ + __u32 dbc_id; /**< in, Associate this BO with this DMA Bridge channel */ + __u32 handle; /**< in, Handle of BO to which slicing information is to be attached */ + __u32 dir; /**< in, Direction of data: 1 = DMA_TO_DEVICE, 2 = DMA_FROM_DEVICE */ + __u64 size; /**< + * in, Total length of BO + * If BO is imported (DMABUF/PRIME) then this size + * should not exceed the size of DMABUF provided. + * If BO is allocated using DRM_IOCTL_QAIC_CREATE_BO + * then this size should be exactly the same as the size + * provided during DRM_IOCTL_QAIC_CREATE_BO.
+ */ +}; + +struct qaic_attach_slice { + struct qaic_attach_slice_hdr hdr; + __u64 data; /**< + * in, Pointer to a buffer which is container of + * struct qaic_attach_slice_entry[] + */ +}; + +struct qaic_execute_entry { + __u32 handle; /**< in, buffer handle */ + __u32 dir; /**< in, 1 = to device, 2 = from device */ +}; + +struct qaic_partial_execute_entry { + __u32 handle; /**< in, buffer handle */ + __u32 dir; /**< in, 1 = to device, 2 = from device */ + __u64 resize; /**< in, 0 = no resize */ +}; + +struct qaic_execute_hdr { + __u32 count; /**< in, number of executes following this header */ + __u32 dbc_id; /**< in, Identifier of assigned DMA Bridge channel */ +}; + +struct qaic_execute { + struct qaic_execute_hdr hdr; + __u64 data; /**< in, qaic_execute_entry or qaic_partial_execute_entry container */ +}; + +struct qaic_wait { + __u32 handle; /**< in, handle to wait on until execute is complete */ + __u32 timeout; /**< in, timeout for wait (in ms) */ + __u32 dbc_id; /**< in, Identifier of assigned DMA Bridge channel */ + __u32 pad; /**< pad must be 0 */ +}; + +struct qaic_perf_stats_hdr { + __u16 count; /**< in, Total number of BOs requested */ + __u16 pad; /**< pad must be 0 */ + __u32 dbc_id; /**< in, Identifier of assigned DMA Bridge channel */ +}; + +struct qaic_perf_stats { + struct qaic_perf_stats_hdr hdr; + __u64 data; /**< in, qaic_perf_stats_entry container */ +}; + +struct qaic_perf_stats_entry { + __u32 handle; /**< in, Handle of the memory request */ + __u32 queue_level_before; /**< + * out, Number of elements in queue + * before submission of the given memory request + */ + __u32 num_queue_element; /**< + * out, Number of elements to add in the + * queue for given memory request + */ + __u32 submit_latency_us; /**< + * out, Time taken by kernel to submit + * the request to device + */ + __u32 device_latency_us; /**< + * out, Time taken by device to execute the + * request.
0 if request is not completed + */ + __u32 pad; /**< pad must be 0 */ +}; + +struct qaic_part_dev { + __u32 partition_id; /**< in, reservation id */ + __u16 remove; /**< in, 1 - Remove device, 0 - Create device */ + __u16 pad; /**< pad must be 0 */ +}; + +#define DRM_QAIC_MANAGE 0x00 +#define DRM_QAIC_CREATE_BO 0x01 +#define DRM_QAIC_MMAP_BO 0x02 +#define DRM_QAIC_ATTACH_SLICE_BO 0x03 +#define DRM_QAIC_EXECUTE_BO 0x04 +#define DRM_QAIC_PARTIAL_EXECUTE_BO 0x05 +#define DRM_QAIC_WAIT_BO 0x06 +#define DRM_QAIC_PERF_STATS_BO 0x07 +#define DRM_QAIC_PART_DEV 0x08 + +#define DRM_IOCTL_QAIC_MANAGE DRM_IOWR(DRM_COMMAND_BASE + DRM_QAIC_MANAGE, struct qaic_manage_msg) +#define DRM_IOCTL_QAIC_CREATE_BO DRM_IOWR(DRM_COMMAND_BASE + DRM_QAIC_CREATE_BO, struct qaic_create_bo) +#define DRM_IOCTL_QAIC_MMAP_BO DRM_IOWR(DRM_COMMAND_BASE + DRM_QAIC_MMAP_BO, struct qaic_mmap_bo) +#define DRM_IOCTL_QAIC_ATTACH_SLICE_BO DRM_IOW(DRM_COMMAND_BASE + DRM_QAIC_ATTACH_SLICE_BO, struct qaic_attach_slice) +#define DRM_IOCTL_QAIC_EXECUTE_BO DRM_IOW(DRM_COMMAND_BASE + DRM_QAIC_EXECUTE_BO, struct qaic_execute) +#define DRM_IOCTL_QAIC_PARTIAL_EXECUTE_BO DRM_IOW(DRM_COMMAND_BASE + DRM_QAIC_PARTIAL_EXECUTE_BO, struct qaic_execute) +#define DRM_IOCTL_QAIC_WAIT_BO DRM_IOW(DRM_COMMAND_BASE + DRM_QAIC_WAIT_BO, struct qaic_wait) +#define DRM_IOCTL_QAIC_PERF_STATS_BO DRM_IOWR(DRM_COMMAND_BASE + DRM_QAIC_PERF_STATS_BO, struct qaic_perf_stats) +#define DRM_IOCTL_QAIC_PART_DEV DRM_IOWR(DRM_COMMAND_BASE + DRM_QAIC_PART_DEV, struct qaic_part_dev) + +#if defined(__cplusplus) +} +#endif + +#endif /* QAIC_DRM_H_ */ From patchwork Mon Aug 15 18:42:25 2022 X-Patchwork-Submitter: Jeffrey Hugo X-Patchwork-Id: 597582
(10.47.209.196) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.22; Mon, 15 Aug 2022 11:43:07 -0700 From: Jeffrey Hugo To: , , , , CC: , , , , , , Jeffrey Hugo Subject: [RFC PATCH 03/14] drm/qaic: Add qaic.h internal header Date: Mon, 15 Aug 2022 12:42:25 -0600 Message-ID: <1660588956-24027-4-git-send-email-quic_jhugo@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com> References: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01a.na.qualcomm.com (10.52.223.231) To nalasex01a.na.qualcomm.com (10.47.209.196) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: abqtz3WO2FKHzj7_DV9IDeW8tqjoYaI9 X-Proofpoint-GUID: abqtz3WO2FKHzj7_DV9IDeW8tqjoYaI9 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.883,Hydra:6.0.517,FMLib:17.11.122.1 definitions=2022-08-15_08,2022-08-15_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 adultscore=0 mlxlogscore=999 phishscore=0 clxscore=1015 lowpriorityscore=0 bulkscore=0 impostorscore=0 priorityscore=1501 mlxscore=0 suspectscore=0 spamscore=0 malwarescore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2207270000 definitions=main-2208150070 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org qaic.h contains all of the structs and defines that get passed around to the various components of the driver. 
Change-Id: I8349ac831a55daad3ac67ab763c2e815bb051be0 Signed-off-by: Jeffrey Hugo --- drivers/gpu/drm/qaic/qaic.h | 396 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 396 insertions(+) create mode 100644 drivers/gpu/drm/qaic/qaic.h diff --git a/drivers/gpu/drm/qaic/qaic.h b/drivers/gpu/drm/qaic/qaic.h new file mode 100644 index 0000000..07c25c1 --- /dev/null +++ b/drivers/gpu/drm/qaic/qaic.h @@ -0,0 +1,396 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2019-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef QAICINTERNAL_H_ +#define QAICINTERNAL_H_ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define QAIC_DBC_BASE 0x20000 +#define QAIC_DBC_SIZE 0x1000 + +#define QAIC_NO_PARTITION -1 + +#define QAIC_DBC_OFF(i) ((i) * QAIC_DBC_SIZE + QAIC_DBC_BASE) + +#define to_qaic_bo(obj) container_of(obj, struct qaic_bo, base) + +extern bool poll_datapath; + +enum dbc_states { + DBC_STATE_IDLE = 0, + DBC_STATE_ASSIGNED = 1, + DBC_STATE_BEFORE_SHUTDOWN = 2, + DBC_STATE_AFTER_SHUTDOWN = 3, + DBC_STATE_BEFORE_POWER_UP = 4, + DBC_STATE_AFTER_POWER_UP = 5, + DBC_STATE_MAX = 6, +}; + +struct qaic_user { + /* PID of the process that opened this drm device */ + pid_t handle; + struct kref ref_count; + /* Char device opened by this user */ + struct qaic_drm_device *qddev; + /* Node in list of users that opened this drm device */ + struct list_head node; + /* SRCU used to synchronize this user during cleanup */ + struct srcu_struct qddev_lock; + atomic_t chunk_id; +}; + +struct dma_bridge_chan { + /* Pointer to device struct maintained by driver */ + struct qaic_device *qdev; + /* ID of this DMA bridge channel (DBC) */ + unsigned int id; + /* Synchronizes access to xfer_list */ + spinlock_t xfer_lock; + /* Base address of request queue */ + void *req_q_base; + /* Base address of
response queue */ + void *rsp_q_base; + /* + * Base bus address of request queue. Response queue bus address can be + * calculated by adding request queue size to this variable + */ + dma_addr_t dma_addr; + /* Total size of request and response queue in bytes */ + u32 total_size; + /* Capacity of request/response queue */ + u32 nelem; + /* Synchronizes access to idr buf_handles */ + struct mutex handle_lock; + /* Holds memory handles for this DBC */ + struct idr buf_handles; + /* The user that opened this DBC */ + struct qaic_user *usr; + /* + * Request ID of next memory handle that goes in request queue. One + * memory handle can enqueue more than one request element; all + * requests that belong to the same memory handle have the same request ID + */ + u16 next_req_id; + /* TRUE: DBC is in use; FALSE: DBC not in use */ + bool in_use; + /* TRUE: This DBC is under subsystem reset (SSR) */ + bool in_ssr; + /* Represents various states of this DBC from enum dbc_states */ + unsigned int state; + /* + * Base address of device registers. Used to read/write request and + * response queues' head and tail pointers of this DBC. + */ + void __iomem *dbc_base; + /* Head of list where each node is a memory handle queued in request queue */ + struct list_head xfer_list; + /* Synchronizes DBC readers during cleanup */ + struct srcu_struct ch_lock; + /* Debugfs root directory for this DBC */ + struct dentry *debugfs_root; + /* + * When this DBC is released, any thread waiting on this wait queue is + * woken up + */ + wait_queue_head_t dbc_release; + /* + * Points to a bookkeeping struct maintained by the MHI SSR device while + * downloading an SSR crashdump. It is NULL when no crashdump + * download is in progress. + */ + void *dump_info; + /* Head of list where each node is a BO associated with this DBC */ + struct list_head bo_lists; + /* The irq line for this DBC.
Used for polling */ + unsigned int irq; + /* Polling work item to simulate interrupts */ + struct work_struct poll_work; +}; + +struct qaic_device { + /* Pointer to base PCI device struct of our physical device */ + struct pci_dev *pdev; + /* Mask of all bars of this device */ + int bars; + /* Req. ID of request that will be queued next in MHI control device */ + u32 next_seq_num; + /* Base address of bar 0 */ + void __iomem *bar_0; + /* Base address of bar 2 */ + void __iomem *bar_2; + /* Controller structure for MHI devices */ + struct mhi_controller *mhi_cntl; + /* MHI control channel device */ + struct mhi_device *cntl_ch; + /* List of requests queued in MHI control device */ + struct list_head cntl_xfer_list; + /* Synchronizes MHI control device transactions and its xfer list */ + struct mutex cntl_mutex; + /* The drm device representing the actual physical device */ + struct qaic_drm_device *base_dev; + /* Array of DBC structs of this device */ + struct dma_bridge_chan *dbc; + /* Work queue for tasks related to MHI control device */ + struct workqueue_struct *cntl_wq; + /* Synchronizes all the users of device during cleanup */ + struct srcu_struct dev_lock; + /* Debugfs root directory for the device */ + struct dentry *debugfs_root; + /* HW monitoring device for this device */ + struct device *hwmon; + /* MHI telemetry channel device */ + struct mhi_device *tele_ch; + /* Head in list of requests queued in MHI telemetry device */ + struct list_head tele_xfer_list; + /* Req. ID of request that will be queued next in MHI telemetry device */ + u32 tele_next_seq_num; + /* + * TRUE: A tx MHI transaction has failed and an rx buffer is still queued + * in telemetry device. Such a buffer is considered a lost rx buffer + * FALSE: No rx buffer is lost in telemetry device + */ + bool tele_lost_buf; + /* TRUE: Device under reset; FALSE: Device not under reset */ + bool in_reset; + /* + * TRUE: A tx MHI transaction has failed and an rx buffer is still queued + * in control device.
Such a buffer is considered a lost rx buffer + * FALSE: No rx buffer is lost in control device + */ + bool cntl_lost_buf; + /* Synchronizes MHI telemetry device transactions and its xfer list */ + struct mutex tele_mutex; + /* Work queue for tasks related to MHI telemetry device */ + struct workqueue_struct *tele_wq; + /* MHI RAS channel device */ + struct mhi_device *ras_ch; + unsigned int ce_count; + unsigned int ue_count; + unsigned int ue_nf_count; + /* Maximum number of DBCs supported by this device */ + u32 num_dbc; + /* Head of list of pages allocated by MHI bootlog device */ + struct list_head bootlog; + /* MHI bootlog channel device */ + struct mhi_device *bootlog_ch; + /* Work queue for tasks related to MHI bootlog device */ + struct workqueue_struct *bootlog_wq; + /* Synchronizes access of pages in MHI bootlog device */ + struct mutex bootlog_mutex; + /* Head in list of drm devices created on top of this device */ + struct list_head qaic_drm_devices; + /* Synchronizes access of qaic_drm_devices list */ + struct mutex qaic_drm_devices_mutex; + /* MHI SSR channel device */ + struct mhi_device *ssr_ch; + /* Work queue for tasks related to MHI SSR device */ + struct workqueue_struct *ssr_wq; + /* Generate the CRC of a control message */ + u32 (*gen_crc)(void *msg); + /* Validate the CRC of a control message */ + bool (*valid_crc)(void *msg); +}; + +struct qaic_drm_device { + /* Pointer to the root device struct driven by this driver */ + struct qaic_device *qdev; + /* Node in list of drm devices maintained by root device */ + struct list_head node; + /* + * The physical device can be partitioned into a number of logical + * devices. Each logical device is given a partition id. This member + * stores that id. QAIC_NO_PARTITION is a sentinel used to mark that + * this drm device is the actual physical device + */ + s32 partition_id; + /* + * It points to the user that created this drm device. It is NULL + * when this drm device represents the physical device i.e.
+ partition_id is QAIC_NO_PARTITION + */ + struct qaic_user *owner; + /* Pointer to the drm device struct of this drm device */ + struct drm_device *ddev; + /* Head in list of users who have opened this drm device */ + struct list_head users; + /* Synchronizes access to users list */ + struct mutex users_mutex; + /* Pointer to array of DBC sysfs attributes */ + void *sysfs_attrs; +}; + +struct qaic_bo { + struct drm_gem_object base; + /* Scatter/gather table for allocated/imported BO */ + struct sg_table *sgt; + /* BO size requested by user. GEM object might be bigger in size. */ + u64 size; + /* Head in list of slices of this BO */ + struct list_head slices; + /* Total nents, for all slices of this BO */ + int total_slice_nents; + /* + * Direction of transfer. It can assume only two values: DMA_TO_DEVICE and + * DMA_FROM_DEVICE. + */ + int dir; + /* Pointer to the DBC which operates on this BO */ + struct dma_bridge_chan *dbc; + /* Number of slices that belong to this buffer */ + u32 nr_slice; + /* Number of slices that have been transferred by DMA engine */ + u32 nr_slice_xfer_done; + /* TRUE = BO is queued for execution, FALSE = BO is not queued */ + bool queued; + /** + * If TRUE then user has attached slicing information to this BO by + * calling DRM_IOCTL_QAIC_ATTACH_SLICE_BO ioctl. + */ + bool sliced; + /* Request ID of this BO if it is queued for execution */ + u16 req_id; + /* Handle assigned to this BO */ + u32 handle; + /* Wait on this for completion of DMA transfer of this BO */ + struct completion xfer_done; + /* + * Node in linked list where head is dbc->xfer_list. + * This linked list contains BOs that are queued for DMA transfer. + */ + struct list_head xfer_list; + /* + * Node in linked list where head is dbc->bo_lists. + * This linked list contains BOs that are associated with the DBC it is + * linked to.
+ */ + struct list_head bo_list; + struct { + /** + * Latest timestamp (ns) at which kernel received a request to + * execute this BO + */ + u64 req_received_ts; + /** + * Latest timestamp (ns) at which kernel enqueued requests of + * this BO for execution in DMA queue + */ + u64 req_submit_ts; + /** + * Latest timestamp (ns) at which kernel received a completion + * interrupt for requests of this BO + */ + u64 req_processed_ts; + /** + * Number of elements already enqueued in DMA queue before + * enqueuing requests of this BO + */ + u32 queue_level_before; + } perf_stats; + +}; + +struct bo_slice { + /* Mapped pages */ + struct sg_table *sgt; + /* Number of requests required to queue in DMA queue */ + int nents; + /* See enum dma_data_direction */ + int dir; + /* Actual requests that will be copied in DMA queue */ + struct dbc_req *reqs; + struct kref ref_count; + /* TRUE: No DMA transfer required */ + bool no_xfer; + /* Pointer to the parent BO handle */ + struct qaic_bo *bo; + /* Node in list of slices maintained by parent BO */ + struct list_head slice; + /* Size of this slice in bytes */ + u64 size; + /* Offset of this slice in buffer */ + u64 offset; +}; + +int get_dbc_req_elem_size(void); +int get_dbc_rsp_elem_size(void); +int get_cntl_version(struct qaic_device *qdev, struct qaic_user *usr, + u16 *major, u16 *minor); +int qaic_manage_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); +int qaic_execute_ioctl(struct qaic_device *qdev, struct qaic_user *usr, + unsigned long arg, bool is_partial); +int qaic_wait_exec_ioctl(struct qaic_device *qdev, struct qaic_user *usr, + unsigned long arg); +int qaic_query_ioctl(struct qaic_device *qdev, struct qaic_user *usr, + unsigned long arg); +int qaic_data_mmap(struct qaic_device *qdev, struct qaic_user *usr, + struct vm_area_struct *vma); +void qaic_data_get_fifo_info(struct dma_bridge_chan *dbc, u32 *head, + u32 *tail); +int qaic_data_get_reservation(struct qaic_device *qdev, struct qaic_user *usr,
+ void *data, u32 *partition_id, + u16 *remove); +void qaic_mhi_ul_xfer_cb(struct mhi_device *mhi_dev, + struct mhi_result *mhi_result); + +void qaic_mhi_dl_xfer_cb(struct mhi_device *mhi_dev, + struct mhi_result *mhi_result); + +int qaic_control_open(struct qaic_device *qdev); +void qaic_control_close(struct qaic_device *qdev); +void qaic_release_usr(struct qaic_device *qdev, struct qaic_user *usr); + +irqreturn_t dbc_irq_threaded_fn(int irq, void *data); +irqreturn_t dbc_irq_handler(int irq, void *data); +int disable_dbc(struct qaic_device *qdev, u32 dbc_id, struct qaic_user *usr); +void enable_dbc(struct qaic_device *qdev, u32 dbc_id, struct qaic_user *usr); +void wakeup_dbc(struct qaic_device *qdev, u32 dbc_id); +void release_dbc(struct qaic_device *qdev, u32 dbc_id, bool set_state); + +void wake_all_cntl(struct qaic_device *qdev); +void qaic_dev_reset_clean_local_state(struct qaic_device *qdev, bool exit_reset); + +int qaic_sysfs_init(struct qaic_drm_device *qdev); +void qaic_sysfs_remove(struct qaic_drm_device *qdev); +void set_dbc_state(struct qaic_device *qdev, u32 dbc_id, unsigned int state); + +void dbc_enter_ssr(struct qaic_device *qdev, u32 dbc_id); +void dbc_exit_ssr(struct qaic_device *qdev, u32 dbc_id); + +struct drm_gem_object *qaic_gem_prime_import(struct drm_device *dev, + struct dma_buf *dma_buf); + +int qaic_create_bo_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); +int qaic_mmap_bo_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); +int qaic_attach_slice_bo_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); +int qaic_execute_bo_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); +int qaic_partial_execute_bo_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); +int qaic_wait_bo_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); +int qaic_test_print_bo_ioctl(struct drm_device *dev, void *data, + struct drm_file 
*file_priv); +int qaic_perf_stats_bo_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv); +void irq_polling_work(struct work_struct *work); + +#endif /* QAICINTERNAL_H_ */ From patchwork Mon Aug 15 18:42:26 2022
From: Jeffrey Hugo
Subject: [RFC PATCH 04/14] drm/qaic: Add MHI controller
Date: Mon, 15 Aug 2022 12:42:26 -0600
Message-ID: <1660588956-24027-5-git-send-email-quic_jhugo@quicinc.com>
In-Reply-To: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com>
References: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

A QAIC device contains an MHI interface with a number of different channels for controlling different aspects of the device. The MHI controller works with the MHI bus to enable and drive that interface.

Change-Id: I77363193b1a2dece7abab287a6acef3cac1b4e1b Signed-off-by: Jeffrey Hugo --- drivers/gpu/drm/qaic/mhi_controller.c | 575 ++++++++++++++++++++++++++++++++++ drivers/gpu/drm/qaic/mhi_controller.h | 18 ++ 2 files changed, 593 insertions(+) create mode 100644 drivers/gpu/drm/qaic/mhi_controller.c create mode 100644 drivers/gpu/drm/qaic/mhi_controller.h diff --git a/drivers/gpu/drm/qaic/mhi_controller.c b/drivers/gpu/drm/qaic/mhi_controller.c new file mode 100644 index 0000000..e88e0fe --- /dev/null +++ b/drivers/gpu/drm/qaic/mhi_controller.c @@ -0,0 +1,575 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* Copyright (c) 2019-2021, The Linux Foundation. All rights reserved. */ +/* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/ + +#include +#include +#include +#include +#include +#include +#include + +#include "mhi_controller.h" +#include "qaic.h" + +#define MAX_RESET_TIME_SEC 25 + +static unsigned int mhi_timeout = 2000; /* 2 sec default */ +module_param(mhi_timeout, uint, 0600); + +static struct mhi_channel_config aic100_channels[] = { + { + .name = "QAIC_LOOPBACK", + .num = 0, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_LOOPBACK", + .num = 1, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_SAHARA", + .num = 2, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_SBL, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_SAHARA", + .num = 3, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_SBL, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_DIAG", + .num = 4, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, 
+ .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_DIAG", + .num = 5, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_SSR", + .num = 6, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_SSR", + .num = 7, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_QDSS", + .num = 8, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_QDSS", + .num = 9, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_CONTROL", + .num = 10, + .num_elements = 128, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + 
.lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_CONTROL", + .num = 11, + .num_elements = 128, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_LOGGING", + .num = 12, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_SBL, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_LOGGING", + .num = 13, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_SBL, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_STATUS", + .num = 14, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_STATUS", + .num = 15, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_TELEMETRY", + .num = 16, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = 
DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_TELEMETRY", + .num = 17, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_DEBUG", + .num = 18, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_DEBUG", + .num = 19, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .name = "QAIC_TIMESYNC", + .num = 20, + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_TO_DEVICE, + .ee_mask = MHI_CH_EE_SBL | MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = false, + }, + { + .num = 21, + .name = "QAIC_TIMESYNC", + .num_elements = 32, + .local_elements = 0, + .event_ring = 0, + .dir = DMA_FROM_DEVICE, + .ee_mask = MHI_CH_EE_SBL | MHI_CH_EE_AMSS, + .pollcfg = 0, + .doorbell = MHI_DB_BRST_DISABLE, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + .wake_capable = 
false, + }, +}; + +static struct mhi_event_config aic100_events[] = { + { + .num_elements = 32, + .irq_moderation_ms = 0, + .irq = 0, + .channel = U32_MAX, + .priority = 1, + .mode = MHI_DB_BRST_DISABLE, + .data_type = MHI_ER_CTRL, + .hardware_event = false, + .client_managed = false, + .offload_channel = false, + }, +}; + +static struct mhi_controller_config aic100_config = { + .max_channels = 128, + .timeout_ms = 0, /* controlled by mhi_timeout */ + .buf_len = 0, + .num_channels = ARRAY_SIZE(aic100_channels), + .ch_cfg = aic100_channels, + .num_events = ARRAY_SIZE(aic100_events), + .event_cfg = aic100_events, + .use_bounce_buf = false, + .m2_no_db = false, +}; + +static int mhi_read_reg(struct mhi_controller *mhi_cntl, void __iomem *addr, u32 *out) +{ + u32 tmp = readl_relaxed(addr); + + if (tmp == U32_MAX) + return -EIO; + + *out = tmp; + + return 0; +} + +static void mhi_write_reg(struct mhi_controller *mhi_cntl, void __iomem *addr, + u32 val) +{ + writel_relaxed(val, addr); +} + +static int mhi_runtime_get(struct mhi_controller *mhi_cntl) +{ + return 0; +} + +static void mhi_runtime_put(struct mhi_controller *mhi_cntl) +{ +} + +static void mhi_status_cb(struct mhi_controller *mhi_cntl, enum mhi_callback reason) +{ + struct qaic_device *qdev = pci_get_drvdata(to_pci_dev(mhi_cntl->cntrl_dev)); + + /* this event occurs in atomic context */ + if (reason == MHI_CB_FATAL_ERROR) + pci_err(qdev->pdev, "Fatal error received from device. Attempting to recover\n"); + /* this event occurs in non-atomic context */ + if (reason == MHI_CB_SYS_ERROR && !qdev->in_reset) + qaic_dev_reset_clean_local_state(qdev, true); +} + +static int mhi_reset_and_async_power_up(struct mhi_controller *mhi_cntl) +{ + char time_sec = 1; + int current_ee; + int ret; + + /* Reset the device to bring the device in PBL EE */ + mhi_soc_reset(mhi_cntl); + + /* + * Keep checking the execution environment(EE) after every 1 second + * interval. 
+ */ + do { + msleep(1000); + current_ee = mhi_get_exec_env(mhi_cntl); + } while (current_ee != MHI_EE_PBL && time_sec++ <= MAX_RESET_TIME_SEC); + + /* If the device is in PBL EE retry power up */ + if (current_ee == MHI_EE_PBL) + ret = mhi_async_power_up(mhi_cntl); + else + ret = -EIO; + + return ret; +} + +struct mhi_controller *qaic_mhi_register_controller(struct pci_dev *pci_dev, + void __iomem *mhi_bar, + int mhi_irq) +{ + struct mhi_controller *mhi_cntl; + int ret; + + mhi_cntl = kzalloc(sizeof(*mhi_cntl), GFP_KERNEL); + if (!mhi_cntl) + return ERR_PTR(-ENOMEM); + + mhi_cntl->cntrl_dev = &pci_dev->dev; + + /* + * Covers the entire possible physical ram region. Remote side is + * going to calculate a size of this range, so subtract 1 to prevent + * rollover. + */ + mhi_cntl->iova_start = 0; + mhi_cntl->iova_stop = PHYS_ADDR_MAX - 1; + + mhi_cntl->status_cb = mhi_status_cb; + mhi_cntl->runtime_get = mhi_runtime_get; + mhi_cntl->runtime_put = mhi_runtime_put; + mhi_cntl->read_reg = mhi_read_reg; + mhi_cntl->write_reg = mhi_write_reg; + mhi_cntl->regs = mhi_bar; + mhi_cntl->reg_len = SZ_4K; + mhi_cntl->nr_irqs = 1; + mhi_cntl->irq = kmalloc(sizeof(*mhi_cntl->irq), GFP_KERNEL); + + if (!mhi_cntl->irq) { + kfree(mhi_cntl); + return ERR_PTR(-ENOMEM); + } + + mhi_cntl->irq[0] = mhi_irq; + + mhi_cntl->fw_image = "qcom/aic100/sbl.bin"; + + /* use latest configured timeout */ + aic100_config.timeout_ms = mhi_timeout; + ret = mhi_register_controller(mhi_cntl, &aic100_config); + if (ret) { + pci_err(pci_dev, "mhi_register_controller failed %d\n", ret); + kfree(mhi_cntl->irq); + kfree(mhi_cntl); + return ERR_PTR(ret); + } + + ret = mhi_prepare_for_power_up(mhi_cntl); + if (ret) { + pci_err(pci_dev, "mhi_prepare_for_power_up failed %d\n", ret); + mhi_unregister_controller(mhi_cntl); + kfree(mhi_cntl->irq); + kfree(mhi_cntl); + return ERR_PTR(ret); + } + + ret = mhi_async_power_up(mhi_cntl); + /* + * If EIO is returned it is possible that device is in SBL EE, which is + * 
undesired. SOC reset the device and try to power up again. + */ + if (ret == -EIO && MHI_EE_SBL == mhi_get_exec_env(mhi_cntl)) { + pci_err(pci_dev, "Device unexpectedly in SBL EE. SOC resetting the device to return it to PBL EE and retrying MHI async power up. Error %d\n", + ret); + ret = mhi_reset_and_async_power_up(mhi_cntl); + } + + if (ret) { + pci_err(pci_dev, "mhi_async_power_up failed %d\n", ret); + mhi_unprepare_after_power_down(mhi_cntl); + mhi_unregister_controller(mhi_cntl); + kfree(mhi_cntl->irq); + kfree(mhi_cntl); + return ERR_PTR(ret); + } + + return mhi_cntl; +} + +void qaic_mhi_free_controller(struct mhi_controller *mhi_cntl, bool link_up) +{ + mhi_power_down(mhi_cntl, link_up); + mhi_unprepare_after_power_down(mhi_cntl); + mhi_unregister_controller(mhi_cntl); + kfree(mhi_cntl->irq); + kfree(mhi_cntl); +} + +void qaic_mhi_start_reset(struct mhi_controller *mhi_cntl) +{ + mhi_power_down(mhi_cntl, true); +} + +void qaic_mhi_reset_done(struct mhi_controller *mhi_cntl) +{ + struct pci_dev *pci_dev = container_of(mhi_cntl->cntrl_dev, + struct pci_dev, dev); + int ret; + + ret = mhi_async_power_up(mhi_cntl); + if (ret) + pci_err(pci_dev, "mhi_async_power_up failed after reset %d\n", ret); +} diff --git a/drivers/gpu/drm/qaic/mhi_controller.h b/drivers/gpu/drm/qaic/mhi_controller.h new file mode 100644 index 0000000..5a739bb --- /dev/null +++ b/drivers/gpu/drm/qaic/mhi_controller.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2019-2020, The Linux Foundation. All rights reserved. 
+ */ + +#ifndef MHICONTROLLERQAIC_H_ +#define MHICONTROLLERQAIC_H_ + +struct mhi_controller *qaic_mhi_register_controller(struct pci_dev *pci_dev, + void __iomem *mhi_bar, + int mhi_irq); + +void qaic_mhi_free_controller(struct mhi_controller *mhi_cntl, bool link_up); + +void qaic_mhi_start_reset(struct mhi_controller *mhi_cntl); +void qaic_mhi_reset_done(struct mhi_controller *mhi_cntl); + +#endif /* MHICONTROLLERQAIC_H_ */ From patchwork Mon Aug 15 18:42:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeffrey Hugo X-Patchwork-Id: 597578 From: Jeffrey Hugo Subject: [RFC PATCH 05/14] drm/qaic: Add control path Date: Mon, 15 Aug 2022 12:42:27 -0600 Message-ID: <1660588956-24027-6-git-send-email-quic_jhugo@quicinc.com> In-Reply-To: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com> References: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com> Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add the control path component that talks to the management processor to load workloads onto the qaic device. This implements the driver portion of the NNC protocol. Change-Id: Ic9c0be41a91532843b78e49b32cf1fcf39faeb9f Signed-off-by: Jeffrey Hugo --- drivers/gpu/drm/qaic/qaic_control.c | 1788 +++++++++++++++++++++++++++++++++++ 1 file changed, 1788 insertions(+) create mode 100644 drivers/gpu/drm/qaic/qaic_control.c diff --git a/drivers/gpu/drm/qaic/qaic_control.c b/drivers/gpu/drm/qaic/qaic_control.c new file mode 100644 index 0000000..9a8a6b6 --- /dev/null +++ b/drivers/gpu/drm/qaic/qaic_control.c @@ -0,0 +1,1788 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* Copyright (c) 2019-2021, The Linux Foundation. All rights reserved. */ +/* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved. 
*/ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include "qaic.h" +#include "qaic_trace.h" + +#define MANAGE_MAGIC_NUMBER ((__force __le32)0x43494151) /* "QAIC" in little endian */ +#define QAIC_DBC_Q_GAP 0x100 +#define QAIC_DBC_Q_BUF_ALIGN 0x1000 +#define QAIC_MANAGE_EXT_MSG_LENGTH SZ_64K /* Max DMA message length */ +#define QAIC_WRAPPER_MAX_SIZE SZ_4K +#define QAIC_MHI_RETRY_WAIT_MS 100 +#define QAIC_MHI_RETRY_MAX 20 + +static unsigned int control_resp_timeout = 60; /* 60 sec default */ +module_param(control_resp_timeout, uint, 0600); + +struct manage_msg { + u32 len; + u32 count; + u8 data[]; +}; + +/* + * wire encoding structures for the manage protocol. + * All fields are little endian on the wire + */ +struct _msg_hdr { + __le32 crc32; /* crc of everything following this field in the message */ + __le32 magic_number; + __le32 sequence_number; + __le32 len; /* length of this message */ + __le32 count; /* number of transactions in this message */ + __le32 handle; /* unique id to track the resources consumed */ + __le32 partition_id; /* partition id for the request (signed)*/ + __le32 padding; /* must be 0 */ +} __packed; + +struct _msg { + struct _msg_hdr hdr; + u8 data[]; +} __packed; + +struct _trans_hdr { + __le32 type; + __le32 len; +} __packed; + +/* Each message sent from driver to device are organized in a list of wrapper_msg */ +struct wrapper_msg { + struct list_head list; + struct kref ref_count; + u32 len; /* length of data to transfer */ + struct wrapper_list *head; + union { + struct _msg msg; + struct _trans_hdr trans; + }; +}; + +struct wrapper_list { + struct list_head list; + spinlock_t lock; +}; + +struct _trans_passthrough { + struct _trans_hdr hdr; + u8 data[]; +} __packed; + +struct _addr_size_pair { + __le64 addr; + __le64 size; +} __packed; + +struct _trans_dma_xfer { + struct 
_trans_hdr hdr; + __le32 tag; + __le32 count; + __le32 dma_chunk_id; + __le32 padding; + struct _addr_size_pair data[]; +} __packed; + +/* Initiated by device to continue the DMA xfer of a large piece of data */ +struct _trans_dma_xfer_cont { + struct _trans_hdr hdr; + __le32 dma_chunk_id; + __le32 padding; + __le64 xferred_size; +} __packed; + +struct _trans_activate_to_dev { + struct _trans_hdr hdr; + __le64 req_q_addr; + __le64 rsp_q_addr; + __le32 req_q_size; + __le32 rsp_q_size; + __le32 buf_len; + __le32 options; /* unused, but BIT(16) has meaning to the device */ +} __packed; + +struct _trans_activate_from_dev { + struct _trans_hdr hdr; + __le32 status; + __le32 dbc_id; + __le64 options; /* unused */ +} __packed; + +struct _trans_deactivate_from_dev { + struct _trans_hdr hdr; + __le32 status; + __le32 dbc_id; +} __packed; + +struct _trans_terminate_to_dev { + struct _trans_hdr hdr; + __le32 handle; + __le32 padding; +} __packed; + +struct _trans_terminate_from_dev { + struct _trans_hdr hdr; + __le32 status; + __le32 padding; +} __packed; + +struct _trans_status_to_dev { + struct _trans_hdr hdr; +} __packed; + +struct _trans_status_from_dev { + struct _trans_hdr hdr; + __le16 major; + __le16 minor; + __le32 status; + __le64 status_flags; +} __packed; + +struct _trans_validate_part_to_dev { + struct _trans_hdr hdr; + __le32 part_id; + __le32 padding; +} __packed; + +struct _trans_validate_part_from_dev { + struct _trans_hdr hdr; + __le32 status; + __le32 padding; +} __packed; + +struct xfer_queue_elem { + /* + * Node in list of ongoing transfer request on control channel. 
+ * Maintained by root device struct + */ + struct list_head list; + /* Sequence number of this transfer request */ + u32 seq_num; + /* This is used to wait on until completion of transfer request */ + struct completion xfer_done; + /* Received data from device */ + void *buf; +}; + +struct dma_xfer { + /* Node in list of DMA transfers which is used for cleanup */ + struct list_head list; + /* SG table of memory used for DMA */ + struct sg_table *sgt; + /* Array of pages used for DMA */ + struct page **page_list; + /* Number of pages used for DMA */ + unsigned long nr_pages; +}; + +struct ioctl_resources { + /* List of all DMA transfers which is used later for cleanup */ + struct list_head dma_xfers; + /* Base address of request queue which belongs to a DBC */ + void *buf; + /* + * Base bus address of request queue which belongs to a DBC. Response + * queue base bus address can be calculated by adding size of request + * queue to base bus address of request queue. + */ + dma_addr_t dma_addr; + /* Total size of request queue and response queue in bytes */ + u32 total_size; + /* Total number of elements that can be queued in each of request and response queue */ + u32 nelem; + /* Base address of response queue which belongs to a DBC */ + void *rsp_q_base; + /* Status of the NNC message received */ + u32 status; + /* DBC id of the DBC received from device */ + u32 dbc_id; + /* + * DMA transfer request messages can be big in size and it may not be + * possible to send them in one shot. In such cases the messages are + * broken into chunks; this field stores the ID of such chunks. + */ + u32 dma_chunk_id; + /* Total number of bytes transferred for a DMA xfer request */ + u64 xferred_dma_size; + /* Header of transaction message received from user. 
Used during DMA xfer request */ + void *trans_hdr; +}; + +struct resp_work { + struct work_struct work; + struct qaic_device *qdev; + void *buf; +}; + +/* + * Since we're working with little endian messages, it's useful to be able to + * increment without filling a whole line with conversions back and forth just + * to add one (1) to a message count. + */ +static __le32 incr_le32(__le32 val) +{ + return cpu_to_le32(le32_to_cpu(val) + 1); +} + +static u32 gen_crc(void *msg) +{ + struct wrapper_list *wrappers = msg; + struct wrapper_msg *w; + u32 crc = ~0; + + list_for_each_entry(w, &wrappers->list, list) + crc = crc32(crc, &w->msg, w->len); + + return crc ^ ~0; +} + +static u32 gen_crc_stub(void *msg) +{ + return 0; +} + +static bool valid_crc(void *msg) +{ + struct _msg_hdr *hdr = msg; + bool ret; + u32 crc; + + /* + * CRC defaults to a "Little Endian" algorithm; however, this does not + * mean that the output of CRC is stored in a little endian manner. The + * algorithm iterates through the input one slice at a time, and is + * "Little Endian" in that it treats each slice of increasing address as + * containing values greater than the previous slice (in a 32bit cycle). + * + * The output of this algorithm is always converted to the native + * endianness. 
+ */ + crc = le32_to_cpu(hdr->crc32); + hdr->crc32 = 0; + ret = (crc32(~0, msg, le32_to_cpu(hdr->len)) ^ ~0) == crc; + hdr->crc32 = cpu_to_le32(crc); + return ret; +} + +static bool valid_crc_stub(void *msg) +{ + return true; +} + +static void free_wrapper(struct kref *ref) +{ + struct wrapper_msg *wrapper = container_of(ref, struct wrapper_msg, + ref_count); + + list_del(&wrapper->list); + kfree(wrapper); +} + +static void save_dbc_buf(struct qaic_device *qdev, + struct ioctl_resources *resources, + struct qaic_user *usr) +{ + u32 dbc_id = resources->dbc_id; + + if (resources->buf) { + wait_event_interruptible(qdev->dbc[dbc_id].dbc_release, + !qdev->dbc[dbc_id].in_use); + qdev->dbc[dbc_id].req_q_base = resources->buf; + qdev->dbc[dbc_id].rsp_q_base = resources->rsp_q_base; + qdev->dbc[dbc_id].dma_addr = resources->dma_addr; + qdev->dbc[dbc_id].total_size = resources->total_size; + qdev->dbc[dbc_id].nelem = resources->nelem; + enable_dbc(qdev, dbc_id, usr); + qdev->dbc[dbc_id].in_use = true; + set_dbc_state(qdev, dbc_id, DBC_STATE_ASSIGNED); + resources->buf = NULL; + } +} + +static void free_dbc_buf(struct qaic_device *qdev, + struct ioctl_resources *resources) +{ + if (resources->buf) + dma_free_coherent(&qdev->pdev->dev, resources->total_size, + resources->buf, resources->dma_addr); + resources->buf = NULL; +} + +static void free_dma_xfers(struct qaic_device *qdev, + struct ioctl_resources *resources) +{ + struct dma_xfer *xfer; + struct dma_xfer *x; + int i; + + list_for_each_entry_safe(xfer, x, &resources->dma_xfers, list) { + dma_unmap_sgtable(&qdev->pdev->dev, xfer->sgt, DMA_TO_DEVICE, 0); + sg_free_table(xfer->sgt); + kfree(xfer->sgt); + for (i = 0; i < xfer->nr_pages; ++i) + put_page(xfer->page_list[i]); + kfree(xfer->page_list); + list_del(&xfer->list); + kfree(xfer); + } +} + +static struct wrapper_msg *add_wrapper(struct wrapper_list *wrappers, u32 size) +{ + struct wrapper_msg *w = kzalloc(size, GFP_KERNEL); + + if (!w) + return NULL; + 
list_add_tail(&w->list, &wrappers->list); + kref_init(&w->ref_count); + w->head = wrappers; + return w; +} + +static int encode_passthrough(struct qaic_device *qdev, void *trans, + struct wrapper_list *wrappers, u32 *user_len) +{ + struct qaic_manage_trans_passthrough *in_trans = trans; + struct _trans_passthrough *out_trans; + struct wrapper_msg *trans_wrapper; + struct wrapper_msg *wrapper; + struct _msg *msg; + u32 msg_hdr_len; + + trace_qaic_encode_passthrough(qdev, in_trans); + + wrapper = list_first_entry(&wrappers->list, struct wrapper_msg, list); + msg = &wrapper->msg; + msg_hdr_len = le32_to_cpu(msg->hdr.len); + + if (in_trans->hdr.len % 8 != 0) { + trace_encode_error(qdev, "Invalid data length of passthrough data. Data length should be multiple of 8."); + return -EINVAL; + } + + if (msg_hdr_len + in_trans->hdr.len > QAIC_MANAGE_EXT_MSG_LENGTH) { + trace_encode_error(qdev, "passthrough trans exceeds msg len"); + return -ENOSPC; + } + + trans_wrapper = add_wrapper(wrappers, + offsetof(struct wrapper_msg, trans) + + in_trans->hdr.len); + if (!trans_wrapper) { + trace_encode_error(qdev, "encode passthrough alloc fail"); + return -ENOMEM; + } + trans_wrapper->len = in_trans->hdr.len; + out_trans = (struct _trans_passthrough *)&trans_wrapper->trans; + + memcpy(out_trans, in_trans, in_trans->hdr.len); + msg->hdr.len = cpu_to_le32(msg_hdr_len + in_trans->hdr.len); + msg->hdr.count = incr_le32(msg->hdr.count); + *user_len += in_trans->hdr.len; + out_trans->hdr.type = cpu_to_le32(TRANS_PASSTHROUGH_TO_DEV); + out_trans->hdr.len = cpu_to_le32(in_trans->hdr.len); + + return 0; +} + +static int encode_dma(struct qaic_device *qdev, void *trans, + struct wrapper_list *wrappers, u32 *user_len, + struct ioctl_resources *resources, + struct qaic_user *usr) +{ + struct qaic_manage_trans_dma_xfer *in_trans = trans; + struct _trans_dma_xfer *out_trans; + struct wrapper_msg *trans_wrapper; + struct wrapper_msg *wrapper; + struct _addr_size_pair *asp; + unsigned long need_pages; 
+ struct scatterlist *last; + struct page **page_list; + unsigned long nr_pages; + struct scatterlist *sg; + struct wrapper_msg *w; + struct dma_xfer *xfer; + struct sg_table *sgt; + unsigned int dma_len; + u64 dma_chunk_len; + struct _msg *msg; + u32 msg_hdr_len; + void *boundary; + int nents_dma; + int nents; + u32 size; + int ret; + int i; + + trace_qaic_encode_dma(qdev, in_trans); + + wrapper = list_first_entry(&wrappers->list, struct wrapper_msg, list); + msg = &wrapper->msg; + msg_hdr_len = le32_to_cpu(msg->hdr.len); + + if (msg_hdr_len > (UINT_MAX - QAIC_MANAGE_EXT_MSG_LENGTH)) { + trace_encode_error(qdev, "msg hdr length too large"); + ret = -EINVAL; + goto out; + } + + /* There should be enough space to hold at least one ASP entry. */ + if (msg_hdr_len + sizeof(*out_trans) + sizeof(*asp) > + QAIC_MANAGE_EXT_MSG_LENGTH) { + trace_encode_error(qdev, "no space left in msg"); + ret = -ENOMEM; + goto out; + } + + if (in_trans->addr + in_trans->size < in_trans->addr || + !in_trans->size) { + trace_encode_error(qdev, "dma trans addr range overflow or no size"); + ret = -EINVAL; + goto out; + } + + xfer = kmalloc(sizeof(*xfer), GFP_KERNEL); + if (!xfer) { + trace_encode_error(qdev, "dma no mem for xfer"); + ret = -ENOMEM; + goto out; + } + + need_pages = DIV_ROUND_UP(in_trans->size + offset_in_page(in_trans->addr + + resources->xferred_dma_size) - + resources->xferred_dma_size, PAGE_SIZE); + + nr_pages = need_pages; + + while (1) { + page_list = kmalloc_array(nr_pages, sizeof(*page_list), + GFP_KERNEL | __GFP_NOWARN); + if (!page_list) { + nr_pages = nr_pages / 2; + if (!nr_pages) { + trace_encode_error(qdev, "dma page list alloc fail"); + ret = -ENOMEM; + goto free_resource; + } + } else { + break; + } + } + + ret = get_user_pages_fast(in_trans->addr + resources->xferred_dma_size, + nr_pages, 0, page_list); + if (ret < 0 || ret != nr_pages) { + trace_encode_error(qdev, "dma get user pages fail"); + ret = -EFAULT; + goto free_page_list; + } + + sgt = 
kmalloc(sizeof(*sgt), GFP_KERNEL); + if (!sgt) { + trace_encode_error(qdev, "dma sgt alloc fail"); + ret = -ENOMEM; + goto put_pages; + } + + ret = sg_alloc_table_from_pages(sgt, page_list, nr_pages, + offset_in_page(in_trans->addr + + resources->xferred_dma_size), + in_trans->size - resources->xferred_dma_size, GFP_KERNEL); + if (ret) { + trace_encode_error(qdev, "dma alloc table from pages fail"); + ret = -ENOMEM; + goto free_sgt; + } + + ret = dma_map_sgtable(&qdev->pdev->dev, sgt, DMA_TO_DEVICE, 0); + if (ret) { + trace_encode_error(qdev, "dma mapping failed"); + goto free_table; + } + + nents = sgt->nents; + /* + * It turns out several of the iommu drivers don't combine adjacent + * regions, which is really what we expect based on the description of + * dma_map_sgtable(), so let's see if that can be done. It makes our message + * more efficient. + */ + last = sgt->sgl; + nents_dma = nents; + size = QAIC_MANAGE_EXT_MSG_LENGTH - msg_hdr_len - sizeof(*out_trans); + for_each_sgtable_sg(sgt, sg, i) { + if (sg_dma_address(last) + sg_dma_len(last) != + sg_dma_address(sg)) { + size -= sizeof(*asp); + /* Save 1K for possible follow-up transactions. */ + if (size < SZ_1K) { + nents_dma = i; + break; + } + } + last = sg; + } + + trans_wrapper = add_wrapper(wrappers, QAIC_WRAPPER_MAX_SIZE); + if (!trans_wrapper) { + trace_encode_error(qdev, "encode dma alloc wrapper fail"); + ret = -ENOMEM; + goto dma_unmap; + } + out_trans = (struct _trans_dma_xfer *)&trans_wrapper->trans; + + asp = out_trans->data; + boundary = (void *)trans_wrapper + QAIC_WRAPPER_MAX_SIZE; + size = 0; + + last = sgt->sgl; + dma_len = 0; + w = trans_wrapper; + dma_chunk_len = 0; + /* Adjacent DMA entries could be stitched together. 
*/ + for_each_sg(sgt->sgl, sg, nents_dma, i) { + /* hit a discontinuity, finalize segment and start new one */ + if (sg_dma_address(last) + sg_dma_len(last) != + sg_dma_address(sg)) { + asp->size = cpu_to_le64(dma_len); + dma_chunk_len += dma_len; + if (dma_len) { + asp++; + if ((void *)asp + sizeof(*asp) > boundary) { + w->len = (void *)asp - (void *)&w->msg; + size += w->len; + w = add_wrapper(wrappers, + QAIC_WRAPPER_MAX_SIZE); + if (!w) { + trace_encode_error(qdev, "encode dma wrapper alloc fail"); + ret = -ENOMEM; + goto dma_unmap; + } + boundary = (void *)w + + QAIC_WRAPPER_MAX_SIZE; + asp = (struct _addr_size_pair *)&w->msg; + } + } + dma_len = 0; + asp->addr = cpu_to_le64(sg_dma_address(sg)); + } + dma_len += sg_dma_len(sg); + last = sg; + } + /* finalize the last segment */ + asp->size = cpu_to_le64(dma_len); + w->len = (void *)asp + sizeof(*asp) - (void *)&w->msg; + size += w->len; + + msg->hdr.len = cpu_to_le32(msg_hdr_len + size); + msg->hdr.count = incr_le32(msg->hdr.count); + + out_trans->hdr.type = cpu_to_le32(TRANS_DMA_XFER_TO_DEV); + out_trans->hdr.len = cpu_to_le32(size); + out_trans->tag = cpu_to_le32(in_trans->tag); + out_trans->count = cpu_to_le32((size - sizeof(*out_trans)) / sizeof(*asp)); + dma_chunk_len += dma_len; + + *user_len += in_trans->hdr.len; + + if (resources->dma_chunk_id) { + out_trans->dma_chunk_id = cpu_to_le32(resources->dma_chunk_id); + } else if (need_pages > nr_pages || nents_dma < nents) { + while (resources->dma_chunk_id == 0) + resources->dma_chunk_id = + atomic_inc_return(&usr->chunk_id); + + out_trans->dma_chunk_id = cpu_to_le32(resources->dma_chunk_id); + } + resources->xferred_dma_size += dma_chunk_len; + resources->trans_hdr = trans; + + xfer->sgt = sgt; + xfer->page_list = page_list; + xfer->nr_pages = nr_pages; + list_add(&xfer->list, &resources->dma_xfers); + return 0; + +dma_unmap: + dma_unmap_sgtable(&qdev->pdev->dev, sgt, DMA_TO_DEVICE, 0); +free_table: + sg_free_table(sgt); +free_sgt: + kfree(sgt); 
+put_pages: + for (i = 0; i < nr_pages; ++i) + put_page(page_list[i]); +free_page_list: + kfree(page_list); +free_resource: + kfree(xfer); +out: + return ret; +} + +static int encode_activate(struct qaic_device *qdev, void *trans, + struct wrapper_list *wrappers, + u32 *user_len, + struct ioctl_resources *resources) +{ + struct qaic_manage_trans_activate_to_dev *in_trans = trans; + struct _trans_activate_to_dev *out_trans; + struct wrapper_msg *trans_wrapper; + struct wrapper_msg *wrapper; + dma_addr_t dma_addr; + struct _msg *msg; + u32 msg_hdr_len; + void *buf; + u32 nelem; + u32 size; + int ret; + + trace_qaic_encode_activate(qdev, in_trans); + + wrapper = list_first_entry(&wrappers->list, struct wrapper_msg, list); + msg = &wrapper->msg; + msg_hdr_len = le32_to_cpu(msg->hdr.len); + + if (msg_hdr_len + sizeof(*out_trans) > QAIC_MANAGE_MAX_MSG_LENGTH) { + trace_encode_error(qdev, "activate trans exceeds msg len"); + return -ENOSPC; + } + + if (!in_trans->queue_size) { + trace_encode_error(qdev, "activate unspecified queue size"); + return -EINVAL; + } + + if (in_trans->pad) { + trace_encode_error(qdev, "activate non-zero padding"); + return -EINVAL; + } + + nelem = in_trans->queue_size; + size = (get_dbc_req_elem_size() + get_dbc_rsp_elem_size()) * nelem; + if (size / nelem != get_dbc_req_elem_size() + get_dbc_rsp_elem_size()) { + trace_encode_error(qdev, "activate queue size overflow"); + return -EINVAL; + } + + if (size + QAIC_DBC_Q_GAP + QAIC_DBC_Q_BUF_ALIGN < size) { + trace_encode_error(qdev, "activate queue size align overflow"); + return -EINVAL; + } + + size = ALIGN((size + QAIC_DBC_Q_GAP), QAIC_DBC_Q_BUF_ALIGN); + + buf = dma_alloc_coherent(&qdev->pdev->dev, size, &dma_addr, GFP_KERNEL); + if (!buf) { + trace_encode_error(qdev, "activate queue alloc fail"); + return -ENOMEM; + } + + trans_wrapper = add_wrapper(wrappers, + offsetof(struct wrapper_msg, trans) + + sizeof(*out_trans)); + if (!trans_wrapper) { + trace_encode_error(qdev, "encode activate alloc 
fail"); + ret = -ENOMEM; + goto free_dma; + } + trans_wrapper->len = sizeof(*out_trans); + out_trans = (struct _trans_activate_to_dev *)&trans_wrapper->trans; + + out_trans->hdr.type = cpu_to_le32(TRANS_ACTIVATE_TO_DEV); + out_trans->hdr.len = cpu_to_le32(sizeof(*out_trans)); + out_trans->buf_len = cpu_to_le32(size); + out_trans->req_q_addr = cpu_to_le64(dma_addr); + out_trans->req_q_size = cpu_to_le32(nelem); + out_trans->rsp_q_addr = cpu_to_le64(dma_addr + size - nelem * + get_dbc_rsp_elem_size()); + out_trans->rsp_q_size = cpu_to_le32(nelem); + out_trans->options = cpu_to_le32(in_trans->options); + + *user_len += in_trans->hdr.len; + msg->hdr.len = cpu_to_le32(msg_hdr_len + sizeof(*out_trans)); + msg->hdr.count = incr_le32(msg->hdr.count); + + resources->buf = buf; + resources->dma_addr = dma_addr; + resources->total_size = size; + resources->nelem = nelem; + resources->rsp_q_base = buf + size - nelem * get_dbc_rsp_elem_size(); + return 0; + +free_dma: + dma_free_coherent(&qdev->pdev->dev, size, buf, dma_addr); + return ret; +} + +static int encode_deactivate(struct qaic_device *qdev, void *trans, + u32 *user_len, struct qaic_user *usr) +{ + struct qaic_manage_trans_deactivate *in_trans = trans; + + trace_qaic_encode_deactivate(qdev, in_trans); + + if (in_trans->dbc_id >= qdev->num_dbc || in_trans->pad) { + trace_encode_error(qdev, "deactivate invalid dbc id or pad non-zero"); + return -EINVAL; + } + + *user_len += in_trans->hdr.len; + + return disable_dbc(qdev, in_trans->dbc_id, usr); +} + +static int encode_status(struct qaic_device *qdev, void *trans, + struct wrapper_list *wrappers, + u32 *user_len) +{ + struct qaic_manage_trans_status_to_dev *in_trans = trans; + struct _trans_status_to_dev *out_trans; + struct wrapper_msg *trans_wrapper; + struct wrapper_msg *wrapper; + struct _msg *msg; + u32 msg_hdr_len; + + trace_qaic_encode_status(qdev, in_trans); + + wrapper = list_first_entry(&wrappers->list, struct wrapper_msg, list); + msg = &wrapper->msg; + 
msg_hdr_len = le32_to_cpu(msg->hdr.len); + + if (msg_hdr_len + in_trans->hdr.len > QAIC_MANAGE_MAX_MSG_LENGTH) { + trace_encode_error(qdev, "status trans exceeds msg len"); + return -ENOSPC; + } + + trans_wrapper = add_wrapper(wrappers, sizeof(*trans_wrapper)); + if (!trans_wrapper) { + trace_encode_error(qdev, "encode status alloc fail"); + return -ENOMEM; + } + trans_wrapper->len = sizeof(*out_trans); + out_trans = (struct _trans_status_to_dev *)&trans_wrapper->trans; + + out_trans->hdr.type = cpu_to_le32(TRANS_STATUS_TO_DEV); + out_trans->hdr.len = cpu_to_le32(in_trans->hdr.len); + msg->hdr.len = cpu_to_le32(msg_hdr_len + in_trans->hdr.len); + msg->hdr.count = incr_le32(msg->hdr.count); + *user_len += in_trans->hdr.len; + + return 0; +} + +static int encode_message(struct qaic_device *qdev, + struct manage_msg *user_msg, + struct wrapper_list *wrappers, + struct ioctl_resources *resources, + struct qaic_user *usr) +{ + struct qaic_manage_trans_hdr *trans_hdr; + struct wrapper_msg *wrapper; + struct _msg *msg; + u32 user_len = 0; + int ret; + int i; + + if (!user_msg->count) { + trace_encode_error(qdev, "No transactions to encode"); + ret = -EINVAL; + goto out; + } + + wrapper = list_first_entry(&wrappers->list, struct wrapper_msg, list); + msg = &wrapper->msg; + + msg->hdr.len = cpu_to_le32(sizeof(msg->hdr)); + + if (resources->dma_chunk_id) { + ret = encode_dma(qdev, resources->trans_hdr, wrappers, + &user_len, resources, usr); + msg->hdr.count = cpu_to_le32(1); + goto out; + } + + trace_qaic_control_dbg(qdev, "Number of transaction to encode is", + user_msg->count); + + for (i = 0; i < user_msg->count; ++i) { + if (user_len >= user_msg->len) { + trace_encode_error(qdev, "msg exceeds len"); + ret = -EINVAL; + break; + } + trans_hdr = (struct qaic_manage_trans_hdr *) + (user_msg->data + user_len); + if (user_len + trans_hdr->len > user_msg->len) { + trace_encode_error(qdev, "trans exceeds msg len"); + ret = -EINVAL; + break; + } + + trace_qaic_control_dbg(qdev, 
"Encoding transaction", + trans_hdr->type); + + switch (trans_hdr->type) { + case TRANS_PASSTHROUGH_FROM_USR: + ret = encode_passthrough(qdev, trans_hdr, wrappers, + &user_len); + break; + case TRANS_DMA_XFER_FROM_USR: + ret = encode_dma(qdev, trans_hdr, wrappers, &user_len, + resources, usr); + break; + case TRANS_ACTIVATE_FROM_USR: + ret = encode_activate(qdev, trans_hdr, wrappers, + &user_len, resources); + break; + case TRANS_DEACTIVATE_FROM_USR: + ret = encode_deactivate(qdev, trans_hdr, &user_len, usr); + break; + case TRANS_STATUS_FROM_USR: + ret = encode_status(qdev, trans_hdr, wrappers, + &user_len); + break; + default: + trace_encode_error(qdev, "unknown trans"); + ret = -EINVAL; + break; + } + + if (ret) + break; + } + + if (user_len != user_msg->len) { + trace_encode_error(qdev, "msg processed exceeds len"); + ret = -EINVAL; + } +out: + if (ret) { + free_dma_xfers(qdev, resources); + free_dbc_buf(qdev, resources); + return ret; + } + + return 0; +} + +static int decode_passthrough(struct qaic_device *qdev, void *trans, + struct manage_msg *user_msg, u32 *msg_len) +{ + struct _trans_passthrough *in_trans = trans; + struct qaic_manage_trans_passthrough *out_trans; + u32 len; + + out_trans = (void *)user_msg->data + user_msg->len; + + len = le32_to_cpu(in_trans->hdr.len); + if (len % 8 != 0) { + trace_decode_error(qdev, "Invalid data length of passthrough data. 
Data length should be multiple of 8."); + return -EINVAL; + } + if (user_msg->len + len > QAIC_MANAGE_MAX_MSG_LENGTH) { + trace_decode_error(qdev, "passthrough trans exceeds msg len"); + return -ENOSPC; + } + + memcpy(out_trans, in_trans, len); + user_msg->len += len; + *msg_len += len; + out_trans->hdr.type = le32_to_cpu(in_trans->hdr.type); + + trace_qaic_decode_passthrough(qdev, out_trans); + + return 0; +} + +static int decode_activate(struct qaic_device *qdev, void *trans, + struct manage_msg *user_msg, u32 *msg_len, + struct ioctl_resources *resources, + struct qaic_user *usr) +{ + struct _trans_activate_from_dev *in_trans = trans; + struct qaic_manage_trans_activate_from_dev *out_trans; + u32 len; + + out_trans = (void *)user_msg->data + user_msg->len; + + len = le32_to_cpu(in_trans->hdr.len); + if (user_msg->len + len > QAIC_MANAGE_MAX_MSG_LENGTH) { + trace_decode_error(qdev, "activate trans exceeds msg len"); + return -ENOSPC; + } + + user_msg->len += len; + *msg_len += len; + out_trans->hdr.type = le32_to_cpu(in_trans->hdr.type); + out_trans->hdr.len = len; + out_trans->status = le32_to_cpu(in_trans->status); + out_trans->dbc_id = le32_to_cpu(in_trans->dbc_id); + out_trans->options = le64_to_cpu(in_trans->options); + + if (!resources->buf) { + trace_decode_error(qdev, "activate with no assigned resources"); + /* how did we get an activate response with a request? */ + return -EINVAL; + } + + if (out_trans->dbc_id >= qdev->num_dbc) { + trace_decode_error(qdev, "activate invalid dbc id"); + /* + * The device assigned an invalid resource, which should never + * happen. Return an error so the user can try to recover. + */ + return -ENODEV; + } + + if (out_trans->status) { + trace_decode_error(qdev, "activate device failed"); + /* + * Allocating resources failed on device side. This is not an + * expected behaviour, user is expected to handle this situation. 
+ */ + return -ECANCELED; + } + + resources->status = out_trans->status; + resources->dbc_id = out_trans->dbc_id; + save_dbc_buf(qdev, resources, usr); + + trace_qaic_decode_activate(qdev, out_trans); + + return 0; +} + +static int decode_deactivate(struct qaic_device *qdev, void *trans, + u32 *msg_len, struct qaic_user *usr) +{ + struct _trans_deactivate_from_dev *in_trans = trans; + u32 dbc_id = le32_to_cpu(in_trans->dbc_id); + u32 status = le32_to_cpu(in_trans->status); + + if (dbc_id >= qdev->num_dbc) { + trace_decode_error(qdev, "deactivate invalid dbc id"); + /* + * The device assigned an invalid resource, which should never + * happen. Inject an error so the user can try to recover. + */ + return -ENODEV; + } + if (status) { + trace_decode_error(qdev, "deactivate device failed"); + /* + * Releasing resources failed on the device side, which puts + * us in a bind since they may still be in use, so enable the + * dbc. User is expected to retry deactivation. + */ + enable_dbc(qdev, dbc_id, usr); + return -ECANCELED; + } + + release_dbc(qdev, dbc_id, true); + *msg_len += sizeof(*in_trans); + + trace_qaic_decode_deactivate(qdev, dbc_id, status); + + return 0; +} + +static int decode_status(struct qaic_device *qdev, void *trans, + struct manage_msg *user_msg, u32 *user_len, + struct _msg *msg) +{ + struct _trans_status_from_dev *in_trans = trans; + struct qaic_manage_trans_status_from_dev *out_trans; + u32 len; + + out_trans = (void *)user_msg->data + user_msg->len; + + len = le32_to_cpu(in_trans->hdr.len); + if (user_msg->len + len > QAIC_MANAGE_MAX_MSG_LENGTH) { + trace_decode_error(qdev, "status trans exceeds msg len"); + return -ENOSPC; + } + + out_trans->hdr.type = TRANS_STATUS_FROM_DEV; + out_trans->hdr.len = len; + out_trans->major = le16_to_cpu(in_trans->major); + out_trans->minor = le16_to_cpu(in_trans->minor); + out_trans->status_flags = le64_to_cpu(in_trans->status_flags); + out_trans->status = le32_to_cpu(in_trans->status); + *user_len += 
le32_to_cpu(in_trans->hdr.len); + user_msg->len += len; + + if (out_trans->status) { + trace_decode_error(qdev, "Querying status of device failed"); + return -ECANCELED; + } + if (out_trans->status_flags & BIT(0) && !valid_crc(msg)) { + trace_decode_error(qdev, "Bad CRC on rev'd message"); + return -EPIPE; + } + + trace_qaic_decode_status(qdev, out_trans); + + return 0; +} + +static int decode_message(struct qaic_device *qdev, + struct manage_msg *user_msg, struct _msg *msg, + struct ioctl_resources *resources, + struct qaic_user *usr) +{ + struct _trans_hdr *trans_hdr; + u32 msg_len = 0; + u32 msg_hdr_len = le32_to_cpu(msg->hdr.len); + int ret; + int i; + + if (msg_hdr_len > QAIC_MANAGE_MAX_MSG_LENGTH) { + trace_decode_error(qdev, "msg to decode len greater than size"); + return -EINVAL; + } + + user_msg->len = 0; + user_msg->count = le32_to_cpu(msg->hdr.count); + + trace_qaic_control_dbg(qdev, "Number of transaction to decode is", + user_msg->count); + + for (i = 0; i < user_msg->count; ++i) { + trans_hdr = (struct _trans_hdr *)(msg->data + msg_len); + if (msg_len + le32_to_cpu(trans_hdr->len) > msg_hdr_len) { + trace_decode_error(qdev, "trans len exceeds msg len"); + return -EINVAL; + } + + trace_qaic_control_dbg(qdev, "Decoding transaction", + le32_to_cpu(trans_hdr->type)); + + switch (le32_to_cpu(trans_hdr->type)) { + case TRANS_PASSTHROUGH_FROM_DEV: + ret = decode_passthrough(qdev, trans_hdr, user_msg, + &msg_len); + break; + case TRANS_ACTIVATE_FROM_DEV: + ret = decode_activate(qdev, trans_hdr, user_msg, + &msg_len, resources, usr); + break; + case TRANS_DEACTIVATE_FROM_DEV: + ret = decode_deactivate(qdev, trans_hdr, &msg_len, usr); + break; + case TRANS_STATUS_FROM_DEV: + ret = decode_status(qdev, trans_hdr, user_msg, + &msg_len, msg); + break; + default: + trace_decode_error(qdev, "unknown trans type"); + return -EINVAL; + } + + if (ret) + return ret; + } + + if (msg_len != (msg_hdr_len - sizeof(msg->hdr))) { + trace_decode_error(qdev, "decoded msg ended 
up longer than final trans"); + return -EINVAL; + } + + return 0; +} + +static void *msg_xfer(struct qaic_device *qdev, struct wrapper_list *wrappers, + u32 seq_num, bool ignore_signal) +{ + struct xfer_queue_elem elem; + struct wrapper_msg *w; + struct _msg *out_buf; + int retry_count; + long ret; + + if (qdev->in_reset) { + mutex_unlock(&qdev->cntl_mutex); + return ERR_PTR(-ENODEV); + } + + elem.seq_num = seq_num; + elem.buf = NULL; + init_completion(&elem.xfer_done); + if (likely(!qdev->cntl_lost_buf)) { + /* + * The max size of request to device is QAIC_MANAGE_EXT_MSG_LENGTH. + * The max size of response from device is QAIC_MANAGE_MAX_MSG_LENGTH. + */ + out_buf = kmalloc(QAIC_MANAGE_MAX_MSG_LENGTH, GFP_KERNEL); + if (!out_buf) { + mutex_unlock(&qdev->cntl_mutex); + return ERR_PTR(-ENOMEM); + } + + ret = mhi_queue_buf(qdev->cntl_ch, DMA_FROM_DEVICE, + out_buf, QAIC_MANAGE_MAX_MSG_LENGTH, + MHI_EOT); + if (ret) { + mutex_unlock(&qdev->cntl_mutex); + trace_qaic_mhi_queue_error(qdev, "mhi queue from device failed", + ret); + return ERR_PTR(ret); + } + } else { + /* + * we lost a buffer because we queued a recv buf, but then + * queuing the corresponding tx buf failed. To try to avoid + * a memory leak, lets reclaim it and use it for this + * transaction. + */ + qdev->cntl_lost_buf = false; + } + + list_for_each_entry(w, &wrappers->list, list) { + kref_get(&w->ref_count); + retry_count = 0; +retry: + ret = mhi_queue_buf(qdev->cntl_ch, DMA_TO_DEVICE, &w->msg, + w->len, + list_is_last(&w->list, &wrappers->list) ? 
+ MHI_EOT : MHI_CHAIN); + if (ret) { + if (ret == -EAGAIN && + retry_count++ < QAIC_MHI_RETRY_MAX) { + msleep_interruptible(QAIC_MHI_RETRY_WAIT_MS); + if (!signal_pending(current)) + goto retry; + } + + qdev->cntl_lost_buf = true; + kref_put(&w->ref_count, free_wrapper); + mutex_unlock(&qdev->cntl_mutex); + trace_qaic_mhi_queue_error(qdev, "mhi queue to device failed", + ret); + return ERR_PTR(ret); + } + } + + list_add_tail(&elem.list, &qdev->cntl_xfer_list); + mutex_unlock(&qdev->cntl_mutex); + + if (ignore_signal) + ret = wait_for_completion_timeout(&elem.xfer_done, + control_resp_timeout * HZ); + else + ret = wait_for_completion_interruptible_timeout(&elem.xfer_done, + control_resp_timeout * HZ); + /* + * not using _interruptible because we have to clean up or we'll + * likely cause memory corruption + */ + mutex_lock(&qdev->cntl_mutex); + if (!list_empty(&elem.list)) + list_del(&elem.list); + if (!ret && !elem.buf) + ret = -ETIMEDOUT; + else if (ret > 0 && !elem.buf) + ret = -EIO; + mutex_unlock(&qdev->cntl_mutex); + + if (ret < 0) { + trace_qaic_mhi_queue_error(qdev, "No response element from device", + ret); + kfree(elem.buf); + return ERR_PTR(ret); + } else if (!qdev->valid_crc(elem.buf)) { + trace_qaic_mhi_queue_error(qdev, "Bad CRC on recv'd message", + -EPIPE); + kfree(elem.buf); + return ERR_PTR(-EPIPE); + } + + return elem.buf; +} + +/* Add a transaction to abort the outstanding DMA continuation */ +static int abort_dma_cont(struct qaic_device *qdev, + struct wrapper_list *wrappers, u32 dma_chunk_id) +{ + struct _trans_dma_xfer *out_trans; + u32 size = sizeof(*out_trans); + struct wrapper_msg *wrapper; + struct wrapper_msg *w; + struct _msg *msg; + + wrapper = list_first_entry(&wrappers->list, struct wrapper_msg, list); + msg = &wrapper->msg; + + wrapper = add_wrapper(wrappers, + offsetof(struct wrapper_msg, trans) + sizeof(*out_trans)); + + if (!wrapper) { + trace_encode_error(qdev, "abort dma cont alloc fail"); + return -ENOMEM; + } + + /* Remove all 
but the first wrapper which has the msg header */ + list_for_each_entry_safe(wrapper, w, &wrappers->list, list) + if (!list_is_first(&wrapper->list, &wrappers->list)) + kref_put(&wrapper->ref_count, free_wrapper); + + out_trans = (struct _trans_dma_xfer *)&wrapper->trans; + out_trans->hdr.type = cpu_to_le32(TRANS_DMA_XFER_TO_DEV); + out_trans->hdr.len = cpu_to_le32(size); + out_trans->tag = cpu_to_le32(0); + out_trans->count = cpu_to_le32(0); + out_trans->dma_chunk_id = cpu_to_le32(dma_chunk_id); + + msg->hdr.len = cpu_to_le32(size + sizeof(*msg)); + msg->hdr.count = cpu_to_le32(1); + wrapper->len = size; + + return 0; +} + +static struct wrapper_list *alloc_wrapper_list(void) +{ + struct wrapper_list *wrappers; + + wrappers = kmalloc(sizeof(*wrappers), GFP_KERNEL); + if (!wrappers) + return NULL; + INIT_LIST_HEAD(&wrappers->list); + spin_lock_init(&wrappers->lock); + + return wrappers; +} + +static int __qaic_manage(struct qaic_device *qdev, struct qaic_user *usr, + struct manage_msg *user_msg, + struct ioctl_resources *resources, + struct _msg **rsp) +{ + struct wrapper_list *wrappers; + struct wrapper_msg *wrapper; + struct wrapper_msg *w; + bool all_done = false; + struct _msg *msg; + int ret; + + wrappers = alloc_wrapper_list(); + if (!wrappers) { + trace_manage_error(qdev, usr, "unable to alloc wrappers"); + return -ENOMEM; + } + + wrapper = add_wrapper(wrappers, sizeof(*wrapper)); + if (!wrapper) { + trace_manage_error(qdev, usr, "failed to add wrapper"); + kfree(wrappers); + return -ENOMEM; + } + + msg = &wrapper->msg; + wrapper->len = sizeof(*msg); + + ret = encode_message(qdev, user_msg, wrappers, resources, usr); + if (ret && resources->dma_chunk_id) + ret = abort_dma_cont(qdev, wrappers, resources->dma_chunk_id); + if (ret) + goto encode_failed; + + ret = mutex_lock_interruptible(&qdev->cntl_mutex); + if (ret) + goto lock_failed; + + msg->hdr.magic_number = MANAGE_MAGIC_NUMBER; + msg->hdr.sequence_number = cpu_to_le32(qdev->next_seq_num++); + + if (usr) 
{ + msg->hdr.handle = cpu_to_le32(usr->handle); + msg->hdr.partition_id = cpu_to_le32(usr->qddev->partition_id); + } else { + msg->hdr.handle = 0; + msg->hdr.partition_id = cpu_to_le32(QAIC_NO_PARTITION); + } + + msg->hdr.padding = cpu_to_le32(0); + msg->hdr.crc32 = cpu_to_le32(qdev->gen_crc(wrappers)); + + /* msg_xfer releases the mutex */ + *rsp = msg_xfer(qdev, wrappers, qdev->next_seq_num - 1, false); + if (IS_ERR(*rsp)) { + trace_manage_error(qdev, usr, "failed to xmit to device"); + ret = PTR_ERR(*rsp); + } + +lock_failed: + free_dma_xfers(qdev, resources); +encode_failed: + spin_lock(&wrappers->lock); + list_for_each_entry_safe(wrapper, w, &wrappers->list, list) + kref_put(&wrapper->ref_count, free_wrapper); + all_done = list_empty(&wrappers->list); + spin_unlock(&wrappers->lock); + if (all_done) + kfree(wrappers); + + return ret; +} + +static int qaic_manage(struct qaic_device *qdev, struct qaic_user *usr, + struct manage_msg *user_msg) +{ + struct _trans_dma_xfer_cont *dma_cont = NULL; + struct ioctl_resources resources; + struct _msg *rsp = NULL; + int ret; + + memset(&resources, 0, sizeof(struct ioctl_resources)); + + INIT_LIST_HEAD(&resources.dma_xfers); + + if (user_msg->len > QAIC_MANAGE_MAX_MSG_LENGTH || + user_msg->count > QAIC_MANAGE_MAX_MSG_LENGTH / sizeof(struct qaic_manage_trans_hdr)) { + trace_manage_error(qdev, usr, "msg from userspace too long or too many transactions"); + return -EINVAL; + } + +dma_xfer_continue: + ret = __qaic_manage(qdev, usr, user_msg, &resources, &rsp); + if (ret) + return ret; + /* dma_cont should be the only transaction if present */ + if (le32_to_cpu(rsp->hdr.count) == 1) { + dma_cont = (struct _trans_dma_xfer_cont *)rsp->data; + if (le32_to_cpu(dma_cont->hdr.type) != TRANS_DMA_XFER_CONT) + dma_cont = NULL; + } + if (dma_cont) { + if (le32_to_cpu(dma_cont->dma_chunk_id) == resources.dma_chunk_id && + le64_to_cpu(dma_cont->xferred_size) == resources.xferred_dma_size) { + kfree(rsp); + goto dma_xfer_continue; + } + + 
trace_manage_error(qdev, usr, "wrong size/id for DMA continuation"); + ret = -EINVAL; + goto dma_cont_failed; + } + + ret = decode_message(qdev, user_msg, rsp, &resources, usr); + +dma_cont_failed: + free_dbc_buf(qdev, &resources); + kfree(rsp); + return ret; +} + +int qaic_manage_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv) +{ + struct qaic_manage_msg *user_msg; + struct qaic_device *qdev; + struct manage_msg *msg; + struct qaic_user *usr; + u8 __user *user_data; + int qdev_rcu_id; + int usr_rcu_id; + int ret; + + usr = file_priv->driver_priv; + + usr_rcu_id = srcu_read_lock(&usr->qddev_lock); + if (!usr->qddev) { + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return -ENODEV; + } + + qdev = usr->qddev->qdev; + + qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return -ENODEV; + } + + user_msg = data; + + if (user_msg->len > QAIC_MANAGE_MAX_MSG_LENGTH) { + trace_manage_error(qdev, usr, "user message too long"); + ret = -EINVAL; + goto out; + } + + msg = kzalloc(QAIC_MANAGE_MAX_MSG_LENGTH + sizeof(*msg), GFP_KERNEL); + if (!msg) { + trace_manage_error(qdev, usr, "no mem for userspace message"); + ret = -ENOMEM; + goto out; + } + + msg->len = user_msg->len; + msg->count = user_msg->count; + + user_data = u64_to_user_ptr(user_msg->data); + + if (copy_from_user(msg->data, user_data, user_msg->len)) { + trace_manage_error(qdev, usr, "failed to copy message body from userspace"); + ret = -EFAULT; + goto free_msg; + } + + ret = qaic_manage(qdev, usr, msg); + + /* + * If the qaic_manage() is successful then we copy the message onto + * userspace memory but we have an exception for -ECANCELED. + * For -ECANCELED, it means that device has NACKed the message with a + * status error code which userspace would like to know. 
+ */ + if (ret == -ECANCELED || !ret) { + if (copy_to_user(user_data, msg->data, msg->len)) { + trace_manage_error(qdev, usr, "failed to copy to userspace"); + ret = -EFAULT; + } else { + user_msg->len = msg->len; + user_msg->count = msg->count; + } + } + +free_msg: + kfree(msg); +out: + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return ret; +} + +int get_cntl_version(struct qaic_device *qdev, struct qaic_user *usr, + u16 *major, u16 *minor) +{ + int ret; + struct manage_msg *user_msg; + struct qaic_manage_trans_status_to_dev *status_query; + struct qaic_manage_trans_status_from_dev *status_result; + + user_msg = kmalloc(sizeof(*user_msg) + sizeof(*status_result), GFP_KERNEL); + if (!user_msg) { + ret = -ENOMEM; + goto out; + } + user_msg->len = sizeof(*status_query); + user_msg->count = 1; + + status_query = (struct qaic_manage_trans_status_to_dev *)user_msg->data; + status_query->hdr.type = TRANS_STATUS_FROM_USR; + status_query->hdr.len = sizeof(status_query->hdr); + + ret = qaic_manage(qdev, usr, user_msg); + if (ret) + goto kfree_user_msg; + status_result = + (struct qaic_manage_trans_status_from_dev *)user_msg->data; + *major = status_result->major; + *minor = status_result->minor; + + if (status_result->status_flags & BIT(0)) { /* device is using CRC */ + /* By default qdev->gen_crc is programmed to generate CRC */ + qdev->valid_crc = valid_crc; + } else { + /* By default qdev->valid_crc is programmed to bypass CRC */ + qdev->gen_crc = gen_crc_stub; + } + +kfree_user_msg: + kfree(user_msg); +out: + return ret; +} + +static void resp_worker(struct work_struct *work) +{ + struct resp_work *resp = container_of(work, struct resp_work, work); + struct qaic_device *qdev = resp->qdev; + struct _msg *msg = resp->buf; + struct xfer_queue_elem *elem; + struct xfer_queue_elem *i; + bool found = false; + + if (msg->hdr.magic_number != MANAGE_MAGIC_NUMBER) { + kfree(msg); + kfree(resp); + return; + } + + 
mutex_lock(&qdev->cntl_mutex); + list_for_each_entry_safe(elem, i, &qdev->cntl_xfer_list, list) { + if (elem->seq_num == le32_to_cpu(msg->hdr.sequence_number)) { + found = true; + list_del_init(&elem->list); + elem->buf = msg; + complete_all(&elem->xfer_done); + break; + } + } + mutex_unlock(&qdev->cntl_mutex); + + if (!found) + /* request must have timed out, drop packet */ + kfree(msg); + + kfree(resp); +} + +static void free_wrapper_from_list(struct wrapper_list *wrappers, + struct wrapper_msg *wrapper) +{ + bool all_done = false; + + spin_lock(&wrappers->lock); + kref_put(&wrapper->ref_count, free_wrapper); + all_done = list_empty(&wrappers->list); + spin_unlock(&wrappers->lock); + + if (all_done) + kfree(wrappers); +} + +void qaic_mhi_ul_xfer_cb(struct mhi_device *mhi_dev, + struct mhi_result *mhi_result) +{ + struct _msg *msg = mhi_result->buf_addr; + struct wrapper_msg *wrapper = container_of(msg, struct wrapper_msg, + msg); + + free_wrapper_from_list(wrapper->head, wrapper); +} + +void qaic_mhi_dl_xfer_cb(struct mhi_device *mhi_dev, + struct mhi_result *mhi_result) +{ + struct qaic_device *qdev = dev_get_drvdata(&mhi_dev->dev); + struct _msg *msg = mhi_result->buf_addr; + struct resp_work *resp; + + if (mhi_result->transaction_status) { + kfree(msg); + return; + } + + resp = kmalloc(sizeof(*resp), GFP_ATOMIC); + if (!resp) { + pci_err(qdev->pdev, "dl_xfer_cb alloc fail, dropping message\n"); + kfree(msg); + return; + } + + INIT_WORK(&resp->work, resp_worker); + resp->qdev = qdev; + resp->buf = msg; + queue_work(qdev->cntl_wq, &resp->work); +} + +int qaic_control_open(struct qaic_device *qdev) +{ + if (!qdev->cntl_ch) + return -ENODEV; + + qdev->cntl_lost_buf = false; + /* + * By default qaic should assume that device has CRC enabled. + * Qaic comes to know if device has CRC enabled or disabled during the + * device status transaction, which is the first transaction performed + * on control channel. 
+ * + * So CRC validation of first device status transaction response is + * ignored (by calling valid_crc_stub) and is done later during decoding + * if device has CRC enabled. + * Now that qaic knows whether device has CRC enabled or not it acts + * accordingly + */ + qdev->gen_crc = gen_crc; + qdev->valid_crc = valid_crc_stub; + + return mhi_prepare_for_transfer(qdev->cntl_ch); +} + +void qaic_control_close(struct qaic_device *qdev) +{ + mhi_unprepare_from_transfer(qdev->cntl_ch); +} + +void qaic_release_usr(struct qaic_device *qdev, struct qaic_user *usr) +{ + struct _trans_terminate_to_dev *trans; + struct wrapper_list *wrappers; + struct wrapper_msg *wrapper; + struct _msg *msg; + struct _msg *rsp; + + wrappers = alloc_wrapper_list(); + if (!wrappers) { + trace_manage_error(qdev, usr, "unable to alloc wrappers"); + return; + } + + wrapper = add_wrapper(wrappers, sizeof(*wrapper) + sizeof(*msg) + + sizeof(*trans)); + if (!wrapper) + return; + + msg = &wrapper->msg; + + trans = (struct _trans_terminate_to_dev *)msg->data; + + trans->hdr.type = cpu_to_le32(TRANS_TERMINATE_TO_DEV); + trans->hdr.len = cpu_to_le32(sizeof(*trans)); + trans->handle = cpu_to_le32(usr->handle); + + mutex_lock(&qdev->cntl_mutex); + wrapper->len = sizeof(msg->hdr) + sizeof(*trans); + msg->hdr.magic_number = MANAGE_MAGIC_NUMBER; + msg->hdr.sequence_number = cpu_to_le32(qdev->next_seq_num++); + msg->hdr.len = cpu_to_le32(wrapper->len); + msg->hdr.count = cpu_to_le32(1); + msg->hdr.handle = cpu_to_le32(usr->handle); + msg->hdr.padding = cpu_to_le32(0); + msg->hdr.crc32 = cpu_to_le32(qdev->gen_crc(wrappers)); + + /* + * msg_xfer releases the mutex + * We don't care about the return of msg_xfer since we will not do + * anything different based on what happens. + * We ignore pending signals since one will be set if the user is + * killed, and we need give the device a chance to cleanup, otherwise + * DMA may still be in progress when we return. 
+ */ + rsp = msg_xfer(qdev, wrappers, qdev->next_seq_num - 1, true); + if (!IS_ERR(rsp)) + kfree(rsp); + free_wrapper_from_list(wrappers, wrapper); +} + +void wake_all_cntl(struct qaic_device *qdev) +{ + struct xfer_queue_elem *elem; + struct xfer_queue_elem *i; + + mutex_lock(&qdev->cntl_mutex); + list_for_each_entry_safe(elem, i, &qdev->cntl_xfer_list, list) { + list_del_init(&elem->list); + complete_all(&elem->xfer_done); + } + mutex_unlock(&qdev->cntl_mutex); +} + +int qaic_data_get_reservation(struct qaic_device *qdev, struct qaic_user *usr, + void *data, u32 *partition_id, u16 *remove) +{ + struct _trans_validate_part_from_dev *trans_rsp; + struct _trans_validate_part_to_dev *trans_req; + struct qaic_part_dev *user_msg; + struct wrapper_list *wrappers; + struct wrapper_msg *wrapper; + struct _msg *msg_req; + struct _msg *msg_rsp; + size_t msg_rsp_len; + int ret = 0; + + user_msg = (struct qaic_part_dev *)data; + /* -1 for partition_id is a special value, so check for it */ + if (user_msg->partition_id == QAIC_NO_PARTITION || user_msg->remove > 1) { + ret = -EINVAL; + goto out; + } + + *partition_id = user_msg->partition_id; + *remove = user_msg->remove; + + /* + * In case of a remove we do not need to do a fw partition check, the + * right user is validated when removing the device in the device + * remove code. So, in case remove is set to 1, we just copy the + * parameters and return from the call. 
+ */ + if (*remove) + return 0; + + wrappers = alloc_wrapper_list(); + if (!wrappers) { + trace_manage_error(qdev, usr, "unable to alloc wrappers"); + return -ENOMEM; + } + + wrapper = add_wrapper(wrappers, sizeof(*wrapper) + sizeof(*msg_req) + + sizeof(*trans_req)); + if (!wrapper) { + kfree(wrappers); + return -ENOMEM; + } + + msg_req = &wrapper->msg; + + trans_req = (struct _trans_validate_part_to_dev *)msg_req->data; + trans_req->hdr.type = cpu_to_le32(TRANS_VALIDATE_PARTITION_TO_DEV); + trans_req->hdr.len = cpu_to_le32(sizeof(*trans_req)); + trans_req->part_id = cpu_to_le32(*partition_id); + + mutex_lock(&qdev->cntl_mutex); + wrapper->len = sizeof(msg_req->hdr) + sizeof(*trans_req); + msg_req->hdr.len = cpu_to_le32(wrapper->len); + msg_req->hdr.sequence_number = cpu_to_le32(qdev->next_seq_num++); + msg_req->hdr.magic_number = MANAGE_MAGIC_NUMBER; + msg_req->hdr.handle = cpu_to_le32(usr->handle); + msg_req->hdr.count = cpu_to_le32(1); + msg_req->hdr.padding = cpu_to_le32(0); + msg_req->hdr.crc32 = cpu_to_le32(qdev->gen_crc(wrappers)); + + /* + * msg_xfer releases the mutex + * The msg count will always be 1 in the response + */ + msg_rsp = msg_xfer(qdev, wrappers, qdev->next_seq_num - 1, false); + if (IS_ERR(msg_rsp)) { + ret = PTR_ERR(msg_rsp); + goto kfree_wrapper; + } + + msg_rsp_len = sizeof(msg_rsp->hdr) + sizeof(*trans_rsp); + if (le32_to_cpu(msg_rsp->hdr.count) != 1 || + le32_to_cpu(msg_rsp->hdr.len) < msg_rsp_len) { + ret = -EINVAL; + goto kfree_msg_rsp; + } + + trans_rsp = (struct _trans_validate_part_from_dev *)msg_rsp->data; + if (le32_to_cpu(trans_rsp->status)) + ret = -EPERM; + +kfree_msg_rsp: + kfree(msg_rsp); +kfree_wrapper: + free_wrapper_from_list(wrappers, wrapper); +out: + return ret; +} From patchwork Mon Aug 15 18:42:28 2022 X-Patchwork-Submitter: Jeffrey Hugo X-Patchwork-Id: 597286 From: Jeffrey Hugo Subject: [RFC PATCH 06/14] drm/qaic: Add datapath Date: Mon, 15 Aug 2022 12:42:28 -0600 Message-ID: <1660588956-24027-7-git-send-email-quic_jhugo@quicinc.com> In-Reply-To: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com> X-Mailing-List: linux-arm-msm@vger.kernel.org Add the datapath component that manages BOs and submits them to running workloads on the qaic device via the dma_bridge hardware. 
Change-Id: I7a94cfb2741491f5fc044ae537f53d6cc0d97fee Signed-off-by: Jeffrey Hugo --- drivers/gpu/drm/qaic/qaic_data.c | 2152 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 2152 insertions(+) create mode 100644 drivers/gpu/drm/qaic/qaic_data.c diff --git a/drivers/gpu/drm/qaic/qaic_data.c b/drivers/gpu/drm/qaic/qaic_data.c new file mode 100644 index 0000000..12d8b39 --- /dev/null +++ b/drivers/gpu/drm/qaic/qaic_data.c @@ -0,0 +1,2152 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* Copyright (c) 2019-2021, The Linux Foundation. All rights reserved. */ +/* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved. */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "qaic.h" +#include "qaic_trace.h" + +#define SEM_VAL_MASK GENMASK_ULL(11, 0) +#define SEM_INDEX_MASK GENMASK_ULL(4, 0) +#define BULK_XFER BIT(3) +#define GEN_COMPLETION BIT(4) +#define INBOUND_XFER 1 +#define OUTBOUND_XFER 2 +#define REQHP_OFF 0x0 /* we read this */ +#define REQTP_OFF 0x4 /* we write this */ +#define RSPHP_OFF 0x8 /* we write this */ +#define RSPTP_OFF 0xc /* we read this */ + +#define ENCODE_SEM(val, index, sync, cmd, flags) \ + ((val) | \ + (index) << 16 | \ + (sync) << 22 | \ + (cmd) << 24 | \ + ((cmd) ? BIT(31) : 0) | \ + (((flags) & SEM_INSYNCFENCE) ? BIT(30) : 0) | \ + (((flags) & SEM_OUTSYNCFENCE) ? BIT(29) : 0)) +#define NUM_EVENTS 128 +#define NUM_DELAYS 10 + +static unsigned int wait_exec_default_timeout = 5000; /* 5 sec default */ +module_param(wait_exec_default_timeout, uint, 0600); + +static unsigned int datapath_poll_interval_us = 100; /* 100 usec default */ +module_param(datapath_poll_interval_us, uint, 0600); + +struct dbc_req { /* everything must be little endian encoded */ + /* + * A request ID is assigned to each memory handle going in DMA queue. 
+ * As a single memory handle can enqueue multiple elements in DMA queue + * all of them will have the same request ID. + */ + __le16 req_id; + /* Future use */ + __u8 seq_id; + /* + * Special encoded variable + * 7 0 - Do not force to generate MSI after DMA is completed + * 1 - Force to generate MSI after DMA is completed + * 6:5 Reserved + * 4 1 - Generate completion element in the response queue + * 0 - No Completion Code + * 3 0 - DMA request is a linked list transfer + * 1 - DMA request is a Bulk transfer + * 2 Reserved + * 1:0 00 - No DMA transfer involved + * 01 - DMA transfer is part of inbound transfer + * 10 - DMA transfer has outbound transfer + * 11 - NA + */ + __u8 cmd; + __le32 resv; + /* Source address for the transfer */ + __le64 src_addr; + /* Destination address for the transfer */ + __le64 dest_addr; + /* Length of transfer request */ + __le32 len; + __le32 resv2; + /* Doorbell address */ + __le64 db_addr; + /* + * Special encoded variable + * 7 1 - Doorbell(db) write + * 0 - No doorbell write + * 6:2 Reserved + * 1:0 00 - 32 bit access, db address must be aligned to 32bit-boundary + * 01 - 16 bit access, db address must be aligned to 16bit-boundary + * 10 - 8 bit access, db address must be aligned to 8bit-boundary + * 11 - Reserved + */ + __u8 db_len; + __u8 resv3; + __le16 resv4; + /* 32 bit data written to doorbell address */ + __le32 db_data; + /* + * Special encoded variable + * All the fields of sem_cmdX are passed from user and all are ORed + * together to form sem_cmd. 
+ * 0:11 Semaphore value + * 15:12 Reserved + * 20:16 Semaphore index + * 21 Reserved + * 22 Semaphore Sync + * 23 Reserved + * 26:24 Semaphore command + * 28:27 Reserved + * 29 Semaphore DMA out bound sync fence + * 30 Semaphore DMA in bound sync fence + * 31 Enable semaphore command + */ + __le32 sem_cmd0; + __le32 sem_cmd1; + __le32 sem_cmd2; + __le32 sem_cmd3; +} __packed; + +struct dbc_rsp { /* everything must be little endian encoded */ + /* Request ID of the memory handle whose DMA transaction is completed */ + __le16 req_id; + /* Status of the DMA transaction. 0 : Success otherwise failure */ + __le16 status; +} __packed; + +inline int get_dbc_req_elem_size(void) +{ + return sizeof(struct dbc_req); +} + +inline int get_dbc_rsp_elem_size(void) +{ + return sizeof(struct dbc_rsp); +} + +static int reserve_pages(unsigned long start_pfn, unsigned long nr_pages, + bool reserve) +{ + unsigned long pfn; + unsigned long end_pfn = start_pfn + nr_pages; + struct page *page; + + for (pfn = start_pfn; pfn < end_pfn; pfn++) { + if (!pfn_valid(pfn)) + return -EINVAL; + page = pfn_to_page(pfn); + if (reserve) + SetPageReserved(page); + else + ClearPageReserved(page); + } + return 0; +} + +static void free_slice(struct kref *kref) +{ + struct bo_slice *slice = container_of(kref, struct bo_slice, ref_count); + + list_del(&slice->slice); + drm_gem_object_put(&slice->bo->base); + sg_free_table(slice->sgt); + kfree(slice->sgt); + kfree(slice->reqs); + kfree(slice); +} + +static int copy_sgt(struct qaic_device *qdev, struct sg_table **sgt_out, + struct sg_table *sgt_in, u64 size, u64 offset) +{ + int total_len, len, nents, offf = 0, offl = 0; + struct scatterlist *sg, *sgn, *sgf, *sgl; + struct sg_table *sgt; + int ret, j; + + /* find out number of relevant nents needed for this mem */ + total_len = 0; + sgf = NULL; + sgl = NULL; + nents = 0; + + size = size ? 
size : PAGE_SIZE; + for (sg = sgt_in->sgl; sg; sg = sg_next(sg)) { + len = sg_dma_len(sg); + + if (!len) + continue; + if (offset >= total_len && offset < total_len + len) { + sgf = sg; + offf = offset - total_len; + } + if (sgf) + nents++; + if (offset + size >= total_len && + offset + size <= total_len + len) { + sgl = sg; + offl = offset + size - total_len; + break; + } + total_len += len; + } + + if (!sgf || !sgl) { + trace_qaic_mem_err(qdev, "Failed to find SG first and/or SG last", ret); + ret = -EINVAL; + goto out; + } + + sgt = kzalloc(sizeof(*sgt), GFP_KERNEL); + if (!sgt) { + trace_qaic_mem_err(qdev, "Failed to allocate SG table structure", ret); + ret = -ENOMEM; + goto out; + } + + ret = sg_alloc_table(sgt, nents, GFP_KERNEL); + if (ret) { + trace_qaic_mem_err_1(qdev, "Failed to allocate SG table", + "SG table entries", ret, nents); + goto free_sgt; + } + + /* copy relevant sg node and fix page and length */ + sgn = sgf; + for_each_sgtable_sg(sgt, sg, j) { + memcpy(sg, sgn, sizeof(*sg)); + if (sgn == sgf) { + sg_dma_address(sg) += offf; + sg_dma_len(sg) -= offf; + sg_set_page(sg, sg_page(sgn), + sg_dma_len(sg), offf); + } else { + offf = 0; + } + if (sgn == sgl) { + sg_dma_len(sg) = offl - offf; + sg_set_page(sg, sg_page(sgn), + offl - offf, offf); + sg_mark_end(sg); + break; + } + sgn = sg_next(sgn); + } + + *sgt_out = sgt; + return ret; + +free_sgt: + kfree(sgt); +out: + *sgt_out = NULL; + return ret; +} + +static int encode_reqs(struct qaic_device *qdev, struct bo_slice *slice, + struct qaic_attach_slice_entry *req) +{ + __u8 cmd = BULK_XFER; + __le64 db_addr = cpu_to_le64(req->db_addr); + __u8 db_len; + __le32 db_data = cpu_to_le32(req->db_data); + struct scatterlist *sg; + u64 dev_addr; + int presync_sem; + int i; + + if (!slice->no_xfer) + cmd |= (slice->dir == DMA_TO_DEVICE ? 
INBOUND_XFER : + OUTBOUND_XFER); + + if (req->db_len && !IS_ALIGNED(req->db_addr, req->db_len / 8)) { + trace_qaic_mem_err_2(qdev, "Invalid Doorbell values", + "Doorbell length", "Doorbell address", + -EINVAL, req->db_len, req->db_addr); + return -EINVAL; + } + + presync_sem = req->sem0.presync + req->sem1.presync + + req->sem2.presync + req->sem3.presync; + if (presync_sem > 1) { + trace_qaic_mem_err_2(qdev, "Invalid presync values", + "sem0.presync", "sem1.presync", + -EINVAL, req->sem0.presync, + req->sem1.presync); + trace_qaic_mem_err_2(qdev, "", "sem2.presync", "sem3.presync", + -EINVAL, req->sem2.presync, + req->sem3.presync); + return -EINVAL; + } + + presync_sem = req->sem0.presync << 0 | req->sem1.presync << 1 | + req->sem2.presync << 2 | req->sem3.presync << 3; + + switch (req->db_len) { + case 32: + db_len = BIT(7); + break; + case 16: + db_len = BIT(7) | 1; + break; + case 8: + db_len = BIT(7) | 2; + break; + case 0: + db_len = 0; /* doorbell is not active for this command */ + break; + default: + trace_qaic_mem_err_1(qdev, "Invalid Doorbell length", "Doorbell length", + -EINVAL, req->db_len); + return -EINVAL; /* should never hit this */ + } + + /* + * When we end up splitting up a single request (ie a buf slice) into + * multiple DMA requests, we have to manage the sync data carefully. + * There can only be one presync sem. That needs to be on every xfer + * so that the DMA engine doesn't transfer data before the receiver is + * ready. We only do the doorbell and postsync sems after the xfer. + * To guarantee previous xfers for the request are complete, we use a + * fence. + */ + dev_addr = req->dev_addr; + for_each_sgtable_sg(slice->sgt, sg, i) { + slice->reqs[i].cmd = cmd; + slice->reqs[i].src_addr = + cpu_to_le64(slice->dir == DMA_TO_DEVICE ? + sg_dma_address(sg) : dev_addr); + slice->reqs[i].dest_addr = + cpu_to_le64(slice->dir == DMA_TO_DEVICE ? 
+ dev_addr : sg_dma_address(sg)); + /* + * sg_dma_len(sg) returns size of a DMA segment, maximum DMA + * segment size is set to UINT_MAX by qaic and hence return + * values of sg_dma_len(sg) can never exceed u32 range. So, + * by down sizing we are not corrupting the value. + */ + slice->reqs[i].len = cpu_to_le32((u32)sg_dma_len(sg)); + switch (presync_sem) { + case BIT(0): + slice->reqs[i].sem_cmd0 = cpu_to_le32(ENCODE_SEM(req->sem0.val, + req->sem0.index, + req->sem0.presync, + req->sem0.cmd, + req->sem0.flags)); + break; + case BIT(1): + slice->reqs[i].sem_cmd1 = cpu_to_le32(ENCODE_SEM(req->sem1.val, + req->sem1.index, + req->sem1.presync, + req->sem1.cmd, + req->sem1.flags)); + break; + case BIT(2): + slice->reqs[i].sem_cmd2 = cpu_to_le32(ENCODE_SEM(req->sem2.val, + req->sem2.index, + req->sem2.presync, + req->sem2.cmd, + req->sem2.flags)); + break; + case BIT(3): + slice->reqs[i].sem_cmd3 = cpu_to_le32(ENCODE_SEM(req->sem3.val, + req->sem3.index, + req->sem3.presync, + req->sem3.cmd, + req->sem3.flags)); + break; + } + dev_addr += sg_dma_len(sg); + } + /* add post transfer stuff to last segment */ + i--; + slice->reqs[i].cmd |= GEN_COMPLETION; + slice->reqs[i].db_addr = db_addr; + slice->reqs[i].db_len = db_len; + slice->reqs[i].db_data = db_data; + /* + * Add a fence if we have more than one request going to the hardware + * representing the entirety of the user request, and the user request + * has no presync condition. + * Fences are expensive, so we try to avoid them. We rely on the + * hardware behavior to avoid needing one when there is a presync + * condition. When a presync exists, all requests for that same + * presync will be queued into a fifo. Thus, since we queue the + * post xfer activity only on the last request we queue, the hardware + * will ensure that the last queued request is processed last, thus + * making sure the post xfer activity happens at the right time without + * a fence. 
+ */ + if (i && !presync_sem) + req->sem0.flags |= (slice->dir == DMA_TO_DEVICE ? + SEM_INSYNCFENCE : SEM_OUTSYNCFENCE); + slice->reqs[i].sem_cmd0 = cpu_to_le32(ENCODE_SEM(req->sem0.val, + req->sem0.index, + req->sem0.presync, + req->sem0.cmd, + req->sem0.flags)); + slice->reqs[i].sem_cmd1 = cpu_to_le32(ENCODE_SEM(req->sem1.val, + req->sem1.index, + req->sem1.presync, + req->sem1.cmd, + req->sem1.flags)); + slice->reqs[i].sem_cmd2 = cpu_to_le32(ENCODE_SEM(req->sem2.val, + req->sem2.index, + req->sem2.presync, + req->sem2.cmd, + req->sem2.flags)); + slice->reqs[i].sem_cmd3 = cpu_to_le32(ENCODE_SEM(req->sem3.val, + req->sem3.index, + req->sem3.presync, + req->sem3.cmd, + req->sem3.flags)); + + return 0; +} + +static int qaic_map_one_slice(struct qaic_device *qdev, struct qaic_bo *bo, + struct qaic_attach_slice_entry *slice_ent) +{ + struct sg_table *sgt = NULL; + struct bo_slice *slice; + int ret; + + ret = copy_sgt(qdev, &sgt, bo->sgt, slice_ent->size, slice_ent->offset); + if (ret) { + trace_qaic_mem_err(qdev, "Failed to copy sgt", ret); + goto out; + } + + slice = kmalloc(sizeof(*slice), GFP_KERNEL); + if (!slice) { + ret = -ENOMEM; + trace_qaic_mem_err(qdev, "Failed to allocate memory for slice handle", ret); + goto free_sgt; + } + + slice->reqs = kcalloc(sgt->nents, sizeof(*slice->reqs), GFP_KERNEL); + if (!slice->reqs) { + ret = -ENOMEM; + trace_qaic_mem_err(qdev, "Failed to allocate memory for requests", ret); + goto free_slice; + } + + slice->no_xfer = !slice_ent->size; + slice->sgt = sgt; + slice->nents = sgt->nents; + slice->dir = bo->dir; + slice->bo = bo; + slice->size = slice_ent->size; + slice->offset = slice_ent->offset; + + ret = encode_reqs(qdev, slice, slice_ent); + if (ret) { + trace_qaic_mem_err(qdev, "Failed to encode requests", ret); + goto free_req; + } + + bo->total_slice_nents += sgt->nents; + kref_init(&slice->ref_count); + drm_gem_object_get(&bo->base); + list_add_tail(&slice->slice, &bo->slices); + + return 0; + +free_req: + 
kfree(slice->reqs); +free_slice: + kfree(slice); +free_sgt: + sg_free_table(sgt); + kfree(sgt); +out: + return ret; +} + +static int create_sgt(struct qaic_device *qdev, struct sg_table **sgt_out, + u64 size) +{ + struct scatterlist *sg; + struct sg_table *sgt; + struct page **pages; + int *pages_order; + int buf_extra; + int max_order; + int nr_pages; + int ret = 0; + int i, j, k; + int order; + + if (size) { + nr_pages = DIV_ROUND_UP(size, PAGE_SIZE); + /* + * calculate how much extra we are going to allocate, to remove + * later + */ + buf_extra = (PAGE_SIZE - size % PAGE_SIZE) % PAGE_SIZE; + max_order = min(MAX_ORDER - 1, get_order(size)); + } else { + /* allocate a single page for book keeping */ + nr_pages = 1; + buf_extra = 0; + max_order = 0; + } + + pages = kvmalloc_array(nr_pages, sizeof(*pages) + sizeof(*pages_order), GFP_KERNEL); + if (!pages) { + ret = -ENOMEM; + goto out; + } + pages_order = (void *)pages + sizeof(*pages) * nr_pages; + + /* + * Allocate requested memory, using alloc_pages. It is possible to allocate + * the requested memory in multiple chunks by calling alloc_pages + * multiple times. Use SG table to handle multiple allocated pages. + */ + i = 0; + while (nr_pages > 0) { + order = min(get_order(nr_pages * PAGE_SIZE), max_order); + while (1) { + pages[i] = alloc_pages(GFP_KERNEL | GFP_HIGHUSER | + __GFP_NOWARN | __GFP_ZERO | + (order ? 
__GFP_NORETRY : __GFP_RETRY_MAYFAIL), + order); + if (pages[i]) + break; + if (!order--) { + ret = -ENOMEM; + trace_qaic_mem_err_1(qdev, "Kernel ran out of free pages", + "Memory requested in byte", + ret, nr_pages); + goto free_partial_alloc; + } + } + + max_order = order; + pages_order[i] = order; + + nr_pages -= 1 << order; + if (nr_pages <= 0) + /* account for over allocation */ + buf_extra += abs(nr_pages) * PAGE_SIZE; + i++; + } + + sgt = kmalloc(sizeof(*sgt), GFP_KERNEL); + if (!sgt) { + ret = -ENOMEM; + goto free_partial_alloc; + } + + if (sg_alloc_table(sgt, i, GFP_KERNEL)) { + ret = -ENOMEM; + goto free_sgt; + } + + /* Populate the SG table with the allocate memory pages */ + sg = sgt->sgl; + for (k = 0; k < i; k++, sg = sg_next(sg)) { + /* Last entry requires special handling */ + if (k < i - 1) { + sg_set_page(sg, pages[k], PAGE_SIZE << pages_order[k], 0); + } else { + sg_set_page(sg, pages[k], + (PAGE_SIZE << pages_order[k]) - buf_extra, 0); + sg_mark_end(sg); + } + + ret = reserve_pages(page_to_pfn(pages[k]), DIV_ROUND_UP(sg->length, PAGE_SIZE), + true); + if (ret) + goto clear_pages; + } + + kvfree(pages); + *sgt_out = sgt; + return ret; + +clear_pages: + for (j = 0; j < k; j++) + ret = reserve_pages(page_to_pfn(pages[j]), 1 << pages_order[j], + false); + sg_free_table(sgt); +free_sgt: + kfree(sgt); +free_partial_alloc: + for (j = 0; j < i; j++) + __free_pages(pages[j], pages_order[j]); + kvfree(pages); +out: + *sgt_out = NULL; + return ret; +} + +static bool invalid_sem(struct qaic_sem *sem) +{ + if (sem->val & ~SEM_VAL_MASK || sem->index & ~SEM_INDEX_MASK || + !(sem->presync == 0 || sem->presync == 1) || sem->pad || + sem->flags & ~(SEM_INSYNCFENCE | SEM_OUTSYNCFENCE) || + sem->cmd > SEM_WAIT_GT_0) + return true; + return false; +} + +static int qaic_validate_req(struct qaic_device *qdev, + struct qaic_attach_slice_entry *slice_ent, + u32 count, u64 total_size) +{ + int i; + + for (i = 0; i < count; i++) { + if (!(slice_ent[i].db_len == 32 || 
slice_ent[i].db_len == 16 || + slice_ent[i].db_len == 8 || slice_ent[i].db_len == 0) || + invalid_sem(&slice_ent[i].sem0) || + invalid_sem(&slice_ent[i].sem1) || + invalid_sem(&slice_ent[i].sem2) || + invalid_sem(&slice_ent[i].sem3)) { + trace_qaic_mem_err(qdev, "Invalid semaphore or doorbell len", -EINVAL); + return -EINVAL; + } + if (slice_ent[i].offset + slice_ent[i].size > total_size) { + trace_qaic_mem_err_1(qdev, "Invalid size of buffer slice", "Slice size", + -EINVAL, slice_ent[i].size); + trace_qaic_mem_err_2(qdev, "", "offset", "buffer slice size", + -EINVAL, slice_ent[i].offset, total_size); + return -EINVAL; + } + } + + return 0; +} + +static void qaic_free_sgt(struct sg_table *sgt) +{ + struct scatterlist *sg; + + for (sg = sgt->sgl; sg; sg = sg_next(sg)) + if (sg_page(sg)) { + reserve_pages(page_to_pfn(sg_page(sg)), + DIV_ROUND_UP(sg->length, PAGE_SIZE), false); + __free_pages(sg_page(sg), get_order(sg->length)); + } + sg_free_table(sgt); + kfree(sgt); +} + +static void qaic_gem_print_info(struct drm_printer *p, unsigned int indent, + const struct drm_gem_object *obj) +{ + struct qaic_bo *bo = to_qaic_bo(obj); + + drm_printf_indent(p, indent, "user requested size=%llu\n", bo->size); +} + +static const struct vm_operations_struct drm_vm_ops = { + .open = drm_gem_vm_open, + .close = drm_gem_vm_close, +}; + +static int qaic_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) +{ + struct qaic_bo *bo = to_qaic_bo(obj); + unsigned long offset = 0; + struct scatterlist *sg; + int ret; + + if (obj->import_attach) { + trace_qaic_mmap_err(bo->dbc->qdev, "mmap is not supported for import/PRIME buffers", ret); + return -EINVAL; + } + + for (sg = bo->sgt->sgl; sg; sg = sg_next(sg)) { + if (sg_page(sg)) { + ret = remap_pfn_range(vma, vma->vm_start + offset, + page_to_pfn(sg_page(sg)), + sg->length, vma->vm_page_prot); + if (ret) + goto out; + offset += sg->length; + } + } + +out: + return ret; +} + +static void qaic_free_object(struct 
drm_gem_object *obj)
+{
+	struct qaic_bo *bo = to_qaic_bo(obj);
+
+	if (obj->import_attach) {
+		/* DMABUF/PRIME path */
+		dma_buf_detach(obj->import_attach->dmabuf, obj->import_attach);
+		dma_buf_put(obj->import_attach->dmabuf);
+	} else {
+		/* private buffer allocation path */
+		qaic_free_sgt(bo->sgt);
+	}
+
+	drm_gem_object_release(obj);
+	kfree(bo);
+}
+
+static const struct drm_gem_object_funcs qaic_gem_funcs = {
+	.free = qaic_free_object,
+	.print_info = qaic_gem_print_info,
+	.mmap = qaic_gem_object_mmap,
+	.vm_ops = &drm_vm_ops,
+};
+
+static struct qaic_bo *qaic_alloc_init_bo(void)
+{
+	struct qaic_bo *bo;
+
+	bo = kzalloc(sizeof(*bo), GFP_KERNEL);
+	if (!bo)
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&bo->slices);
+	init_completion(&bo->xfer_done);
+	complete_all(&bo->xfer_done);
+
+	return bo;
+}
+
+int qaic_create_bo_ioctl(struct drm_device *dev, void *data,
+			 struct drm_file *file_priv)
+{
+	struct qaic_create_bo *args = data;
+	int usr_rcu_id, qdev_rcu_id;
+	struct drm_gem_object *obj;
+	struct qaic_device *qdev;
+	struct qaic_user *usr;
+	struct qaic_bo *bo;
+	size_t size;
+	int ret;
+
+	usr = file_priv->driver_priv;
+	usr_rcu_id = srcu_read_lock(&usr->qddev_lock);
+	if (!usr->qddev) {
+		ret = -ENODEV;
+		goto unlock_usr_srcu;
+	}
+
+	qdev = usr->qddev->qdev;
+	qdev_rcu_id = srcu_read_lock(&qdev->dev_lock);
+	if (qdev->in_reset) {
+		ret = -ENODEV;
+		trace_qaic_mem_err(qdev, "Failed to acquire device RCU lock", ret);
+		goto unlock_dev_srcu;
+	}
+
+	size = PAGE_ALIGN(args->size);
+	if (size == 0) {
+		ret = -EINVAL;
+		trace_qaic_mem_err_1(qdev, "Failed to PAGE_ALIGN for given buffer size",
+				     "buffer size(B)", ret, args->size);
+		goto unlock_dev_srcu;
+	}
+
+	bo = qaic_alloc_init_bo();
+	if (IS_ERR(bo)) {
+		ret = PTR_ERR(bo);
+		trace_qaic_mem_err(qdev, "Failed to allocate/init BO", ret);
+		goto
unlock_dev_srcu; + } + obj = &bo->base; + + drm_gem_private_object_init(dev, obj, size); + + obj->funcs = &qaic_gem_funcs; + ret = create_sgt(qdev, &bo->sgt, size); + if (ret) { + trace_qaic_mem_err(qdev, "Failed to Create SGT", ret); + goto free_bo; + } + + bo->size = args->size; + + ret = drm_gem_handle_create(file_priv, obj, &args->handle); + if (ret) { + trace_qaic_mem_err(qdev, "Failed to Create SGT", ret); + goto free_sgt; + } + + bo->handle = args->handle; + drm_gem_object_put(obj); + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + + return 0; + +free_sgt: + qaic_free_sgt(bo->sgt); +free_bo: + kfree(bo); +unlock_dev_srcu: + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); +unlock_usr_srcu: + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return ret; +} + +int qaic_mmap_bo_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv) +{ + struct qaic_mmap_bo *args = data; + int usr_rcu_id, qdev_rcu_id; + struct drm_gem_object *obj; + struct qaic_device *qdev; + struct qaic_user *usr; + int ret; + + usr = file_priv->driver_priv; + usr_rcu_id = srcu_read_lock(&usr->qddev_lock); + if (!usr->qddev) { + ret = -ENODEV; + trace_qaic_mmap_err(qdev, "Failed to acquire user RCU lock", ret); + goto unlock_usr_srcu; + } + + qdev = usr->qddev->qdev; + qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + ret = -ENODEV; + trace_qaic_mmap_err(qdev, "Failed to acquire device RCU lock", ret); + goto unlock_dev_srcu; + } + + obj = drm_gem_object_lookup(file_priv, args->handle); + if (!obj) { + ret = -ENOENT; + trace_qaic_mmap_err_1(qdev, "Invalid BO handle passed", "BO handle", + ret, args->handle); + goto unlock_dev_srcu; + } + + ret = drm_gem_create_mmap_offset(obj); + if (ret == 0) + args->offset = drm_vma_node_offset_addr(&obj->vma_node); + + drm_gem_object_put(obj); + +unlock_dev_srcu: + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); +unlock_usr_srcu: + srcu_read_unlock(&usr->qddev_lock, 
usr_rcu_id);
+	return ret;
+}
+
+struct drm_gem_object *qaic_gem_prime_import(struct drm_device *dev,
+					     struct dma_buf *dma_buf)
+{
+	struct dma_buf_attachment *attach;
+	struct drm_gem_object *obj;
+	struct qaic_bo *bo;
+	size_t size;
+	int ret;
+
+	bo = qaic_alloc_init_bo();
+	if (IS_ERR(bo)) {
+		ret = PTR_ERR(bo);
+		goto out;
+	}
+
+	obj = &bo->base;
+	get_dma_buf(dma_buf);
+
+	attach = dma_buf_attach(dma_buf, dev->dev);
+	if (IS_ERR(attach)) {
+		ret = PTR_ERR(attach);
+		goto attach_fail;
+	}
+
+	size = PAGE_ALIGN(attach->dmabuf->size);
+	if (size == 0) {
+		ret = -EINVAL;
+		goto size_align_fail;
+	}
+
+	drm_gem_private_object_init(dev, obj, size);
+	/*
+	 * Skip dma_buf_map_attachment() here since the DMA direction is not
+	 * known yet. Once the direction is known, in the subsequent ioctl to
+	 * attach slicing, the mapping can be done there.
+ */ + + obj->funcs = &qaic_gem_funcs; + obj->import_attach = attach; + obj->resv = dma_buf->resv; + + return obj; + +size_align_fail: + dma_buf_detach(dma_buf, attach); +attach_fail: + dma_buf_put(dma_buf); + kfree(bo); +out: + return ERR_PTR(ret); +} + +static int qaic_prepare_import_bo(struct qaic_bo *bo, + struct qaic_attach_slice_hdr *hdr) +{ + struct drm_gem_object *obj = &bo->base; + struct sg_table *sgt; + int ret; + + if (obj->import_attach->dmabuf->size < hdr->size) { + trace_qaic_attach_err_2(bo->dbc->qdev, "Invalid import/PRIME buffer size", + "DMABUF size", "Requested buffer size", + ret, obj->import_attach->dmabuf->size, + hdr->size); + return -EINVAL; + } + + sgt = dma_buf_map_attachment(obj->import_attach, hdr->dir); + if (IS_ERR(sgt)) { + ret = PTR_ERR(sgt); + trace_qaic_attach_err(bo->dbc->qdev, "DMABUF map attachment failed", ret); + return ret; + } + + bo->sgt = sgt; + bo->size = hdr->size; + + return 0; +} + +static int qaic_prepare_export_bo(struct qaic_device *qdev, struct qaic_bo *bo, + struct qaic_attach_slice_hdr *hdr) +{ + int ret; + + if (bo->size != hdr->size) { + trace_qaic_attach_err_2(qdev, "Invalid export buffer size", + "DMABUF size", "Requested buffer size", + -EINVAL, bo->size, hdr->size); + return -EINVAL; + } + + ret = dma_map_sgtable(&qdev->pdev->dev, bo->sgt, hdr->dir, 0); + if (ret) { + trace_qaic_attach_err(qdev, "DMA map sgtable failed", ret); + return -EFAULT; + } + + return 0; +} + +static int qaic_prepare_bo(struct qaic_device *qdev, struct qaic_bo *bo, + struct qaic_attach_slice_hdr *hdr) +{ + int ret; + + if (bo->base.import_attach) + ret = qaic_prepare_import_bo(bo, hdr); + else + ret = qaic_prepare_export_bo(qdev, bo, hdr); + + if (ret == 0) + bo->dir = hdr->dir; + + return ret; +} + +static void qaic_unprepare_import_bo(struct qaic_bo *bo) +{ + dma_buf_unmap_attachment(bo->base.import_attach, bo->sgt, bo->dir); + bo->sgt = NULL; + bo->size = 0; +} + +static void qaic_unprepare_export_bo(struct qaic_device *qdev, 
struct qaic_bo *bo) +{ + dma_unmap_sgtable(&qdev->pdev->dev, bo->sgt, bo->dir, 0); +} + +static void qaic_unprepare_bo(struct qaic_device *qdev, struct qaic_bo *bo) +{ + if (bo->base.import_attach) + qaic_unprepare_import_bo(bo); + else + qaic_unprepare_export_bo(qdev, bo); + + bo->dir = 0; +} + +static void qaic_free_slices_bo(struct qaic_bo *bo) +{ + struct bo_slice *slice, *temp; + + list_for_each_entry_safe(slice, temp, &bo->slices, slice) { + kref_put(&slice->ref_count, free_slice); + } +} + +static int qaic_attach_slicing_bo(struct qaic_device *qdev, + struct qaic_bo *bo, + struct qaic_attach_slice_hdr *hdr, + struct qaic_attach_slice_entry *slice_ent) +{ + int ret, i; + + for (i = 0; i < hdr->count; i++) { + ret = qaic_map_one_slice(qdev, bo, &slice_ent[i]); + if (ret) { + qaic_free_slices_bo(bo); + return ret; + } + } + + if (bo->total_slice_nents > qdev->dbc[hdr->dbc_id].nelem) { + trace_qaic_attach_err(qdev, "DMA map sg failed", ret); + qaic_free_slices_bo(bo); + return -ENOSPC; + } + + bo->sliced = true; + bo->nr_slice = hdr->count; + list_add_tail(&bo->bo_list, &qdev->dbc[hdr->dbc_id].bo_lists); + + return 0; +} + +int qaic_attach_slice_bo_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv) +{ + struct qaic_attach_slice_entry *slice_ent; + struct qaic_attach_slice *args = data; + struct dma_bridge_chan *dbc; + int usr_rcu_id, qdev_rcu_id; + struct drm_gem_object *obj; + struct qaic_device *qdev; + unsigned long arg_size; + struct qaic_user *usr; + u8 __user *user_data; + struct qaic_bo *bo; + int ret; + + usr = file_priv->driver_priv; + usr_rcu_id = srcu_read_lock(&usr->qddev_lock); + if (!usr->qddev) { + ret = -ENODEV; + trace_qaic_attach_err(qdev, "Failed to acquire user RCU lock", ret); + goto unlock_usr_srcu; + } + + qdev = usr->qddev->qdev; + qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + ret = -ENODEV; + trace_qaic_attach_err(qdev, "Failed to acquire device RCU lock", ret); + goto unlock_dev_srcu; + } 
+ + if (args->hdr.count == 0) { + ret = -EINVAL; + trace_qaic_attach_err(qdev, "Invalid slice count 0", ret); + goto unlock_dev_srcu; + } + + arg_size = args->hdr.count * sizeof(*slice_ent); + if (arg_size / args->hdr.count != sizeof(*slice_ent)) { + ret = -EINVAL; + trace_qaic_attach_err_1(qdev, "Invalid slice count", + "Slice count", ret, args->hdr.count); + goto unlock_dev_srcu; + } + + if (args->hdr.dbc_id >= qdev->num_dbc) { + ret = -EINVAL; + trace_qaic_attach_err_1(qdev, "Invalid DBC ID", "DBC ID", ret, + args->hdr.dbc_id); + goto unlock_dev_srcu; + } + + if (args->hdr.size == 0) { + ret = -EINVAL; + trace_qaic_attach_err(qdev, "Invalid BO size 0", ret); + goto unlock_dev_srcu; + } + + if (!(args->hdr.dir == DMA_TO_DEVICE || + args->hdr.dir == DMA_FROM_DEVICE)) { + ret = -EINVAL; + trace_qaic_attach_err_1(qdev, "Invalid DMA direction", + "DMA directions", ret, args->hdr.dir); + goto unlock_dev_srcu; + } + + dbc = &qdev->dbc[args->hdr.dbc_id]; + if (dbc->usr != usr) { + ret = -EINVAL; + trace_qaic_attach_err_1(qdev, "User handle mismatch", "DBC ID", + ret, args->hdr.dbc_id); + goto unlock_dev_srcu; + } + + if (args->data == 0) { + ret = -EINVAL; + trace_qaic_attach_err(qdev, "Invalid data pointer (NULL).", ret); + goto unlock_dev_srcu; + } + + user_data = u64_to_user_ptr(args->data); + + slice_ent = kzalloc(arg_size, GFP_KERNEL); + if (!slice_ent) { + ret = -EINVAL; + trace_qaic_attach_err_1(qdev, "Failed to allocate memory for slice entries", + "Number of slice", ret, args->hdr.count); + goto unlock_dev_srcu; + } + + ret = copy_from_user(slice_ent, user_data, arg_size); + if (ret) { + ret = -EFAULT; + trace_qaic_attach_err(qdev, "Failed to copy data from user to kernel", ret); + goto free_slice_ent; + } + + ret = qaic_validate_req(qdev, slice_ent, args->hdr.count, args->hdr.size); + if (ret) + goto free_slice_ent; + + obj = drm_gem_object_lookup(file_priv, args->hdr.handle); + if (!obj) { + trace_qaic_attach_err_1(qdev, "Invalid BO handle", "BO handle", + 
ret, args->hdr.handle);
+		ret = -ENOENT;
+		goto free_slice_ent;
+	}
+
+	bo = to_qaic_bo(obj);
+
+	ret = qaic_prepare_bo(qdev, bo, &args->hdr);
+	if (ret)
+		goto put_bo;
+
+	ret = qaic_attach_slicing_bo(qdev, bo, &args->hdr, slice_ent);
+	if (ret)
+		goto unprepare_bo;
+
+	if (args->hdr.dir == DMA_TO_DEVICE)
+		dma_sync_sgtable_for_cpu(&qdev->pdev->dev, bo->sgt, args->hdr.dir);
+
+	bo->dbc = dbc;
+	drm_gem_object_put(obj);
+	srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id);
+	srcu_read_unlock(&usr->qddev_lock, usr_rcu_id);
+
+	return 0;
+
+unprepare_bo:
+	qaic_unprepare_bo(qdev, bo);
+put_bo:
+	drm_gem_object_put(obj);
+free_slice_ent:
+	kfree(slice_ent);
+unlock_dev_srcu:
+	srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id);
+unlock_usr_srcu:
+	srcu_read_unlock(&usr->qddev_lock, usr_rcu_id);
+	return ret;
+}
+
+static inline int copy_exec_reqs(struct qaic_device *qdev,
+				 struct bo_slice *slice, u32 dbc_id, u32 head,
+				 u32 *ptail)
+{
+	struct dma_bridge_chan *dbc = &qdev->dbc[dbc_id];
+	struct dbc_req *reqs = slice->reqs;
+	u32 tail = *ptail;
+	u32 avail;
+
+	avail = head - tail;
+	if (head <= tail)
+		avail += dbc->nelem;
+
+	--avail;
+
+	if (avail < slice->nents) {
+		trace_qaic_exec_err_2(qdev, "Not enough resources to execute this BO slice",
+				      "resource available", "resource needed",
+				      -EAGAIN, avail, slice->nents);
+		return -EAGAIN;
+	}
+
+	if (tail + slice->nents > dbc->nelem) {
+		avail = dbc->nelem - tail;
+		avail = min_t(u32, avail, slice->nents);
+		memcpy(dbc->req_q_base + tail * get_dbc_req_elem_size(),
+		       reqs, sizeof(*reqs) * avail);
+		reqs += avail;
+		avail = slice->nents - avail;
+		if (avail)
+			memcpy(dbc->req_q_base, reqs, sizeof(*reqs) * avail);
+	} else {
+		memcpy(dbc->req_q_base + tail * get_dbc_req_elem_size(),
+		       reqs, sizeof(*reqs) * slice->nents);
+	}
+
+	*ptail = (tail + slice->nents) % dbc->nelem;
+
+	return 0;
+}
+
+/*
+ * Based on the value of resize we may only need to transmit first_n
+ * entries and the last entry, with last_bytes to send from the
last entry. + * Note that first_n could be 0. + */ +static inline int copy_partial_exec_reqs(struct qaic_device *qdev, + struct bo_slice *slice, + u64 resize, u32 dbc_id, + u32 head, u32 *ptail) +{ + struct dma_bridge_chan *dbc = &qdev->dbc[dbc_id]; + struct dbc_req *reqs = slice->reqs; + struct dbc_req *last_req; + u32 tail = *ptail; + u64 total_bytes; + u64 last_bytes; + u32 first_n; + u32 avail; + int ret; + int i; + + avail = head - tail; + if (head <= tail) + avail += dbc->nelem; + + --avail; + + total_bytes = 0; + for (i = 0; i < slice->nents; i++) { + total_bytes += le32_to_cpu(reqs[i].len); + if (total_bytes >= resize) + break; + } + + if (total_bytes < resize) { + /* User space should have used the full buffer path. */ + ret = -EINVAL; + trace_qaic_exec_err_2(qdev, "Resize too big for partial buffer", + "partial/full size of BO slice", + "slice resize", ret, total_bytes, resize); + return ret; + } + + first_n = i; + last_bytes = i ? resize + le32_to_cpu(reqs[i].len) - total_bytes : resize; + + if (avail < (first_n + 1)) { + trace_qaic_exec_err_2(qdev, "Not enough resources to execute this BO slice", + "resource available", "resource needed", + -EAGAIN, avail, first_n + 1); + return -EAGAIN; + } + + if (first_n) { + if (tail + first_n > dbc->nelem) { + avail = dbc->nelem - tail; + avail = min_t(u32, avail, first_n); + memcpy(dbc->req_q_base + tail * get_dbc_req_elem_size(), + reqs, sizeof(*reqs) * avail); + last_req = reqs + avail; + avail = first_n - avail; + if (avail) + memcpy(dbc->req_q_base, last_req, + sizeof(*reqs) * avail); + } else { + memcpy(dbc->req_q_base + tail * get_dbc_req_elem_size(), + reqs, sizeof(*reqs) * first_n); + } + } + + /* Copy over the last entry. Here we need to adjust len to the left over + * size, and set src and dst to the entry it is copied to. 
+ */ + last_req = dbc->req_q_base + + (tail + first_n) % dbc->nelem * get_dbc_req_elem_size(); + memcpy(last_req, reqs + slice->nents - 1, sizeof(*reqs)); + + /* + * last_bytes holds size of a DMA segment, maximum DMA segment size is + * set to UINT_MAX by qaic and hence last_bytes can never exceed u32 + * range. So, by down sizing we are not corrupting the value. + */ + last_req->len = cpu_to_le32((u32)last_bytes); + last_req->src_addr = reqs[first_n].src_addr; + last_req->dest_addr = reqs[first_n].dest_addr; + + *ptail = (tail + first_n + 1) % dbc->nelem; + + return 0; +} + +static int __qaic_execute_bo_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv, bool is_partial) +{ + struct qaic_partial_execute_entry *pexec; + struct qaic_execute *args = data; + struct qaic_execute_entry *exec; + struct dma_bridge_chan *dbc; + int usr_rcu_id, qdev_rcu_id; + struct drm_gem_object *obj; + struct qaic_device *qdev; + struct bo_slice *slice; + struct qaic_user *usr; + u8 __user *user_data; + unsigned long flags; + u64 received_ts = 0; + u32 queue_level = 0; + struct qaic_bo *bo; + u64 submit_ts = 0; + unsigned long n; + bool queued; + int ret = 0; + int dbc_id; + int rcu_id; + u32 head; + u32 tail; + u64 size; + int i, j; + + received_ts = ktime_get_ns(); + + usr = file_priv->driver_priv; + usr_rcu_id = srcu_read_lock(&usr->qddev_lock); + if (!usr->qddev) { + ret = -ENODEV; + trace_qaic_exec_err(qdev, "Failed to acquire user RCU lock", ret); + goto unlock_usr_srcu; + } + + qdev = usr->qddev->qdev; + qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + ret = -ENODEV; + trace_qaic_exec_err(qdev, "Failed to acquire device RCU lock", ret); + goto unlock_dev_srcu; + } + + if (args->hdr.dbc_id >= qdev->num_dbc) { + ret = -EINVAL; + trace_qaic_exec_err_1(qdev, "Invalid DBC ID", "DBC ID", ret, args->hdr.dbc_id); + goto unlock_dev_srcu; + } + + dbc_id = args->hdr.dbc_id; + dbc = &qdev->dbc[dbc_id]; + + size = is_partial ? 
sizeof(*pexec) : sizeof(*exec);
+
+	n = (unsigned long)size * args->hdr.count;
+	if (args->hdr.count == 0 || n / args->hdr.count != size) {
+		ret = -EINVAL;
+		trace_qaic_exec_err_1(qdev, "Invalid number of execute requests",
+				      "execute count", ret, args->hdr.count);
+		goto unlock_dev_srcu;
+	}
+
+	user_data = u64_to_user_ptr(args->data);
+
+	exec = kcalloc(args->hdr.count, size, GFP_KERNEL);
+	pexec = (struct qaic_partial_execute_entry *)exec;
+	if (!exec) {
+		ret = -ENOMEM;
+		trace_qaic_exec_err_1(qdev, "Failed to allocate execute entry structure",
+				      "execute count", ret, args->hdr.count);
+		goto unlock_dev_srcu;
+	}
+
+	if (copy_from_user(exec, user_data, n)) {
+		ret = -EFAULT;
+		trace_qaic_exec_err(qdev, "Failed to copy data from user to kernel", ret);
+		goto free_exec;
+	}
+
+	rcu_id = srcu_read_lock(&dbc->ch_lock);
+	if (!dbc->usr || dbc->usr->handle != usr->handle) {
+		ret = -EPERM;
+		trace_qaic_exec_err_1(qdev, "User handle mismatch", "DBC ID", ret, dbc_id);
+		goto release_ch_rcu;
+	}
+
+	if (dbc->in_ssr) {
+		ret = -EPIPE;
+		trace_qaic_exec_err(qdev, "In SSR", ret);
+		goto release_ch_rcu;
+	}
+
+	head = readl(dbc->dbc_base + REQHP_OFF);
+	tail = readl(dbc->dbc_base + REQTP_OFF);
+
+	if (head == U32_MAX || tail == U32_MAX) {
+		/* PCI link error */
+		ret = -ENODEV;
+		trace_qaic_exec_err(qdev, "Failed to read HW head pointer and tail pointer", ret);
+		goto release_ch_rcu;
+	}
+
+	queue_level = head <= tail ? tail - head : dbc->nelem - (head - tail);
+
+	for (i = 0; i < args->hdr.count; i++) {
+		/*
+		 * The ref count will be decremented when the transfer of this
+		 * buffer is complete, inside dbc_irq_threaded_fn().
+		 */
+		obj = drm_gem_object_lookup(file_priv,
+					    is_partial ? pexec[i].handle : exec[i].handle);
+		if (!obj) {
+			ret = -ENOENT;
+			trace_qaic_exec_err_2(qdev, "Invalid BO handle provided",
+					      "BO handle", "execute index",
+					      ret, is_partial ?
pexec[i].handle :
+					      exec[i].handle, i);
+			goto sync_to_cpu;
+		}
+
+		bo = to_qaic_bo(obj);
+
+		if (!bo->sliced) {
+			ret = -EINVAL;
+			trace_qaic_exec_err_1(qdev, "Slicing information is not attached to BO",
+					      "BO Handle", ret, bo->handle);
+			goto sync_to_cpu;
+		}
+
+		if (is_partial && pexec[i].resize > bo->size) {
+			ret = -EINVAL;
+			trace_qaic_exec_err_2(qdev, "Resize value too large for partial execute IOCTL",
+					      "BO size", "Resize",
+					      ret, bo->size, pexec[i].resize);
+			goto sync_to_cpu;
+		}
+
+		spin_lock_irqsave(&dbc->xfer_lock, flags);
+		queued = bo->queued;
+		bo->queued = true;
+		if (queued) {
+			spin_unlock_irqrestore(&dbc->xfer_lock, flags);
+			ret = -EINVAL;
+			trace_qaic_exec_err_1(qdev, "BO is already queued",
+					      "BO handle", ret, bo->handle);
+			goto sync_to_cpu;
+		}
+
+		bo->req_id = dbc->next_req_id++;
+
+		list_for_each_entry(slice, &bo->slices, slice) {
+			/*
+			 * If this slice does not fall within the given resize,
+			 * skip this slice and continue the loop.
+			 */
+			if (is_partial && pexec[i].resize &&
+			    pexec[i].resize <= slice->offset)
+				continue;
+
+			for (j = 0; j < slice->nents; j++)
+				slice->reqs[j].req_id = cpu_to_le16(bo->req_id);
+
+			/*
+			 * If this is a partial execute ioctl call, check if
+			 * resize has cut this slice short. If so, do a partial
+			 * copy, else do a complete copy.
+			 */
+			if (is_partial && pexec[i].resize &&
+			    pexec[i].resize < slice->offset + slice->size)
+				ret = copy_partial_exec_reqs(qdev, slice,
+							     pexec[i].resize - slice->offset,
+							     dbc_id, head, &tail);
+			else
+				ret = copy_exec_reqs(qdev, slice, dbc_id, head, &tail);
+			if (ret) {
+				bo->queued = false;
+				spin_unlock_irqrestore(&dbc->xfer_lock, flags);
+				goto sync_to_cpu;
+			}
+		}
+		reinit_completion(&bo->xfer_done);
+		list_add_tail(&bo->xfer_list, &dbc->xfer_list);
+		spin_unlock_irqrestore(&dbc->xfer_lock, flags);
+		dma_sync_sgtable_for_device(&qdev->pdev->dev, bo->sgt, bo->dir);
+	}
+
+	submit_ts = ktime_get_ns();
+	writel(tail, dbc->dbc_base + REQTP_OFF);
+
+	/* Collect kernel profiling
data */ + for (i = 0; i < args->hdr.count; i++) { + /* + * Since we already committed the BO to hardware, the only way + * this should fail is a pending signal. We can't cancel the + * submit to hardware, so we have to just skip the profiling + * data. In case the signal is not fatal to the process, we + * return success so that the user doesn't try to resubmit. + */ + obj = drm_gem_object_lookup(file_priv, + is_partial ? pexec[i].handle : exec[i].handle); + if (!obj) { + trace_qaic_exec_err_2(qdev, "Invalid BO handle provided", + "BO handle", "execute index", + ret, is_partial ? pexec[i].handle : + exec[i].handle, i); + break; + } + bo = to_qaic_bo(obj); + bo->perf_stats.req_received_ts = received_ts; + bo->perf_stats.req_submit_ts = submit_ts; + bo->perf_stats.queue_level_before = queue_level; + queue_level += bo->total_slice_nents; + drm_gem_object_put(obj); + } + + if (poll_datapath) + schedule_work(&dbc->poll_work); + + goto release_ch_rcu; + +sync_to_cpu: + if (likely(obj)) + drm_gem_object_put(obj); + for (j = 0; j < i; j++) { + spin_lock_irqsave(&dbc->xfer_lock, flags); + bo = list_last_entry(&dbc->xfer_list, struct qaic_bo, + xfer_list); + obj = &bo->base; + bo->queued = false; + list_del(&bo->xfer_list); + spin_unlock_irqrestore(&dbc->xfer_lock, flags); + dma_sync_sgtable_for_cpu(&qdev->pdev->dev, bo->sgt, bo->dir); + /* Release ref to BO */ + drm_gem_object_put(obj); + } +release_ch_rcu: + srcu_read_unlock(&dbc->ch_lock, rcu_id); +free_exec: + kfree(exec); +unlock_dev_srcu: + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); +unlock_usr_srcu: + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return ret; +} + +int qaic_execute_bo_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv) +{ + return __qaic_execute_bo_ioctl(dev, data, file_priv, false); +} + +int qaic_partial_execute_bo_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv) +{ + return __qaic_execute_bo_ioctl(dev, data, file_priv, true); +} + +/* + * Our 
interrupt handling is a bit more complicated than the ideal, but + * sadly necessary. + * + * Each dbc has a completion queue. Entries in the queue correspond to DMA + * requests which the device has processed. The hardware already has built-in + * irq mitigation. When the device puts an entry into the queue, it will + * only trigger an interrupt if the queue was empty. Therefore, when adding + * the Nth event to a non-empty queue, the hardware doesn't trigger an + * interrupt. This means the host doesn't get additional interrupts signaling + * the same thing - the queue has something to process. + * This behavior can be overridden in the DMA request. + * This means that when the host receives an interrupt, it is required to + * drain the queue. + * + * This behavior is what NAPI attempts to accomplish, although we can't use + * NAPI as we don't have a netdev. We use threaded irqs instead. + * + * However, there is a situation where the host drains the queue fast enough + * that every event causes an interrupt. Typically this is not a problem as + * the rate of events would be low. However, that is not the case with + * lprnet for example. On an Intel Xeon D-2191 where we run 8 instances of + * lprnet, the host receives roughly 80k interrupts per second from the device + * (per /proc/interrupts). While NAPI documentation indicates the host should + * just chug along, sadly that behavior causes instability in some hosts. + * + * Therefore, we implement an interrupt disable scheme similar to NAPI. The + * key difference is that we will delay after draining the queue for a small + * time to allow additional events to come in via polling. Using the above + * lprnet workload, this reduces the number of interrupts processed from + * ~80k/sec to about 64 in 5 minutes and appears to solve the system + * instability.
+ */ +irqreturn_t dbc_irq_handler(int irq, void *data) +{ + struct dma_bridge_chan *dbc = data; + int rcu_id; + u32 head; + u32 tail; + + rcu_id = srcu_read_lock(&dbc->ch_lock); + + if (!dbc->usr) { + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return IRQ_HANDLED; + } + + head = readl(dbc->dbc_base + RSPHP_OFF); + if (head == U32_MAX) { /* PCI link error */ + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return IRQ_NONE; + } + + tail = readl(dbc->dbc_base + RSPTP_OFF); + if (tail == U32_MAX) { /* PCI link error */ + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return IRQ_NONE; + } + + if (head == tail) { /* queue empty */ + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return IRQ_NONE; + } + + disable_irq_nosync(irq); + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return IRQ_WAKE_THREAD; +} + +void irq_polling_work(struct work_struct *work) +{ + struct dma_bridge_chan *dbc = container_of(work, + struct dma_bridge_chan, + poll_work); + unsigned long flags; + int rcu_id; + u32 head; + u32 tail; + + rcu_id = srcu_read_lock(&dbc->ch_lock); + + while (1) { + if (dbc->qdev->in_reset) { + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return; + } + if (!dbc->usr) { + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return; + } + spin_lock_irqsave(&dbc->xfer_lock, flags); + if (list_empty(&dbc->xfer_list)) { + spin_unlock_irqrestore(&dbc->xfer_lock, flags); + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return; + } + spin_unlock_irqrestore(&dbc->xfer_lock, flags); + + head = readl(dbc->dbc_base + RSPHP_OFF); + if (head == U32_MAX) { /* PCI link error */ + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return; + } + + tail = readl(dbc->dbc_base + RSPTP_OFF); + if (tail == U32_MAX) { /* PCI link error */ + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return; + } + + if (head != tail) { + irq_wake_thread(dbc->irq, dbc); + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return; + } + + cond_resched(); + usleep_range(datapath_poll_interval_us, + 2 * datapath_poll_interval_us); + } +} + +irqreturn_t 
dbc_irq_threaded_fn(int irq, void *data) +{ + struct dma_bridge_chan *dbc = data; + int event_count = NUM_EVENTS; + int delay_count = NUM_DELAYS; + struct qaic_device *qdev; + struct qaic_bo *bo, *i; + struct dbc_rsp *rsp; + unsigned long flags; + int rcu_id; + u16 status; + u16 req_id; + u32 head; + u32 tail; + + rcu_id = srcu_read_lock(&dbc->ch_lock); + + head = readl(dbc->dbc_base + RSPHP_OFF); + if (head == U32_MAX) /* PCI link error */ + goto error_out; + + qdev = dbc->qdev; +read_fifo: + + if (!event_count) { + event_count = NUM_EVENTS; + cond_resched(); + } + + /* + * if this channel isn't assigned or gets unassigned during processing + * we have nothing further to do + */ + if (!dbc->usr) + goto error_out; + + tail = readl(dbc->dbc_base + RSPTP_OFF); + if (tail == U32_MAX) /* PCI link error */ + goto error_out; + + if (head == tail) { /* queue empty */ + if (delay_count) { + --delay_count; + usleep_range(100, 200); + goto read_fifo; /* check for a new event */ + } + goto normal_out; + } + + delay_count = NUM_DELAYS; + while (head != tail) { + if (!event_count) + break; + --event_count; + rsp = dbc->rsp_q_base + head * sizeof(*rsp); + req_id = le16_to_cpu(rsp->req_id); + status = le16_to_cpu(rsp->status); + if (status) + pci_dbg(qdev->pdev, "req_id %d failed with status %d\n", + req_id, status); + spin_lock_irqsave(&dbc->xfer_lock, flags); + /* + * A BO can receive multiple interrupts, since a BO can be + * divided into multiple slices and a buffer receives as many + * interrupts as slices. So until it receives interrupts for + * all the slices we cannot mark that buffer complete. + */ + list_for_each_entry_safe(bo, i, &dbc->xfer_list, xfer_list) { + if (bo->req_id == req_id) + bo->nr_slice_xfer_done++; + else + continue; + + if (bo->nr_slice_xfer_done < bo->nr_slice) + break; + + /* + * At this point we have received all the interrupts for + * BO, which means BO execution is complete. 
+ */ + dma_sync_sgtable_for_cpu(&qdev->pdev->dev, bo->sgt, bo->dir); + bo->nr_slice_xfer_done = 0; + bo->queued = false; + list_del(&bo->xfer_list); + bo->perf_stats.req_processed_ts = ktime_get_ns(); + complete_all(&bo->xfer_done); + drm_gem_object_put(&bo->base); + break; + } + spin_unlock_irqrestore(&dbc->xfer_lock, flags); + head = (head + 1) % dbc->nelem; + } + + /* + * Update the head pointer of response queue and let the device know + * that we have consumed elements from the queue. + */ + writel(head, dbc->dbc_base + RSPHP_OFF); + + /* elements might have been put in the queue while we were processing */ + goto read_fifo; + +normal_out: + if (likely(!poll_datapath)) + enable_irq(irq); + else + schedule_work(&dbc->poll_work); + /* checking the fifo and enabling irqs is a race, missed event check */ + tail = readl(dbc->dbc_base + RSPTP_OFF); + if (tail != U32_MAX && head != tail) { + if (likely(!poll_datapath)) + disable_irq_nosync(irq); + goto read_fifo; + } + srcu_read_unlock(&dbc->ch_lock, rcu_id); + return IRQ_HANDLED; + +error_out: + srcu_read_unlock(&dbc->ch_lock, rcu_id); + if (likely(!poll_datapath)) + enable_irq(irq); + else + schedule_work(&dbc->poll_work); + + return IRQ_HANDLED; +} + +int qaic_wait_bo_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv) +{ + struct qaic_wait *args = data; + int usr_rcu_id, qdev_rcu_id; + struct dma_bridge_chan *dbc; + struct drm_gem_object *obj; + struct qaic_device *qdev; + unsigned long timeout; + struct qaic_user *usr; + struct qaic_bo *bo; + int rcu_id; + int ret; + + usr = file_priv->driver_priv; + usr_rcu_id = srcu_read_lock(&usr->qddev_lock); + if (!usr->qddev) { + ret = -ENODEV; + trace_qaic_wait_err(qdev, "Failed to acquire user RCU lock", ret); + goto unlock_usr_srcu; + } + + qdev = usr->qddev->qdev; + qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + ret = -ENODEV; + trace_qaic_wait_err(qdev, "Failed to acquire device RCU lock", ret); + goto unlock_dev_srcu; + } 
+ + if (args->pad != 0) { + ret = -EINVAL; + trace_qaic_wait_err(qdev, "Pad value is non-zero", ret); + goto unlock_dev_srcu; + } + + if (args->dbc_id >= qdev->num_dbc) { + ret = -EINVAL; + trace_qaic_wait_err_1(qdev, "Invalid DBC ID", "DBC ID", ret, args->dbc_id); + goto unlock_dev_srcu; + } + + dbc = &qdev->dbc[args->dbc_id]; + + rcu_id = srcu_read_lock(&dbc->ch_lock); + if (dbc->usr != usr) { + ret = -EPERM; + trace_qaic_wait_err_1(qdev, "Mismatch user handle", "DBC ID", ret, args->dbc_id); + goto unlock_ch_srcu; + } + + if (dbc->in_ssr) { + ret = -EPIPE; + trace_qaic_wait_err(qdev, "In SSR", ret); + goto unlock_ch_srcu; + } + + obj = drm_gem_object_lookup(file_priv, args->handle); + if (!obj) { + ret = -ENOENT; + trace_qaic_wait_err_1(qdev, "Invalid BO handle", "handle", ret, args->handle); + goto unlock_ch_srcu; + } + + bo = to_qaic_bo(obj); + timeout = args->timeout ? args->timeout : wait_exec_default_timeout; + timeout = msecs_to_jiffies(timeout); + ret = wait_for_completion_interruptible_timeout(&bo->xfer_done, timeout); + if (!ret) { + ret = -ETIMEDOUT; + trace_qaic_wait_err_1(qdev, "Wait timeout", "timeout", ret, + jiffies_to_msecs(timeout)); + goto put_obj; + } + if (ret > 0) + ret = 0; + + if (!dbc->usr) { + ret = -EPERM; + trace_qaic_wait_err(qdev, "User disappeared", ret); + } else if (dbc->in_ssr) { + /* + * While waiting for this buffer transaction, it is possible + * that SSR was triggered on this DBC. Thus we flushed all + * buffers on this DBC in transfer queue and marked them as + * complete. Therefore, return an error as this buffer + * transaction failed. 
+ */ + ret = -EPIPE; + trace_qaic_wait_err(qdev, "In SSR", ret); + } + +put_obj: + drm_gem_object_put(obj); +unlock_ch_srcu: + srcu_read_unlock(&dbc->ch_lock, rcu_id); +unlock_dev_srcu: + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); +unlock_usr_srcu: + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return ret; +} + +int qaic_perf_stats_bo_ioctl(struct drm_device *dev, void *data, + struct drm_file *file_priv) +{ + struct qaic_perf_stats_entry *ent = NULL; + struct qaic_perf_stats *args = data; + int usr_rcu_id, qdev_rcu_id; + struct drm_gem_object *obj; + struct qaic_device *qdev; + struct qaic_user *usr; + struct qaic_bo *bo; + int ret, i; + + usr = file_priv->driver_priv; + usr_rcu_id = srcu_read_lock(&usr->qddev_lock); + if (!usr->qddev) { + ret = -ENODEV; + trace_qaic_stats_err(qdev, "Failed to acquire user RCU lock", ret); + goto unlock_usr_srcu; + } + + qdev = usr->qddev->qdev; + qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + ret = -ENODEV; + trace_qaic_stats_err(qdev, "Failed to acquire device RCU lock", ret); + goto unlock_dev_srcu; + } + + if (args->hdr.dbc_id >= qdev->num_dbc) { + ret = -EINVAL; + trace_qaic_stats_err_1(qdev, "Invalid DBC ID", "DBC ID", ret, args->hdr.dbc_id); + goto unlock_dev_srcu; + } + + ent = kcalloc(args->hdr.count, sizeof(*ent), GFP_KERNEL); + if (!ent) { + ret = -ENOMEM; + trace_qaic_stats_err_1(qdev, "Failed to allocate memory for perf stats structure", + "query count", ret, args->hdr.count); + goto unlock_dev_srcu; + } + + ret = copy_from_user(ent, u64_to_user_ptr(args->data), + args->hdr.count * sizeof(*ent)); + if (ret) { + ret = -EFAULT; + trace_qaic_stats_err(qdev, "Failed to copy data from user to kernel", ret); + goto free_ent; + } + + for (i = 0; i < args->hdr.count; i++) { + obj = drm_gem_object_lookup(file_priv, ent[i].handle); + if (!obj) { + ret = -ENOENT; + trace_qaic_stats_err_1(qdev, "Invalid BO handle", + "BO handle", ret, ent[i].handle); + goto free_ent; + } + bo = to_qaic_bo(obj);
+ /* + * If the perf stats ioctl is called before the wait ioctl is + * complete, then the latency information is invalid. + */ + if (bo->perf_stats.req_processed_ts < bo->perf_stats.req_submit_ts) { + ent[i].device_latency_us = 0; + } else { + ent[i].device_latency_us = (bo->perf_stats.req_processed_ts - + bo->perf_stats.req_submit_ts) / 1000; + } + ent[i].submit_latency_us = (bo->perf_stats.req_submit_ts - + bo->perf_stats.req_received_ts) / 1000; + ent[i].queue_level_before = bo->perf_stats.queue_level_before; + ent[i].num_queue_element = bo->total_slice_nents; + drm_gem_object_put(obj); + } + + if (copy_to_user(u64_to_user_ptr(args->data), ent, + args->hdr.count * sizeof(*ent))) { + ret = -EFAULT; + trace_qaic_stats_err(qdev, "Failed to copy data to user from kernel", ret); + } + +free_ent: + kfree(ent); +unlock_dev_srcu: + srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); +unlock_usr_srcu: + srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); + return ret; +} + +static void empty_xfer_list(struct qaic_device *qdev, struct dma_bridge_chan *dbc) +{ + unsigned long flags; + struct qaic_bo *bo; + + spin_lock_irqsave(&dbc->xfer_lock, flags); + while (!list_empty(&dbc->xfer_list)) { + bo = list_first_entry(&dbc->xfer_list, typeof(*bo), xfer_list); + bo->queued = false; + list_del(&bo->xfer_list); + spin_unlock_irqrestore(&dbc->xfer_lock, flags); + dma_sync_sgtable_for_cpu(&qdev->pdev->dev, bo->sgt, bo->dir); + complete_all(&bo->xfer_done); + drm_gem_object_put(&bo->base); + spin_lock_irqsave(&dbc->xfer_lock, flags); + } + spin_unlock_irqrestore(&dbc->xfer_lock, flags); +} + +int disable_dbc(struct qaic_device *qdev, u32 dbc_id, struct qaic_user *usr) +{ + if (!qdev->dbc[dbc_id].usr || + qdev->dbc[dbc_id].usr->handle != usr->handle) + return -EPERM; + + qdev->dbc[dbc_id].usr = NULL; + synchronize_srcu(&qdev->dbc[dbc_id].ch_lock); + return 0; +} + +/** + * enable_dbc - Enable the DBC. DBCs are disabled by removing the user + * context. Add the user context back to the DBC to enable it.
This function trusts the + * DBC ID passed and expects the DBC to be disabled. + * @qdev: Qranium device handle + * @dbc_id: ID of the DBC + * @usr: User context + */ +void enable_dbc(struct qaic_device *qdev, u32 dbc_id, struct qaic_user *usr) +{ + qdev->dbc[dbc_id].usr = usr; +} + +void wakeup_dbc(struct qaic_device *qdev, u32 dbc_id) +{ + struct dma_bridge_chan *dbc = &qdev->dbc[dbc_id]; + + dbc->usr = NULL; + empty_xfer_list(qdev, dbc); + synchronize_srcu(&dbc->ch_lock); +} + +void release_dbc(struct qaic_device *qdev, u32 dbc_id, bool set_state) +{ + struct bo_slice *slice, *slice_temp; + struct qaic_bo *bo, *bo_temp; + struct dma_bridge_chan *dbc; + + dbc = &qdev->dbc[dbc_id]; + if (!dbc->in_use) + return; + + wakeup_dbc(qdev, dbc_id); + + dma_free_coherent(&qdev->pdev->dev, dbc->total_size, dbc->req_q_base, + dbc->dma_addr); + dbc->total_size = 0; + dbc->req_q_base = NULL; + dbc->dma_addr = 0; + dbc->nelem = 0; + dbc->usr = NULL; + if (set_state) + set_dbc_state(qdev, dbc_id, DBC_STATE_IDLE); + + list_for_each_entry_safe(bo, bo_temp, &dbc->bo_lists, bo_list) { + list_for_each_entry_safe(slice, slice_temp, &bo->slices, slice) + kref_put(&slice->ref_count, free_slice); + bo->sliced = false; + INIT_LIST_HEAD(&bo->slices); + bo->total_slice_nents = 0; + bo->dir = 0; + bo->dbc = NULL; + bo->nr_slice = 0; + bo->nr_slice_xfer_done = 0; + bo->queued = false; + bo->req_id = 0; + init_completion(&bo->xfer_done); + complete_all(&bo->xfer_done); + list_del(&bo->bo_list); + bo->perf_stats.req_received_ts = 0; + bo->perf_stats.req_submit_ts = 0; + bo->perf_stats.req_processed_ts = 0; + bo->perf_stats.queue_level_before = 0; + } + + dbc->in_use = false; + wake_up(&dbc->dbc_release); +} + +void qaic_data_get_fifo_info(struct dma_bridge_chan *dbc, u32 *head, u32 *tail) +{ + if (!dbc || !head || !tail) + return; + + *head = readl(dbc->dbc_base + REQHP_OFF); + *tail = readl(dbc->dbc_base + REQTP_OFF); +} + +/** + * dbc_enter_ssr - Prepare to enter subsystem reset (SSR) for
given DBC ID + * During SSR we cannot support the execute ioctl and wait ioctl for the given DBC. + * We control this behaviour using the in_ssr flag in the DBC. + * @qdev: Qranium device handle + * @dbc_id: ID of the DBC which will enter SSR + */ +void dbc_enter_ssr(struct qaic_device *qdev, u32 dbc_id) +{ + struct dma_bridge_chan *dbc = &qdev->dbc[dbc_id]; + + dbc->in_ssr = true; + empty_xfer_list(qdev, dbc); + synchronize_srcu(&dbc->ch_lock); +} + +/** + * dbc_exit_ssr - Prepare to exit from subsystem reset (SSR) for given DBC ID + * After we exit SSR we can resume supporting the execute ioctl and + * wait ioctl. We control this behaviour using the in_ssr flag in the DBC. + * @qdev: Qranium device handle + * @dbc_id: ID of the DBC which will exit SSR + */ +void dbc_exit_ssr(struct qaic_device *qdev, u32 dbc_id) +{ + qdev->dbc[dbc_id].in_ssr = false; +} From patchwork Mon Aug 15 18:42:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeffrey Hugo X-Patchwork-Id: 597581 From: Jeffrey Hugo To: , , , , CC: , , , , , , Jeffrey Hugo Subject: [RFC PATCH 07/14] drm/qaic: Add debugfs Date: Mon, 15 Aug 2022 12:42:29 -0600 Message-ID: <1660588956-24027-8-git-send-email-quic_jhugo@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com> References: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com> Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add debugfs entries that dump information about the dma_bridge fifo state and also the SBL boot log. Change-Id: Ib46b84c07c25afcf0ac2c73304cf6275689d002e Signed-off-by: Jeffrey Hugo --- drivers/gpu/drm/qaic/qaic_debugfs.c | 335 ++++++++++++++++++++++++++++++++++++ drivers/gpu/drm/qaic/qaic_debugfs.h | 33 ++++ 2 files changed, 368 insertions(+) create mode 100644 drivers/gpu/drm/qaic/qaic_debugfs.c create mode 100644 drivers/gpu/drm/qaic/qaic_debugfs.h diff --git a/drivers/gpu/drm/qaic/qaic_debugfs.c b/drivers/gpu/drm/qaic/qaic_debugfs.c new file mode 100644 index 0000000..82478e3 --- /dev/null +++ b/drivers/gpu/drm/qaic/qaic_debugfs.c @@ -0,0 +1,335 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* Copyright (c) 2020, The Linux Foundation. All rights reserved. */ +/* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include "qaic.h" +#include "qaic_debugfs.h" + +#define BOOTLOG_POOL_SIZE 16 +#define BOOTLOG_MSG_SIZE 512 + +struct bootlog_msg { + /* Buffer for bootlog messages */ + char str[BOOTLOG_MSG_SIZE]; + /* Root struct of device, used to access device resources */ + struct qaic_device *qdev; + /* Work struct to schedule work coming on QAIC_LOGGING channel */ + struct work_struct work; +}; + +struct bootlog_page { + /* Node in list of bootlog pages maintained by root device struct */ + struct list_head node; + /* Total size of the buffer that holds the bootlogs. It is PAGE_SIZE */ + unsigned int size; + /* Offset for the next bootlog */ + unsigned int offset; +}; + +static int bootlog_show(struct seq_file *s, void *data) +{ + struct qaic_device *qdev = s->private; + struct bootlog_page *page; + void *log; + void *page_end; + + mutex_lock(&qdev->bootlog_mutex); + list_for_each_entry(page, &qdev->bootlog, node) { + log = page + 1; + page_end = (void *)page + page->offset; + while (log < page_end) { + seq_printf(s, "%s", (char *)log); + log += strlen(log) + 1; + } + } + mutex_unlock(&qdev->bootlog_mutex); + + return 0; +} + +static int bootlog_open(struct inode *inode, struct file *file) +{ + struct qaic_device *qdev = inode->i_private; + + return single_open(file, bootlog_show, qdev); +} + +static const struct file_operations bootlog_fops = { + .owner = THIS_MODULE, + .open = bootlog_open, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, +}; + +static int read_dbc_fifo_size(void *data, u64 *value) +{ + struct dma_bridge_chan *dbc = (struct dma_bridge_chan *)data; + + *value = dbc->nelem; + return 0; +} + +static int read_dbc_queued(void *data, u64 *value) +{ + struct dma_bridge_chan *dbc = (struct dma_bridge_chan *)data; + u32 tail, head; + + qaic_data_get_fifo_info(dbc, &head, &tail); + + if (head == U32_MAX || tail == U32_MAX) + 
*value = 0; + else if (head > tail) + *value = dbc->nelem - head + tail; + else + *value = tail - head; + + return 0; +} + +DEFINE_SIMPLE_ATTRIBUTE(dbc_fifo_size_fops, read_dbc_fifo_size, NULL, "%llu\n"); +DEFINE_SIMPLE_ATTRIBUTE(dbc_queued_fops, read_dbc_queued, NULL, "%llu\n"); + +static void qaic_debugfs_add_dbc_entry(struct qaic_device *qdev, uint16_t dbc_id, + struct dentry *parent) +{ + struct dma_bridge_chan *dbc = &qdev->dbc[dbc_id]; + char name[16]; + + snprintf(name, 16, "%s%03u", QAIC_DEBUGFS_DBC_PREFIX, dbc_id); + + dbc->debugfs_root = debugfs_create_dir(name, parent); + + debugfs_create_file(QAIC_DEBUGFS_DBC_FIFO_SIZE, 0444, dbc->debugfs_root, + dbc, &dbc_fifo_size_fops); + + debugfs_create_file(QAIC_DEBUGFS_DBC_QUEUED, 0444, dbc->debugfs_root, + dbc, &dbc_queued_fops); +} + +void qaic_debugfs_init(struct drm_minor *minor) +{ + struct qaic_drm_device *qddev = minor->dev->dev_private; + struct qaic_device *qdev = qddev->qdev; + int i; + + for (i = 0; i < qdev->num_dbc; ++i) + qaic_debugfs_add_dbc_entry(qdev, i, minor->debugfs_root); + + debugfs_create_file("bootlog", 0444, minor->debugfs_root, qdev, + &bootlog_fops); +} + +static struct bootlog_page *alloc_bootlog_page(struct qaic_device *qdev) +{ + struct bootlog_page *page; + + page = (struct bootlog_page *)__get_free_page(GFP_KERNEL); + if (!page) + return page; + + page->size = PAGE_SIZE; + page->offset = sizeof(*page); + list_add_tail(&page->node, &qdev->bootlog); + + return page; +} + +static int reset_bootlog(struct qaic_device *qdev) +{ + struct bootlog_page *page; + struct bootlog_page *i; + + list_for_each_entry_safe(page, i, &qdev->bootlog, node) { + list_del(&page->node); + free_page((unsigned long)page); + } + + page = alloc_bootlog_page(qdev); + if (!page) + return -ENOMEM; + + return 0; +} + +static void *bootlog_get_space(struct qaic_device *qdev, unsigned int size) +{ + struct bootlog_page *page; + + page = list_last_entry(&qdev->bootlog, struct bootlog_page, node); + + if (size > 
page->size - sizeof(*page)) + return NULL; + + if (page->offset + size >= page->size) { + page = alloc_bootlog_page(qdev); + if (!page) + return NULL; + } + + return (void *)page + page->offset; +} + +static void bootlog_commit(struct qaic_device *qdev, unsigned int size) +{ + struct bootlog_page *page; + + page = list_last_entry(&qdev->bootlog, struct bootlog_page, node); + + page->offset += size; +} + +static void bootlog_log(struct work_struct *work) +{ + struct bootlog_msg *msg = container_of(work, struct bootlog_msg, work); + struct qaic_device *qdev = msg->qdev; + unsigned int len = strlen(msg->str) + 1; + void *log; + + mutex_lock(&qdev->bootlog_mutex); + log = bootlog_get_space(qdev, len); + if (log) { + memcpy(log, msg, len); + bootlog_commit(qdev, len); + } + mutex_unlock(&qdev->bootlog_mutex); + mhi_queue_buf(qdev->bootlog_ch, DMA_FROM_DEVICE, msg, BOOTLOG_MSG_SIZE, + MHI_EOT); +} + +static int qaic_bootlog_mhi_probe(struct mhi_device *mhi_dev, + const struct mhi_device_id *id) +{ + struct qaic_device *qdev; + struct bootlog_msg *msg; + int ret; + int i; + + qdev = pci_get_drvdata(to_pci_dev(mhi_dev->mhi_cntrl->cntrl_dev)); + + dev_set_drvdata(&mhi_dev->dev, qdev); + qdev->bootlog_ch = mhi_dev; + + qdev->bootlog_wq = alloc_ordered_workqueue("qaic_bootlog", 0); + if (!qdev->bootlog_wq) { + ret = -ENOMEM; + goto fail; + } + + mutex_lock(&qdev->bootlog_mutex); + ret = reset_bootlog(qdev); + mutex_unlock(&qdev->bootlog_mutex); + if (ret) + goto reset_fail; + + ret = mhi_prepare_for_transfer(qdev->bootlog_ch); + + if (ret) + goto prepare_fail; + + for (i = 0; i < BOOTLOG_POOL_SIZE; i++) { + msg = kmalloc(sizeof(*msg), GFP_KERNEL); + if (!msg) { + ret = -ENOMEM; + goto alloc_fail; + } + + msg->qdev = qdev; + INIT_WORK(&msg->work, bootlog_log); + + ret = mhi_queue_buf(qdev->bootlog_ch, DMA_FROM_DEVICE, + msg, BOOTLOG_MSG_SIZE, MHI_EOT); + if (ret) + goto queue_fail; + } + + return 0; + +queue_fail: +alloc_fail: + mhi_unprepare_from_transfer(qdev->bootlog_ch); 
+prepare_fail: +reset_fail: + flush_workqueue(qdev->bootlog_wq); + destroy_workqueue(qdev->bootlog_wq); +fail: + return ret; +} + +static void qaic_bootlog_mhi_remove(struct mhi_device *mhi_dev) +{ + struct qaic_device *qdev; + + qdev = dev_get_drvdata(&mhi_dev->dev); + + mhi_unprepare_from_transfer(qdev->bootlog_ch); + flush_workqueue(qdev->bootlog_wq); + destroy_workqueue(qdev->bootlog_wq); +} + +static void qaic_bootlog_mhi_ul_xfer_cb(struct mhi_device *mhi_dev, + struct mhi_result *mhi_result) +{ +} + +static void qaic_bootlog_mhi_dl_xfer_cb(struct mhi_device *mhi_dev, + struct mhi_result *mhi_result) +{ + struct qaic_device *qdev = dev_get_drvdata(&mhi_dev->dev); + struct bootlog_msg *msg = mhi_result->buf_addr; + + if (mhi_result->transaction_status) { + kfree(msg); + return; + } + + /* force a null at the end of the transferred string */ + msg->str[mhi_result->bytes_xferd - 1] = 0; + + queue_work(qdev->bootlog_wq, &msg->work); +} + +static const struct mhi_device_id qaic_bootlog_mhi_match_table[] = { + { .chan = "QAIC_LOGGING", }, + {}, +}; + +static struct mhi_driver qaic_bootlog_mhi_driver = { + .id_table = qaic_bootlog_mhi_match_table, + .remove = qaic_bootlog_mhi_remove, + .probe = qaic_bootlog_mhi_probe, + .ul_xfer_cb = qaic_bootlog_mhi_ul_xfer_cb, + .dl_xfer_cb = qaic_bootlog_mhi_dl_xfer_cb, + .driver = { + .name = "qaic_bootlog", + .owner = THIS_MODULE, + }, +}; + +void qaic_logging_register(void) +{ + int ret; + + ret = mhi_driver_register(&qaic_bootlog_mhi_driver); + if (ret) + DRM_DEBUG("qaic: logging register failed %d\n", ret); +} + +void qaic_logging_unregister(void) +{ + mhi_driver_unregister(&qaic_bootlog_mhi_driver); +} diff --git a/drivers/gpu/drm/qaic/qaic_debugfs.h b/drivers/gpu/drm/qaic/qaic_debugfs.h new file mode 100644 index 0000000..3d7878c --- /dev/null +++ b/drivers/gpu/drm/qaic/qaic_debugfs.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +/* Copyright (c) 2020, The Linux Foundation. All rights reserved. 
*/ +/* Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved. */ + +#ifndef __QAIC_DEBUGFS_H__ +#define __QAIC_DEBUGFS_H__ + +#include +#include +#include + +#define QAIC_DEBUGFS_ROOT "qaic" +#define QAIC_DEBUGFS_DBC_PREFIX "dbc" +#define QAIC_DEBUGFS_DBC_FIFO_SIZE "fifo_size" +#define QAIC_DEBUGFS_DBC_QUEUED "queued" + +extern struct dentry *qaic_debugfs_dir; + +#ifdef CONFIG_DEBUG_FS + +void qaic_logging_register(void); +void qaic_logging_unregister(void); +void qaic_debugfs_init(struct drm_minor *minor); + +#else /* !CONFIG_DEBUG_FS */ + +static inline void qaic_logging_register(void) {} +static inline void qaic_logging_unregister(void) {} +static inline void qaic_debugfs_init(struct drm_minor *minor) {} + +#endif /* !CONFIG_DEBUG_FS */ +#endif /* __QAIC_DEBUGFS_H__ */ From patchwork Mon Aug 15 18:42:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeffrey Hugo X-Patchwork-Id: 597579 From: Jeffrey Hugo To: , , , , CC: , , , , , , Jeffrey Hugo Subject: [RFC PATCH 08/14] drm/qaic: Add RAS component Date: Mon, 15 Aug 2022 12:42:30 -0600 Message-ID: <1660588956-24027-9-git-send-email-quic_jhugo@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com> References: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com> Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org A QAIC device can report a number of different internal errors. The RAS component services these reports by logging them for the sysadmin, and collecting statistics. Change-Id: Ib2f0731daf9a4afe05724e550c72bf32313e79bc Signed-off-by: Jeffrey Hugo --- drivers/gpu/drm/qaic/qaic_ras.c | 653 ++++++++++++++++++++++++++++++++++++++++ drivers/gpu/drm/qaic/qaic_ras.h | 11 + 2 files changed, 664 insertions(+) create mode 100644 drivers/gpu/drm/qaic/qaic_ras.c create mode 100644 drivers/gpu/drm/qaic/qaic_ras.h diff --git a/drivers/gpu/drm/qaic/qaic_ras.c b/drivers/gpu/drm/qaic/qaic_ras.c new file mode 100644 index 0000000..ab51b2d --- /dev/null +++ b/drivers/gpu/drm/qaic/qaic_ras.c @@ -0,0 +1,653 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. */ +/* Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/ + +#include +#include +#include +#include + +#include "qaic.h" +#include "qaic_ras.h" + +#define MAGIC 0x55AA +#define VERSION 0x1 +#define HDR_SZ 12 +#define NUM_TEMP_LVL 3 + +enum msg_type { + MSG_PUSH, /* async push from device */ + MSG_REQ, /* sync request to device */ + MSG_RESP, /* sync response from device */ +}; + +enum err_type { + CE, /* correctable error */ + UE, /* uncorrectable error */ + UE_NF, /* uncorrectable error that is non-fatal, expect a disruption */ + ERR_TYPE_MAX, +}; + +static const char * const err_type_str[] = { + [CE] = "Correctable", + [UE] = "Uncorrectable", + [UE_NF] = "Uncorrectable Non-Fatal", +}; + +static const char * const err_class_str[] = { + [CE] = "Warning", + [UE] = "Fatal", + [UE_NF] = "Warning", +}; + +enum err_source { + SOC_MEM, + PCIE, + DDR, + SYS_BUS1, + SYS_BUS2, + NSP_MEM, + TSENS, +}; + +static const char * const err_src_str[TSENS + 1] = { + [SOC_MEM] = "SoC Memory", + [PCIE] = "PCIE", + [DDR] = "DDR", + [SYS_BUS1] = "System Bus source 1", + [SYS_BUS2] = "System Bus source 2", + [NSP_MEM] = "NSP Memory", + [TSENS] = "Temperature Sensors", +}; + +struct ras_data { + /* header start */ + /* Magic number to validate the message */ + u16 magic; + /* RAS version number */ + u16 ver; + u32 seq_num; + /* RAS message type */ + u8 type; + u8 id; + /* Size of RAS message without the header in byte */ + u16 len; + /* header end */ + + u32 result; + /* + * Error source + * 0 : SoC Memory + * 1 : PCIE + * 2 : DDR + * 3 : System Bus source 1 + * 4 : System Bus source 2 + * 5 : NSP Memory + * 6 : Temperature Sensors + */ + u32 source; + /* + * Stores the error type, there are three types of error in RAS + * 0 : correctable error (CE) + * 1 : uncorrectable error (UE) + * 2 : uncorrectable error that is non-fatal (UE_NF) + */ + u32 err_type; + u32 err_threshold; + u32 ce_count; + u32 ue_count; + u32 intr_num; + /* Data specific to error source */ + u8 syndrome[64]; +} __packed; + +struct soc_mem_syndrome { + u64 
error_address[8]; +} __packed; + +struct nsp_mem_syndrome { + u32 error_address[8]; + u8 nsp_id; +} __packed; + +struct ddr_syndrome { + u16 instance; + u16 err_type; + u32 count; + u32 irq_status; + u32 data_31_0[2]; + u32 data_63_32[2]; + u32 data_95_64[2]; + u32 data_127_96[2]; + u16 parity_bits; + u16 addr_msb; + u32 addr_lsb; +} __packed; + +struct tsens_syndrome { + u32 threshold_type; + u32 temp; +} __packed; + +struct sysbus1_syndrome { + u8 instance; + u32 slave; + u32 err_type; + u16 addr[8]; +} __packed; + +struct sysbus2_syndrome { + u8 instance; + u8 valid; + u8 word_error; + u8 non_secure; + u8 opc; + u8 error_code; + u8 trans_type; + u8 addr_space; + u16 op_type; + u16 len; + u16 redirect; + u16 path; + u32 ext_id; + u32 lsb2; + u32 msb2; + u32 lsb3; + u32 msb3; +} __packed; + +struct pcie_syndrome { + /* CE info */ + u32 bad_tlp; + u32 bad_dllp; + u32 replay_rollover; + u32 replay_timeout; + u32 rx_err; + u32 internal_ce_count; + /* UE info */ + u8 index; + u32 addr; + /* UE_NF info */ + u32 fc_timeout; + u32 poison_tlp; + u32 ecrc_err; + u32 unsupported_req; + u32 completer_abort; + u32 completion_timeout; +} __packed; + +static const char * const threshold_type_str[NUM_TEMP_LVL] = { + [0] = "lower", + [1] = "upper", + [2] = "critical", +}; + +static void ras_msg_to_cpu(struct ras_data *msg) +{ + struct sysbus1_syndrome *sysbus1_syndrome = + (struct sysbus1_syndrome *)&msg->syndrome[0]; + struct sysbus2_syndrome *sysbus2_syndrome = + (struct sysbus2_syndrome *)&msg->syndrome[0]; + struct soc_mem_syndrome *soc_syndrome = + (struct soc_mem_syndrome *)&msg->syndrome[0]; + struct nsp_mem_syndrome *nsp_syndrome = + (struct nsp_mem_syndrome *)&msg->syndrome[0]; + struct tsens_syndrome *tsens_syndrome = + (struct tsens_syndrome *)&msg->syndrome[0]; + struct pcie_syndrome *pcie_syndrome = + (struct pcie_syndrome *)&msg->syndrome[0]; + struct ddr_syndrome *ddr_syndrome = + (struct ddr_syndrome *)&msg->syndrome[0]; + int i; + + le16_to_cpus(&msg->magic); + 
le16_to_cpus(&msg->ver); + le32_to_cpus(&msg->seq_num); + le16_to_cpus(&msg->len); + le32_to_cpus(&msg->result); + le32_to_cpus(&msg->source); + le32_to_cpus(&msg->err_type); + le32_to_cpus(&msg->err_threshold); + le32_to_cpus(&msg->ce_count); + le32_to_cpus(&msg->ue_count); + le32_to_cpus(&msg->intr_num); + + switch (msg->source) { + case SOC_MEM: + for (i = 0; i < 8; i++) + le64_to_cpus(&soc_syndrome->error_address[i]); + break; + case PCIE: + le32_to_cpus(&pcie_syndrome->bad_tlp); + le32_to_cpus(&pcie_syndrome->bad_dllp); + le32_to_cpus(&pcie_syndrome->replay_rollover); + le32_to_cpus(&pcie_syndrome->replay_timeout); + le32_to_cpus(&pcie_syndrome->rx_err); + le32_to_cpus(&pcie_syndrome->internal_ce_count); + le32_to_cpus(&pcie_syndrome->fc_timeout); + le32_to_cpus(&pcie_syndrome->poison_tlp); + le32_to_cpus(&pcie_syndrome->ecrc_err); + le32_to_cpus(&pcie_syndrome->unsupported_req); + le32_to_cpus(&pcie_syndrome->completer_abort); + le32_to_cpus(&pcie_syndrome->completion_timeout); + le32_to_cpus(&pcie_syndrome->addr); + break; + case DDR: + le16_to_cpus(&ddr_syndrome->instance); + le16_to_cpus(&ddr_syndrome->err_type); + le32_to_cpus(&ddr_syndrome->count); + le32_to_cpus(&ddr_syndrome->irq_status); + le32_to_cpus(&ddr_syndrome->data_31_0[0]); + le32_to_cpus(&ddr_syndrome->data_31_0[1]); + le32_to_cpus(&ddr_syndrome->data_63_32[0]); + le32_to_cpus(&ddr_syndrome->data_63_32[1]); + le32_to_cpus(&ddr_syndrome->data_95_64[0]); + le32_to_cpus(&ddr_syndrome->data_95_64[1]); + le32_to_cpus(&ddr_syndrome->data_127_96[0]); + le32_to_cpus(&ddr_syndrome->data_127_96[1]); + le16_to_cpus(&ddr_syndrome->parity_bits); + le16_to_cpus(&ddr_syndrome->addr_msb); + le32_to_cpus(&ddr_syndrome->addr_lsb); + break; + case SYS_BUS1: + le32_to_cpus(&sysbus1_syndrome->slave); + le32_to_cpus(&sysbus1_syndrome->err_type); + for (i = 0; i < 8; i++) + le16_to_cpus(&sysbus1_syndrome->addr[i]); + break; + case SYS_BUS2: + le16_to_cpus(&sysbus2_syndrome->op_type); + 
le16_to_cpus(&sysbus2_syndrome->len); + le16_to_cpus(&sysbus2_syndrome->redirect); + le16_to_cpus(&sysbus2_syndrome->path); + le32_to_cpus(&sysbus2_syndrome->ext_id); + le32_to_cpus(&sysbus2_syndrome->lsb2); + le32_to_cpus(&sysbus2_syndrome->msb2); + le32_to_cpus(&sysbus2_syndrome->lsb3); + le32_to_cpus(&sysbus2_syndrome->msb3); + break; + case NSP_MEM: + for (i = 0; i < 8; i++) + le32_to_cpus(&nsp_syndrome->error_address[i]); + break; + case TSENS: + le32_to_cpus(&tsens_syndrome->threshold_type); + le32_to_cpus(&tsens_syndrome->temp); + break; + } +} + +static void decode_ras_msg(struct qaic_device *qdev, struct ras_data *msg) +{ + struct sysbus1_syndrome *sysbus1_syndrome = + (struct sysbus1_syndrome *)&msg->syndrome[0]; + struct sysbus2_syndrome *sysbus2_syndrome = + (struct sysbus2_syndrome *)&msg->syndrome[0]; + struct soc_mem_syndrome *soc_syndrome = + (struct soc_mem_syndrome *)&msg->syndrome[0]; + struct nsp_mem_syndrome *nsp_syndrome = + (struct nsp_mem_syndrome *)&msg->syndrome[0]; + struct tsens_syndrome *tsens_syndrome = + (struct tsens_syndrome *)&msg->syndrome[0]; + struct pcie_syndrome *pcie_syndrome = + (struct pcie_syndrome *)&msg->syndrome[0]; + struct ddr_syndrome *ddr_syndrome = + (struct ddr_syndrome *)&msg->syndrome[0]; + char *class; + char *level; + + if (msg->magic != MAGIC) { + pci_warn(qdev->pdev, "Dropping RAS message with invalid magic %x\n", msg->magic); + return; + } + + if (msg->ver != VERSION) { + pci_warn(qdev->pdev, "Dropping RAS message with invalid version %d\n", msg->ver); + return; + } + + if (msg->type != MSG_PUSH) { + pci_warn(qdev->pdev, "Dropping non-PUSH RAS message\n"); + return; + } + + if (msg->len != sizeof(*msg) - HDR_SZ) { + pci_warn(qdev->pdev, "Dropping RAS message with invalid len %d\n", msg->len); + return; + } + + if (msg->err_type >= ERR_TYPE_MAX) { + pci_warn(qdev->pdev, "Dropping RAS message with err type %d\n", msg->err_type); + return; + } + + if (msg->err_type == UE) + level = KERN_ERR; + else + level = 
KERN_WARNING; + + switch (msg->source) { + case SOC_MEM: + pci_printk(level, qdev->pdev, "RAS event.\nClass:%s\nDescription:%s %s %s\nSyndrome:\n 0x%llx\n 0x%llx\n 0x%llx\n 0x%llx\n 0x%llx\n 0x%llx\n 0x%llx\n 0x%llx\n", + err_class_str[msg->err_type], + err_type_str[msg->err_type], + "error from", + err_src_str[msg->source], + soc_syndrome->error_address[0], + soc_syndrome->error_address[1], + soc_syndrome->error_address[2], + soc_syndrome->error_address[3], + soc_syndrome->error_address[4], + soc_syndrome->error_address[5], + soc_syndrome->error_address[6], + soc_syndrome->error_address[7]); + break; + case PCIE: + pci_printk(level, qdev->pdev, "RAS event.\nClass:%s\nDescription:%s %s %s\n", + err_class_str[msg->err_type], + err_type_str[msg->err_type], + "error from", + err_src_str[msg->source]); + + switch (msg->err_type) { + case CE: + printk(KERN_WARNING pr_fmt("Syndrome:\n Bad TLP count %d\n Bad DLLP count %d\n Replay Rollover count %d\n Replay Timeout count %d\n Recv Error count %d\n Internal CE count %d\n"), + pcie_syndrome->bad_tlp, + pcie_syndrome->bad_dllp, + pcie_syndrome->replay_rollover, + pcie_syndrome->replay_timeout, + pcie_syndrome->rx_err, + pcie_syndrome->internal_ce_count); + break; + case UE: + printk(KERN_ERR pr_fmt("Syndrome:\n Index %d\n Address 0x%x\n"), + pcie_syndrome->index, pcie_syndrome->addr); + break; + case UE_NF: + printk(KERN_WARNING pr_fmt("Syndrome:\n FC timeout count %d\n Poisoned TLP count %d\n ECRC error count %d\n Unsupported request count %d\n Completer abort count %d\n Completion timeout count %d\n"), + pcie_syndrome->fc_timeout, + pcie_syndrome->poison_tlp, + pcie_syndrome->ecrc_err, + pcie_syndrome->unsupported_req, + pcie_syndrome->completer_abort, + pcie_syndrome->completion_timeout); + break; + default: + break; + } + break; + case DDR: + pci_printk(level, qdev->pdev, "RAS event.\nClass:%s\nDescription:%s %s %s\nSyndrome:\n Instance %d\n Count %d\n Data 31_0 0x%x 0x%x\n Data 63_32 0x%x 0x%x\n Data 95_64 0x%x 0x%x\n 
Data 127_96 0x%x 0x%x\n Parity bits 0x%x\n Address msb 0x%x\n Address lsb 0x%x\n",
+			   err_class_str[msg->err_type],
+			   err_type_str[msg->err_type],
+			   "error from",
+			   err_src_str[msg->source],
+			   ddr_syndrome->instance,
+			   ddr_syndrome->count,
+			   ddr_syndrome->data_31_0[1],
+			   ddr_syndrome->data_31_0[0],
+			   ddr_syndrome->data_63_32[1],
+			   ddr_syndrome->data_63_32[0],
+			   ddr_syndrome->data_95_64[1],
+			   ddr_syndrome->data_95_64[0],
+			   ddr_syndrome->data_127_96[1],
+			   ddr_syndrome->data_127_96[0],
+			   ddr_syndrome->parity_bits,
+			   ddr_syndrome->addr_msb,
+			   ddr_syndrome->addr_lsb);
+		break;
+	case SYS_BUS1:
+		pci_printk(level, qdev->pdev, "RAS event.\nClass:%s\nDescription:%s %s %s\nSyndrome:\n instance %d\n %s\n err_type %d\n address0 0x%x\n address1 0x%x\n address2 0x%x\n address3 0x%x\n address4 0x%x\n address5 0x%x\n address6 0x%x\n address7 0x%x\n",
+			   err_class_str[msg->err_type],
+			   err_type_str[msg->err_type],
+			   "error from",
+			   err_src_str[msg->source],
+			   sysbus1_syndrome->instance,
+			   sysbus1_syndrome->slave ? "Slave" : "Master",
+			   sysbus1_syndrome->err_type,
+			   sysbus1_syndrome->addr[0],
+			   sysbus1_syndrome->addr[1],
+			   sysbus1_syndrome->addr[2],
+			   sysbus1_syndrome->addr[3],
+			   sysbus1_syndrome->addr[4],
+			   sysbus1_syndrome->addr[5],
+			   sysbus1_syndrome->addr[6],
+			   sysbus1_syndrome->addr[7]);
+		break;
+	case SYS_BUS2:
+		pci_printk(level, qdev->pdev, "RAS event.\nClass:%s\nDescription:%s %s %s\nSyndrome:\n instance %d\n valid %d\n word error %d\n non-secure %d\n opc %d\n error code %d\n transaction type %d\n address space %d\n operation type %d\n len %d\n redirect %d\n path %d\n ext_id %d\n lsb2 %d\n msb2 %d\n lsb3 %d\n msb3 %d\n",
+			   err_class_str[msg->err_type],
+			   err_type_str[msg->err_type],
+			   "error from",
+			   err_src_str[msg->source],
+			   sysbus2_syndrome->instance,
+			   sysbus2_syndrome->valid,
+			   sysbus2_syndrome->word_error,
+			   sysbus2_syndrome->non_secure,
+			   sysbus2_syndrome->opc,
+			   sysbus2_syndrome->error_code,
+			   sysbus2_syndrome->trans_type,
+			   sysbus2_syndrome->addr_space,
+			   sysbus2_syndrome->op_type,
+			   sysbus2_syndrome->len,
+			   sysbus2_syndrome->redirect,
+			   sysbus2_syndrome->path,
+			   sysbus2_syndrome->ext_id,
+			   sysbus2_syndrome->lsb2,
+			   sysbus2_syndrome->msb2,
+			   sysbus2_syndrome->lsb3,
+			   sysbus2_syndrome->msb3);
+		break;
+	case NSP_MEM:
+		pci_printk(level, qdev->pdev, "RAS event.\nClass:%s\nDescription:%s %s %s\nSyndrome:\n NSP ID %d\n 0x%x\n 0x%x\n 0x%x\n 0x%x\n 0x%x\n 0x%x\n 0x%x\n 0x%x\n",
+			   err_class_str[msg->err_type],
+			   err_type_str[msg->err_type],
+			   "error from",
+			   err_src_str[msg->source],
+			   nsp_syndrome->nsp_id,
+			   nsp_syndrome->error_address[0],
+			   nsp_syndrome->error_address[1],
+			   nsp_syndrome->error_address[2],
+			   nsp_syndrome->error_address[3],
+			   nsp_syndrome->error_address[4],
+			   nsp_syndrome->error_address[5],
+			   nsp_syndrome->error_address[6],
+			   nsp_syndrome->error_address[7]);
+		break;
+	case TSENS:
+		if (tsens_syndrome->threshold_type >= NUM_TEMP_LVL) {
+			pci_warn(qdev->pdev, "Dropping RAS message with invalid temp threshold %d\n",
+
tsens_syndrome->threshold_type);
+			break;
+		}
+
+		if (msg->err_type)
+			class = "Fatal";
+		else if (tsens_syndrome->threshold_type)
+			class = "Critical";
+		else
+			class = "Warning";
+
+		pci_printk(level, qdev->pdev, "RAS event.\nClass:%s\nDescription:%s %s %s\nSyndrome:\n %s threshold\n %d deg C\n",
+			   class,
+			   err_type_str[msg->err_type],
+			   "error from",
+			   err_src_str[msg->source],
+			   threshold_type_str[tsens_syndrome->threshold_type],
+			   tsens_syndrome->temp);
+		break;
+	}
+
+	/* Uncorrectable errors are fatal */
+	if (msg->err_type == UE)
+		mhi_soc_reset(qdev->mhi_cntl);
+
+	/* Saturate each counter at UINT_MAX instead of wrapping */
+	switch (msg->err_type) {
+	case CE:
+		if (qdev->ce_count != UINT_MAX)
+			qdev->ce_count++;
+		break;
+	case UE:
+		if (qdev->ue_count != UINT_MAX)
+			qdev->ue_count++;
+		break;
+	case UE_NF:
+		if (qdev->ue_nf_count != UINT_MAX)
+			qdev->ue_nf_count++;
+		break;
+	default:
+		/* not possible */
+		break;
+	}
+}
+
+static ssize_t ce_count_show(struct device *dev,
+			     struct device_attribute *attr,
+			     char *buf)
+{
+	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
+	struct qaic_device *qdev = pci_get_drvdata(pdev);
+
+	return snprintf(buf, PAGE_SIZE, "%u\n", qdev->ce_count);
+}
+
+static ssize_t ue_count_show(struct device *dev,
+			     struct device_attribute *attr,
+			     char *buf)
+{
+	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
+	struct qaic_device *qdev = pci_get_drvdata(pdev);
+
+	return snprintf(buf, PAGE_SIZE, "%u\n", qdev->ue_count);
+}
+
+static ssize_t ue_nonfatal_count_show(struct device *dev,
+				      struct device_attribute *attr,
+				      char *buf)
+{
+	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
+	struct qaic_device *qdev = pci_get_drvdata(pdev);
+
+	return snprintf(buf, PAGE_SIZE, "%u\n", qdev->ue_nf_count);
+}
+
+static DEVICE_ATTR_RO(ce_count);
+static DEVICE_ATTR_RO(ue_count);
+static DEVICE_ATTR_RO(ue_nonfatal_count);
+
+static struct attribute *ras_attrs[] = {
+	&dev_attr_ce_count.attr,
+	&dev_attr_ue_count.attr,
+	&dev_attr_ue_nonfatal_count.attr,
+	NULL,
+};
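The three counters above surface as read-only sysfs attributes that print one decimal value per file, so a userspace consumer only needs to open, read, and parse them. A minimal sketch of such a reader follows; the sysfs path in the comment is illustrative only, since the real path depends on the device's PCI address:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Parse the decimal text exported by the ce_count/ue_count/
 * ue_nonfatal_count attributes (e.g. "123\n").
 * Returns the count, or -1 on malformed input.
 */
static long parse_ras_count(const char *buf)
{
	char *end;
	long val = strtol(buf, &end, 10);

	if (end == buf || (*end != '\n' && *end != '\0') || val < 0)
		return -1;
	return val;
}

/*
 * Read one counter file, e.g.
 * /sys/bus/pci/devices/0000:01:00.0/ce_count (path is hypothetical).
 * Returns the count, or -1 if the file cannot be read or parsed.
 */
static long read_ras_count(const char *path)
{
	char buf[32] = { 0 };
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (!fgets(buf, sizeof(buf), f)) {
		fclose(f);
		return -1;
	}
	fclose(f);
	return parse_ras_count(buf);
}
```

A monitoring daemon could poll these files periodically and alert when the uncorrectable counts increase.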
+
+static struct attribute_group ras_group = {
+	.attrs = ras_attrs,
+};
+
+static int qaic_ras_mhi_probe(struct mhi_device *mhi_dev,
+			      const struct mhi_device_id *id)
+{
+	struct qaic_device *qdev;
+	struct ras_data *resp;
+	int ret;
+
+	qdev = pci_get_drvdata(to_pci_dev(mhi_dev->mhi_cntrl->cntrl_dev));
+
+	dev_set_drvdata(&mhi_dev->dev, qdev);
+	qdev->ras_ch = mhi_dev;
+	ret = mhi_prepare_for_transfer(qdev->ras_ch);
+	if (ret)
+		return ret;
+
+	resp = kmalloc(sizeof(*resp), GFP_KERNEL);
+	if (!resp) {
+		mhi_unprepare_from_transfer(qdev->ras_ch);
+		return -ENOMEM;
+	}
+
+	ret = mhi_queue_buf(qdev->ras_ch, DMA_FROM_DEVICE, resp, sizeof(*resp),
+			    MHI_EOT);
+	if (ret) {
+		/* The buffer was never handed to MHI, so free it here */
+		kfree(resp);
+		mhi_unprepare_from_transfer(qdev->ras_ch);
+		return ret;
+	}
+
+	ret = device_add_group(&qdev->pdev->dev, &ras_group);
+	if (ret)
+		pci_dbg(qdev->pdev, "ras add sysfs failed %d\n", ret);
+
+	return 0;
+}
+
+static void qaic_ras_mhi_remove(struct mhi_device *mhi_dev)
+{
+	struct qaic_device *qdev;
+
+	qdev = dev_get_drvdata(&mhi_dev->dev);
+	mhi_unprepare_from_transfer(qdev->ras_ch);
+	qdev->ras_ch = NULL;
+	device_remove_group(&qdev->pdev->dev, &ras_group);
+}
+
+static void qaic_ras_mhi_ul_xfer_cb(struct mhi_device *mhi_dev,
+				    struct mhi_result *mhi_result)
+{
+}
+
+static void qaic_ras_mhi_dl_xfer_cb(struct mhi_device *mhi_dev,
+				    struct mhi_result *mhi_result)
+{
+	struct qaic_device *qdev = dev_get_drvdata(&mhi_dev->dev);
+	struct ras_data *msg = mhi_result->buf_addr;
+	int ret;
+
+	if (mhi_result->transaction_status) {
+		kfree(msg);
+		return;
+	}
+
+	ras_msg_to_cpu(msg);
+	decode_ras_msg(qdev, msg);
+
+	ret = mhi_queue_buf(qdev->ras_ch, DMA_FROM_DEVICE, msg, sizeof(*msg),
+			    MHI_EOT);
+	if (ret) {
+		pci_err(qdev->pdev, "Cannot requeue RAS recv buf %d\n", ret);
+		kfree(msg);
+	}
+}
+
+static const struct mhi_device_id qaic_ras_mhi_match_table[] = {
+	{ .chan = "QAIC_STATUS", },
+	{},
+};
+
+static struct mhi_driver qaic_ras_mhi_driver = {
+	.id_table = qaic_ras_mhi_match_table,
+	.remove =
qaic_ras_mhi_remove,
+	.probe = qaic_ras_mhi_probe,
+	.ul_xfer_cb = qaic_ras_mhi_ul_xfer_cb,
+	.dl_xfer_cb = qaic_ras_mhi_dl_xfer_cb,
+	.driver = {
+		.name = "qaic_ras",
+		.owner = THIS_MODULE,
+	},
+};
+
+void qaic_ras_register(void)
+{
+	int ret;
+
+	ret = mhi_driver_register(&qaic_ras_mhi_driver);
+	if (ret)
+		pr_debug("qaic: ras register failed %d\n", ret);
+}
+
+void qaic_ras_unregister(void)
+{
+	mhi_driver_unregister(&qaic_ras_mhi_driver);
+}
diff --git a/drivers/gpu/drm/qaic/qaic_ras.h b/drivers/gpu/drm/qaic/qaic_ras.h
new file mode 100644
index 0000000..e431680
--- /dev/null
+++ b/drivers/gpu/drm/qaic/qaic_ras.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef __QAIC_RAS_H__
+#define __QAIC_RAS_H__
+
+void qaic_ras_register(void);
+void qaic_ras_unregister(void);
+#endif /* __QAIC_RAS_H__ */

From patchwork Mon Aug 15 18:42:31 2022
X-Patchwork-Submitter: Jeffrey Hugo
X-Patchwork-Id: 597576
From: Jeffrey Hugo
Subject: [RFC PATCH 09/14] drm/qaic: Add ssr component
Date: Mon, 15 Aug 2022 12:42:31 -0600
Message-ID: <1660588956-24027-10-git-send-email-quic_jhugo@quicinc.com>
In-Reply-To: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com>
References: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

A QAIC device supports the concept of subsystem restart (ssr). If a
processing unit for a workload crashes, it is possible to reset that unit
instead of crashing the device. Since such an error is likely related to
the workload code that was running, it is possible to collect a crashdump
of the workload for offline analysis.

Change-Id: I77aa21ecbf0f730d8736a7465285ce5290ed3745
Signed-off-by: Jeffrey Hugo
---
 drivers/gpu/drm/qaic/qaic_ssr.c | 889 ++++++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/qaic/qaic_ssr.h |  13 +
 2 files changed, 902 insertions(+)
 create mode 100644 drivers/gpu/drm/qaic/qaic_ssr.c
 create mode 100644 drivers/gpu/drm/qaic/qaic_ssr.h

diff --git a/drivers/gpu/drm/qaic/qaic_ssr.c b/drivers/gpu/drm/qaic/qaic_ssr.c
new file mode 100644
index 0000000..826361b
--- /dev/null
+++ b/drivers/gpu/drm/qaic/qaic_ssr.c
@@ -0,0 +1,889 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+/* Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. */
+/* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/ + +#include +#include +#include +#include +#include +#include + +#include "qaic.h" +#include "qaic_ssr.h" +#include "qaic_trace.h" + +#define MSG_BUF_SZ 32 +#define MAX_PAGE_DUMP_RESP 4 /* It should always be in powers of 2 */ + +enum ssr_cmds { + DEBUG_TRANSFER_INFO = BIT(0), + DEBUG_TRANSFER_INFO_RSP = BIT(1), + MEMORY_READ = BIT(2), + MEMORY_READ_RSP = BIT(3), + DEBUG_TRANSFER_DONE = BIT(4), + DEBUG_TRANSFER_DONE_RSP = BIT(5), + SSR_EVENT = BIT(8), + SSR_EVENT_RSP = BIT(9), +}; + +enum ssr_events { + SSR_EVENT_NACK = BIT(0), + BEFORE_SHUTDOWN = BIT(1), + AFTER_SHUTDOWN = BIT(2), + BEFORE_POWER_UP = BIT(3), + AFTER_POWER_UP = BIT(4), +}; + +struct debug_info_table { + /* Save preferences. Default is mandatory */ + u64 save_perf; + /* Base address of the debug region */ + u64 mem_base; + /* Size of debug region in bytes */ + u64 len; + /* Description */ + char desc[20]; + /* Filename of debug region */ + char filename[20]; +}; + +struct _ssr_hdr { + __le32 cmd; + __le32 len; + __le32 dbc_id; +}; + +struct ssr_hdr { + u32 cmd; + u32 len; + u32 dbc_id; +}; + +struct ssr_debug_transfer_info { + struct ssr_hdr hdr; + u32 resv; + u64 tbl_addr; + u64 tbl_len; +} __packed; + +struct ssr_debug_transfer_info_rsp { + struct _ssr_hdr hdr; + __le32 ret; +} __packed; + +struct ssr_memory_read { + struct _ssr_hdr hdr; + __le32 resv; + __le64 addr; + __le64 len; +} __packed; + +struct ssr_memory_read_rsp { + struct _ssr_hdr hdr; + __le32 resv; + u8 data[]; +} __packed; + +struct ssr_debug_transfer_done { + struct _ssr_hdr hdr; + __le32 resv; +} __packed; + +struct ssr_debug_transfer_done_rsp { + struct _ssr_hdr hdr; + __le32 ret; +} __packed; + +struct ssr_event { + struct ssr_hdr hdr; + u32 event; +} __packed; + +struct ssr_event_rsp { + struct _ssr_hdr hdr; + __le32 event; +} __packed; + +struct ssr_resp { + /* Work struct to schedule work coming on QAIC_SSR channel */ + struct work_struct work; + /* Root struct of device, used to access device resources */ + struct 
qaic_device *qdev; + /* Buffer used by MHI for transfer requests */ + u8 data[] __aligned(8); +}; + +/* SSR crashdump book keeping structure */ +struct ssr_dump_info { + /* DBC associated with this SSR crashdump */ + struct dma_bridge_chan *dbc; + /* + * It will be used when we complete the crashdump download and switch + * to waiting on SSR events + */ + struct ssr_resp *resp; + /* We use this buffer to queue Crashdump downloading requests */ + struct ssr_resp *dump_resp; + /* TRUE: dump_resp is queued for MHI transaction. FALSE: Otherwise */ + bool dump_resp_queued; + /* TRUE: mem_rd_buf is queued for MHI transaction. FALSE: Otherwise */ + bool mem_rd_buf_queued; + /* MEMORY READ request MHI buffer.*/ + struct ssr_memory_read *mem_rd_buf; + /* Address of table in host */ + void *tbl_addr; + /* Ptr to the entire dump */ + void *dump_addr; + /* Address of table in device/target */ + u64 tbl_addr_dev; + /* Total size of table */ + u64 tbl_len; + /* Entire crashdump size */ + u64 dump_sz; + /* Size of the buffer queued in for MHI transfer */ + u64 resp_buf_sz; + /* + * Crashdump will be collected chunk by chunk and this is max size of + * one chunk + */ + u64 chunk_sz; + /* Offset of table(tbl_addr) where the new chunk will be dumped */ + u64 tbl_off; + /* Points to the table entry we are currently downloading */ + struct debug_info_table *tbl_ent; + /* Number of bytes downloaded for current entry in table */ + u64 tbl_ent_rd; + /* Offset of crashdump(dump_addr) where the new chunk will be dumped */ + u64 dump_off; +}; + +struct dump_file_meta { + u64 size; /* Total size of the entire dump */ + u64 tbl_len; /* Length of the table in byte */ +}; + +/* + * Layout of crashdump + * +------------------------------------------+ + * | Crashdump Meta structure | + * | type: struct dump_file_meta | + * +------------------------------------------+ + * | Crashdump Table | + * | type: array of struct debug_info_table | + * | | + * | | + * | | + * 
+------------------------------------------+ + * | Crashdump | + * | | + * | | + * | | + * | | + * | | + * +------------------------------------------+ + */ + +static void free_ssr_dump_buf(struct ssr_dump_info *dump_info) +{ + if (!dump_info) + return; + if (!dump_info->mem_rd_buf_queued) + kfree(dump_info->mem_rd_buf); + if (!dump_info->dump_resp_queued) + kfree(dump_info->dump_resp); + trace_qaic_ssr_dump(dump_info->dbc->qdev, "SSR releasing resources required during crashdump collection"); + vfree(dump_info->tbl_addr); + vfree(dump_info->dump_addr); + dump_info->dbc->dump_info = NULL; + kfree(dump_info); +} + +void clean_up_ssr(struct qaic_device *qdev, u32 dbc_id) +{ + dbc_exit_ssr(qdev, dbc_id); + free_ssr_dump_buf(qdev->dbc[dbc_id].dump_info); +} + +static int alloc_dump(struct ssr_dump_info *dump_info) +{ + struct debug_info_table *tbl_ent = dump_info->tbl_addr; + struct dump_file_meta *dump_meta; + u64 tbl_sz_lp = 0; + u64 sz = 0; + + while (tbl_sz_lp < dump_info->tbl_len) { + le64_to_cpus(&tbl_ent->save_perf); + le64_to_cpus(&tbl_ent->mem_base); + le64_to_cpus(&tbl_ent->len); + + if (tbl_ent->len == 0) { + pci_warn(dump_info->dump_resp->qdev->pdev, "An entry in dump table points to 0 len segment. Entry index %llu desc %.20s filename %.20s.\n", + tbl_sz_lp / sizeof(*tbl_ent), tbl_ent->desc, + tbl_ent->filename); + return -EINVAL; + } + + sz += tbl_ent->len; + tbl_ent++; + tbl_sz_lp += sizeof(*tbl_ent); + } + + dump_info->dump_sz = sz + dump_info->tbl_len + sizeof(*dump_meta); + /* Actual crashdump will be offsetted by crashdump meta and table */ + dump_info->dump_off = dump_info->tbl_len + sizeof(*dump_meta); + + dump_info->dump_addr = vzalloc(dump_info->dump_sz); + if (!dump_info->dump_addr) { + pci_warn(dump_info->dump_resp->qdev->pdev, "Failed to allocate crashdump memory. Virtual memory requested %llu\n", + dump_info->dump_sz); + return -ENOMEM; + } + + trace_qaic_ssr_dump(dump_info->dbc->qdev, "SSR crashdump memory is allocated. 
Crashdump collection will be initiated"); + + /* Copy crashdump meta and table */ + dump_meta = dump_info->dump_addr; + dump_meta->size = dump_info->dump_sz; + dump_meta->tbl_len = dump_info->tbl_len; + memcpy(dump_info->dump_addr + sizeof(*dump_meta), dump_info->tbl_addr, + dump_info->tbl_len); + + return 0; +} + +static int send_xfer_done(struct qaic_device *qdev, void *resp, u32 dbc_id) +{ + struct ssr_debug_transfer_done *xfer_done; + int ret; + + xfer_done = kmalloc(sizeof(*xfer_done), GFP_KERNEL); + if (!xfer_done) { + pci_warn(qdev->pdev, "Failed to allocate SSR transfer done request struct. DBC ID %u. Physical memory requested %lu\n", + dbc_id, sizeof(*xfer_done)); + ret = -ENOMEM; + goto out; + } + + ret = mhi_queue_buf(qdev->ssr_ch, DMA_FROM_DEVICE, resp, + MSG_BUF_SZ, MHI_EOT); + if (ret) { + pci_warn(qdev->pdev, "Could not queue SSR transfer done response %d. DBC ID %u.\n", + ret, dbc_id); + goto free_xfer_done; + } + + xfer_done->hdr.cmd = cpu_to_le32(DEBUG_TRANSFER_DONE); + xfer_done->hdr.len = cpu_to_le32(sizeof(*xfer_done)); + xfer_done->hdr.dbc_id = cpu_to_le32(dbc_id); + + ret = mhi_queue_buf(qdev->ssr_ch, DMA_TO_DEVICE, xfer_done, + sizeof(*xfer_done), MHI_EOT); + if (ret) { + pci_warn(qdev->pdev, "Could not send DEBUG TRANSFER DONE %d. DBC ID %u.\n", + ret, dbc_id); + goto free_xfer_done; + } + + return 0; + +free_xfer_done: + kfree(xfer_done); +out: + return ret; +} + +static int send_mem_rd(struct qaic_device *qdev, struct ssr_dump_info *dump_info, + u64 dest_addr, u64 dest_len) +{ + u32 dbc_id = dump_info->dbc->id; + int ret; + + ret = mhi_queue_buf(qdev->ssr_ch, DMA_FROM_DEVICE, + dump_info->dump_resp->data, + dump_info->resp_buf_sz, MHI_EOT); + if (ret) { + pci_warn(qdev->pdev, "Could not queue SSR dump buf %d. 
DBC ID %u.\n", + ret, dbc_id); + goto out; + } else { + dump_info->dump_resp_queued = true; + } + + dump_info->mem_rd_buf->hdr.cmd = cpu_to_le32(MEMORY_READ); + dump_info->mem_rd_buf->hdr.len = + cpu_to_le32(sizeof(*dump_info->mem_rd_buf)); + dump_info->mem_rd_buf->hdr.dbc_id = cpu_to_le32(dbc_id); + dump_info->mem_rd_buf->addr = cpu_to_le64(dest_addr); + dump_info->mem_rd_buf->len = cpu_to_le64(dest_len); + + ret = mhi_queue_buf(qdev->ssr_ch, DMA_TO_DEVICE, + dump_info->mem_rd_buf, + sizeof(*dump_info->mem_rd_buf), MHI_EOT); + if (ret) + pci_warn(qdev->pdev, "Could not send MEMORY READ %d. DBC ID %u.\n", + ret, dbc_id); + else + dump_info->mem_rd_buf_queued = true; + +out: + return ret; +} + +static int ssr_copy_table(struct ssr_dump_info *dump_info, void *data, u64 len) +{ + if (len > dump_info->tbl_len - dump_info->tbl_off) { + pci_warn(dump_info->dump_resp->qdev->pdev, "Invalid data length of table chunk. Length provided %llu & at most expected length %llu\n", + len, dump_info->tbl_len - dump_info->tbl_off); + return -EINVAL; + } + + memcpy(dump_info->tbl_addr + dump_info->tbl_off, data, len); + + dump_info->tbl_off += len; + + /* Entire table has been downloaded, alloc dump memory */ + if (dump_info->tbl_off == dump_info->tbl_len) { + dump_info->tbl_ent = dump_info->tbl_addr; + trace_qaic_ssr_dump(dump_info->dbc->qdev, "SSR debug table download complete"); + return alloc_dump(dump_info); + } + + return 0; +} + +static int ssr_copy_dump(struct ssr_dump_info *dump_info, void *data, u64 len) +{ + struct debug_info_table *tbl_ent; + + tbl_ent = dump_info->tbl_ent; + + if (len > tbl_ent->len - dump_info->tbl_ent_rd) { + pci_warn(dump_info->dump_resp->qdev->pdev, "Invalid data length of dump chunk. Length provided %llu & at most expected length %llu. 
Segment details base_addr: 0x%llx len: %llu desc: %.20s filename: %.20s.\n", + len, tbl_ent->len - dump_info->tbl_ent_rd, + tbl_ent->mem_base, tbl_ent->len, tbl_ent->desc, + tbl_ent->filename); + return -EINVAL; + } + + memcpy(dump_info->dump_addr + dump_info->dump_off, data, len); + + dump_info->dump_off += len; + dump_info->tbl_ent_rd += len; + + /* Current segment of the crashdump is complete, move to next one */ + if (tbl_ent->len == dump_info->tbl_ent_rd) { + dump_info->tbl_ent++; + dump_info->tbl_ent_rd = 0; + } + + return 0; +} + +static void ssr_dump_worker(struct work_struct *work) +{ + struct ssr_resp *dump_resp = + container_of(work, struct ssr_resp, work); + struct qaic_device *qdev = dump_resp->qdev; + struct ssr_memory_read_rsp *mem_rd_resp; + struct debug_info_table *tbl_ent; + struct ssr_dump_info *dump_info; + u64 dest_addr, dest_len; + struct _ssr_hdr *_hdr; + struct ssr_hdr hdr; + u64 data_len; + int ret; + + mem_rd_resp = (struct ssr_memory_read_rsp *)dump_resp->data; + _hdr = &mem_rd_resp->hdr; + hdr.cmd = le32_to_cpu(_hdr->cmd); + hdr.len = le32_to_cpu(_hdr->len); + hdr.dbc_id = le32_to_cpu(_hdr->dbc_id); + + if (hdr.dbc_id >= qdev->num_dbc) { + pci_warn(qdev->pdev, "Dropping SSR message with invalid DBC ID %u. DBC ID should be less than %u.\n", + hdr.dbc_id, qdev->num_dbc); + goto reset_device; + } + dump_info = qdev->dbc[hdr.dbc_id].dump_info; + + if (!dump_info) { + pci_warn(qdev->pdev, "Dropping SSR message with invalid dbc id %u. Crashdump is not initiated for this DBC ID.\n", + hdr.dbc_id); + goto reset_device; + } + + dump_info->dump_resp_queued = false; + + if (hdr.cmd != MEMORY_READ_RSP) { + pci_warn(qdev->pdev, "Dropping SSR message with invalid CMD %u. Expected command is %u.\n", + hdr.cmd, MEMORY_READ_RSP); + goto free_dump_info; + } + + if (hdr.len > dump_info->resp_buf_sz) { + pci_warn(qdev->pdev, "Dropping SSR message with invalid length %u. 
At most length expected is %llu.\n", + hdr.len, dump_info->resp_buf_sz); + goto free_dump_info; + } + + data_len = hdr.len - sizeof(*mem_rd_resp); + + if (dump_info->tbl_off < dump_info->tbl_len) + /* Chunk belongs to table */ + ret = ssr_copy_table(dump_info, mem_rd_resp->data, data_len); + else + /* Chunk belongs to crashdump */ + ret = ssr_copy_dump(dump_info, mem_rd_resp->data, data_len); + + if (ret) + goto free_dump_info; + + if (dump_info->tbl_off < dump_info->tbl_len) { + /* Continue downloading table */ + dest_addr = dump_info->tbl_addr_dev + dump_info->tbl_off; + dest_len = min(dump_info->chunk_sz, + dump_info->tbl_len - dump_info->tbl_off); + ret = send_mem_rd(qdev, dump_info, dest_addr, dest_len); + } else if (dump_info->dump_off < dump_info->dump_sz) { + /* Continue downloading crashdump */ + tbl_ent = dump_info->tbl_ent; + dest_addr = tbl_ent->mem_base + dump_info->tbl_ent_rd; + dest_len = min(dump_info->chunk_sz, + tbl_ent->len - dump_info->tbl_ent_rd); + ret = send_mem_rd(qdev, dump_info, dest_addr, dest_len); + } else { + /* Crashdump download complete */ + trace_qaic_ssr_dump(qdev, "SSR crashdump download complete"); + ret = send_xfer_done(qdev, dump_info->resp->data, hdr.dbc_id); + } + + if (ret) + /* Most likely an MHI xfer has failed */ + goto free_dump_info; + + return; + +free_dump_info: + /* Free the allocated memory */ + free_ssr_dump_buf(dump_info); +reset_device: + /* + * Crashdump collection began after the subsystem crashed, but + * something went wrong partway through. Instead of trying to recover + * from the error, just reset the device; the best effort has already + * been made. + */ + mhi_soc_reset(qdev->mhi_cntl); +} + +static struct ssr_dump_info *alloc_dump_info(struct qaic_device *qdev, + struct ssr_debug_transfer_info *debug_info) +{ + struct ssr_dump_info *dump_info; + int nr_page; + int ret; + + le64_to_cpus(&debug_info->tbl_len); + le64_to_cpus(&debug_info->tbl_addr); + + if (debug_info->tbl_len == 0 || + 
debug_info->tbl_len % sizeof(struct debug_info_table) != 0) { + pci_warn(qdev->pdev, "Invalid table length %llu passed. Table length should be non-zero & a multiple of %lu\n", + debug_info->tbl_len, sizeof(struct debug_info_table)); + ret = -EINVAL; + goto out; + } + + /* Allocate SSR crashdump bookkeeping structure */ + dump_info = kzalloc(sizeof(*dump_info), GFP_KERNEL); + if (!dump_info) { + pci_warn(qdev->pdev, "Failed to allocate SSR dump bookkeeping buffer. Physical memory requested %lu\n", + sizeof(*dump_info)); + ret = -ENOMEM; + goto out; + } + + /* Allocate SSR crashdump request buffer, used for SSR MEMORY READ */ + nr_page = MAX_PAGE_DUMP_RESP; + while (nr_page > 0) { + dump_info->dump_resp = kzalloc(nr_page * PAGE_SIZE, + GFP_KERNEL | __GFP_NOWARN); + if (dump_info->dump_resp) + break; + nr_page >>= 1; + } + + if (!dump_info->dump_resp) { + pci_warn(qdev->pdev, "Failed to allocate SSR dump response buffer. Physical memory requested %lu\n", + PAGE_SIZE); + ret = -ENOMEM; + goto free_dump_info; + } + + INIT_WORK(&dump_info->dump_resp->work, ssr_dump_worker); + dump_info->dump_resp->qdev = qdev; + + dump_info->tbl_addr_dev = debug_info->tbl_addr; + dump_info->tbl_len = debug_info->tbl_len; + dump_info->resp_buf_sz = nr_page * PAGE_SIZE - + sizeof(*dump_info->dump_resp); + dump_info->chunk_sz = dump_info->resp_buf_sz - + sizeof(struct ssr_memory_read_rsp); + + dump_info->tbl_addr = vzalloc(dump_info->tbl_len); + if (!dump_info->tbl_addr) { + pci_warn(qdev->pdev, "Failed to allocate SSR table struct. Virtual memory requested %llu\n", + dump_info->tbl_len); + ret = -ENOMEM; + goto free_dump_resp; + } + + dump_info->mem_rd_buf = kzalloc(sizeof(*dump_info->mem_rd_buf), + GFP_KERNEL); + if (!dump_info->mem_rd_buf) { + pci_warn(qdev->pdev, "Failed to allocate memory read request buffer for MHI transactions. 
Physical memory requested %lu\n", + sizeof(*dump_info->mem_rd_buf)); + ret = -ENOMEM; + goto free_dump_tbl; + } + + return dump_info; + +free_dump_tbl: + vfree(dump_info->tbl_addr); +free_dump_resp: + kfree(dump_info->dump_resp); +free_dump_info: + kfree(dump_info); +out: + return ERR_PTR(ret); +} + +static void ssr_worker(struct work_struct *work) +{ + struct ssr_resp *resp = container_of(work, struct ssr_resp, work); + struct ssr_hdr *hdr = (struct ssr_hdr *)resp->data; + struct ssr_debug_transfer_info_rsp *debug_rsp; + struct ssr_debug_transfer_done_rsp *xfer_rsp; + struct ssr_debug_transfer_info *debug_info; + struct ssr_dump_info *dump_info = NULL; + struct qaic_device *qdev = resp->qdev; + struct ssr_event_rsp *event_rsp; + struct dma_bridge_chan *dbc; + struct ssr_event *event; + bool debug_nack = false; + u32 ssr_event_ack; + int ret; + + le32_to_cpus(&hdr->cmd); + le32_to_cpus(&hdr->len); + le32_to_cpus(&hdr->dbc_id); + + if (hdr->len > MSG_BUF_SZ) { + pci_warn(qdev->pdev, "Dropping SSR message with invalid len %d\n", hdr->len); + goto out; + } + + if (hdr->dbc_id >= qdev->num_dbc) { + pci_warn(qdev->pdev, "Dropping SSR message with invalid dbc_id %d\n", hdr->dbc_id); + goto out; + } + + dbc = &qdev->dbc[hdr->dbc_id]; + + switch (hdr->cmd) { + case DEBUG_TRANSFER_INFO: + trace_qaic_ssr_cmd(qdev, "SSR received DEBUG_TRANSFER_INFO command"); + debug_info = (struct ssr_debug_transfer_info *)resp->data; + + debug_rsp = kmalloc(sizeof(*debug_rsp), GFP_KERNEL); + if (!debug_rsp) + break; + + if (dbc->state != DBC_STATE_BEFORE_POWER_UP) { + /* NACK */ + pci_warn(qdev->pdev, "Invalid command received. DEBUG_TRANSFER_INFO is expected when DBC is in %d state and actual DBC state is %u. 
DBC ID %u.\n", + DBC_STATE_BEFORE_POWER_UP, dbc->state, + hdr->dbc_id); + debug_nack = true; + } + + /* If the command was NACKed, skip the buffer allocations for crashdump downloading */ + if (!debug_nack) { + /* Buffer for MEMORY READ request */ + dump_info = alloc_dump_info(qdev, debug_info); + if (IS_ERR(dump_info)) { + /* NACK */ + ret = PTR_ERR(dump_info); + dump_info = NULL; + pci_warn(qdev->pdev, "Failed to allocate dump resp memory %d. DBC ID %u.\n", + ret, hdr->dbc_id); + debug_nack = true; + } else { + /* ACK */ + debug_nack = false; + } + } + + debug_rsp->hdr.cmd = cpu_to_le32(DEBUG_TRANSFER_INFO_RSP); + debug_rsp->hdr.len = cpu_to_le32(sizeof(*debug_rsp)); + debug_rsp->hdr.dbc_id = cpu_to_le32(hdr->dbc_id); + /* 1 = NACK and 0 = ACK */ + debug_rsp->ret = cpu_to_le32(debug_nack ? 1 : 0); + + ret = mhi_queue_buf(qdev->ssr_ch, DMA_TO_DEVICE, + debug_rsp, sizeof(*debug_rsp), MHI_EOT); + if (ret) { + pci_warn(qdev->pdev, "Could not send DEBUG_TRANSFER_INFO_RSP %d\n", ret); + free_ssr_dump_buf(dump_info); + kfree(debug_rsp); + break; + } + + /* Command has been NACKed; skip the crashdump. */ + if (debug_nack) + break; + + dbc->dump_info = dump_info; + dump_info->dbc = dbc; + dump_info->resp = resp; + + trace_qaic_ssr_dump(qdev, "SSR debug table download initiated"); + ret = send_mem_rd(qdev, dump_info, dump_info->tbl_addr_dev, + min(dump_info->tbl_len, dump_info->chunk_sz)); + if (ret) { + free_ssr_dump_buf(dump_info); + break; + } + + /* + * Everything went fine so far, which means that we will be + * collecting the crashdump chunk by chunk. Do not queue a response + * buffer for SSR cmds until the crashdump is complete. 
+ */ + return; + case SSR_EVENT: + trace_qaic_ssr_cmd(qdev, "SSR received SSR_EVENT command"); + event = (struct ssr_event *)hdr; + le32_to_cpus(&event->event); + ssr_event_ack = event->event; + + switch (event->event) { + case BEFORE_SHUTDOWN: + trace_qaic_ssr_event(qdev, "SSR received BEFORE_SHUTDOWN event"); + set_dbc_state(qdev, hdr->dbc_id, + DBC_STATE_BEFORE_SHUTDOWN); + dbc_enter_ssr(qdev, hdr->dbc_id); + break; + case AFTER_SHUTDOWN: + trace_qaic_ssr_event(qdev, "SSR received AFTER_SHUTDOWN event"); + set_dbc_state(qdev, hdr->dbc_id, + DBC_STATE_AFTER_SHUTDOWN); + break; + case BEFORE_POWER_UP: + trace_qaic_ssr_event(qdev, "SSR received BEFORE_POWER_UP event"); + set_dbc_state(qdev, hdr->dbc_id, + DBC_STATE_BEFORE_POWER_UP); + break; + case AFTER_POWER_UP: + trace_qaic_ssr_event(qdev, "SSR received AFTER_POWER_UP event"); + /* + * If dump_info is non-NULL, we received this SSR event + * while a crashdump download for this DBC is still in + * progress. NACK the SSR event. + */ + if (dbc->dump_info) { + free_ssr_dump_buf(dbc->dump_info); + ssr_event_ack = SSR_EVENT_NACK; + break; + } + + set_dbc_state(qdev, hdr->dbc_id, + DBC_STATE_AFTER_POWER_UP); + break; + default: + pci_warn(qdev->pdev, "Unknown event %d\n", event->event); + break; + } + + event_rsp = kmalloc(sizeof(*event_rsp), GFP_KERNEL); + if (!event_rsp) + break; + + event_rsp->hdr.cmd = cpu_to_le32(SSR_EVENT_RSP); + event_rsp->hdr.len = cpu_to_le32(sizeof(*event_rsp)); + event_rsp->hdr.dbc_id = cpu_to_le32(hdr->dbc_id); + event_rsp->event = cpu_to_le32(ssr_event_ack); + + ret = mhi_queue_buf(qdev->ssr_ch, DMA_TO_DEVICE, + event_rsp, sizeof(*event_rsp), MHI_EOT); + if (ret) { + pci_warn(qdev->pdev, "Could not send SSR_EVENT_RSP %d\n", ret); + kfree(event_rsp); + } + + if (event->event == AFTER_POWER_UP && + ssr_event_ack != SSR_EVENT_NACK) { + dbc_exit_ssr(qdev, hdr->dbc_id); + set_dbc_state(qdev, hdr->dbc_id, DBC_STATE_IDLE); + } + + break; + case DEBUG_TRANSFER_DONE_RSP: + trace_qaic_ssr_cmd(qdev, "SSR received DEBUG_TRANSFER_DONE_RSP command"); + xfer_rsp = (struct ssr_debug_transfer_done_rsp *)hdr; + dump_info = dbc->dump_info; + + if (!dump_info) { + pci_warn(qdev->pdev, "Crashdump download is not in progress for this DBC ID %u\n", + hdr->dbc_id); + break; + } + + if (xfer_rsp->ret) { + pci_warn(qdev->pdev, "Device has NACKed SSR transfer done with %u\n", + xfer_rsp->ret); + free_ssr_dump_buf(dump_info); + break; + } + + dev_coredumpv(qdev->base_dev->ddev->dev, dump_info->dump_addr, + dump_info->dump_sz, GFP_KERNEL); + /* dev_coredumpv will free dump_info->dump_addr */ + dump_info->dump_addr = NULL; + free_ssr_dump_buf(dump_info); + + break; + default: + pci_warn(qdev->pdev, "Dropping SSR message with invalid cmd %d\n", hdr->cmd); + break; + } + +out: + ret = mhi_queue_buf(qdev->ssr_ch, DMA_FROM_DEVICE, resp->data, + MSG_BUF_SZ, MHI_EOT); + if (ret) { + pci_warn(qdev->pdev, "Could not requeue SSR recv buf %d\n", ret); + kfree(resp); + } +} + 
+static int qaic_ssr_mhi_probe(struct mhi_device *mhi_dev, + const struct mhi_device_id *id) +{ + struct qaic_device *qdev; + struct ssr_resp *resp; + int ret; + + qdev = pci_get_drvdata(to_pci_dev(mhi_dev->mhi_cntrl->cntrl_dev)); + + dev_set_drvdata(&mhi_dev->dev, qdev); + qdev->ssr_ch = mhi_dev; + ret = mhi_prepare_for_transfer(qdev->ssr_ch); + + if (ret) + return ret; + + resp = kmalloc(sizeof(*resp) + MSG_BUF_SZ, GFP_KERNEL); + if (!resp) { + mhi_unprepare_from_transfer(qdev->ssr_ch); + return -ENOMEM; + } + + resp->qdev = qdev; + INIT_WORK(&resp->work, ssr_worker); + + ret = mhi_queue_buf(qdev->ssr_ch, DMA_FROM_DEVICE, resp->data, + MSG_BUF_SZ, MHI_EOT); + if (ret) { + mhi_unprepare_from_transfer(qdev->ssr_ch); + kfree(resp); + return ret; + } + + return 0; +} + +static void qaic_ssr_mhi_remove(struct mhi_device *mhi_dev) +{ + struct qaic_device *qdev; + + qdev = dev_get_drvdata(&mhi_dev->dev); + mhi_unprepare_from_transfer(qdev->ssr_ch); + qdev->ssr_ch = NULL; +} + +static void qaic_ssr_mhi_ul_xfer_cb(struct mhi_device *mhi_dev, + struct mhi_result *mhi_result) +{ + struct qaic_device *qdev = dev_get_drvdata(&mhi_dev->dev); + struct _ssr_hdr *hdr = mhi_result->buf_addr; + struct ssr_dump_info *dump_info; + + if (mhi_result->transaction_status) { + kfree(mhi_result->buf_addr); + return; + } + + /* + * MEMORY READ is used to download crashdump. And crashdump is + * downloaded chunk by chunk in a series of MEMORY READ SSR commands. + * Hence to avoid too many kmalloc() and kfree() of the same MEMORY READ + * request buffer, we allocate only one such buffer and free it only + * once. 
+ */ + dump_info = qdev->dbc[le32_to_cpu(hdr->dbc_id)].dump_info; + if (le32_to_cpu(hdr->cmd) == MEMORY_READ) { + dump_info->mem_rd_buf_queued = false; + return; + } + + kfree(mhi_result->buf_addr); +} + +static void qaic_ssr_mhi_dl_xfer_cb(struct mhi_device *mhi_dev, + struct mhi_result *mhi_result) +{ + struct ssr_resp *resp = container_of(mhi_result->buf_addr, + struct ssr_resp, data); + + if (mhi_result->transaction_status) { + kfree(resp); + return; + } + + queue_work(resp->qdev->ssr_wq, &resp->work); +} + +static const struct mhi_device_id qaic_ssr_mhi_match_table[] = { + { .chan = "QAIC_SSR", }, + {}, +}; + +static struct mhi_driver qaic_ssr_mhi_driver = { + .id_table = qaic_ssr_mhi_match_table, + .remove = qaic_ssr_mhi_remove, + .probe = qaic_ssr_mhi_probe, + .ul_xfer_cb = qaic_ssr_mhi_ul_xfer_cb, + .dl_xfer_cb = qaic_ssr_mhi_dl_xfer_cb, + .driver = { + .name = "qaic_ssr", + .owner = THIS_MODULE, + }, +}; + +void qaic_ssr_register(void) +{ + int ret; + + ret = mhi_driver_register(&qaic_ssr_mhi_driver); + if (ret) + pr_debug("qaic: ssr register failed %d\n", ret); +} + +void qaic_ssr_unregister(void) +{ + mhi_driver_unregister(&qaic_ssr_mhi_driver); +} diff --git a/drivers/gpu/drm/qaic/qaic_ssr.h b/drivers/gpu/drm/qaic/qaic_ssr.h new file mode 100644 index 0000000..a3a02f7 --- /dev/null +++ b/drivers/gpu/drm/qaic/qaic_ssr.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 2020, The Linux Foundation. All rights reserved. + * Copyright (c) 2021 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#ifndef __QAIC_SSR_H__ +#define __QAIC_SSR_H__ + +void qaic_ssr_register(void); +void qaic_ssr_unregister(void); +void clean_up_ssr(struct qaic_device *qdev, u32 dbc_id); +#endif /* __QAIC_SSR_H__ */ From patchwork Mon Aug 15 18:42:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeffrey Hugo X-Patchwork-Id: 597288
From: Jeffrey Hugo Subject: [RFC PATCH 10/14] drm/qaic: Add sysfs Date: Mon, 15 Aug 2022 12:42:32 -0600 Message-ID: <1660588956-24027-11-git-send-email-quic_jhugo@quicinc.com> In-Reply-To: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org The QAIC driver can advertise the state of individual dma_bridge channels to userspace. Userspace can use this information to manage userspace state when a channel crashes. Change-Id: Ifc7435c53cec6aa326bdcd9bfcb77ea7f2a63bab Signed-off-by: Jeffrey Hugo --- drivers/gpu/drm/qaic/qaic_sysfs.c | 113 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 113 insertions(+) create mode 100644 drivers/gpu/drm/qaic/qaic_sysfs.c diff --git a/drivers/gpu/drm/qaic/qaic_sysfs.c b/drivers/gpu/drm/qaic/qaic_sysfs.c new file mode 100644 index 0000000..5ee1696 --- /dev/null +++ b/drivers/gpu/drm/qaic/qaic_sysfs.c @@ -0,0 +1,113 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. */ + +#include +#include +#include +#include +#include +#include + +#include "qaic.h" + +#define NAME_LEN 14 + +struct dbc_attribute { + struct device_attribute dev_attr; + u32 dbc_id; + char name[NAME_LEN]; +}; + +static ssize_t dbc_state_show(struct device *dev, + struct device_attribute *a, char *buf) +{ + struct dbc_attribute *attr = container_of(a, struct dbc_attribute, dev_attr); + struct qaic_device *qdev = dev_get_drvdata(dev); + + return sprintf(buf, "%d\n", qdev->dbc[attr->dbc_id].state); +} + +void set_dbc_state(struct qaic_device *qdev, u32 dbc_id, unsigned int state) +{ + char id_str[12]; + char state_str[16]; + char *envp[] = { id_str, state_str, NULL }; + struct qaic_drm_device *qddev; + + if (state >= DBC_STATE_MAX) { + pci_dbg(qdev->pdev, "%s invalid state %d\n", __func__, state); + return; + } + if (dbc_id >= qdev->num_dbc) { + pci_dbg(qdev->pdev, "%s invalid dbc_id %d\n", __func__, dbc_id); + return; + } + if (state == qdev->dbc[dbc_id].state) { + pci_dbg(qdev->pdev, "%s already at state %d\n", __func__, state); + return; + 
} + + snprintf(id_str, ARRAY_SIZE(id_str), "DBC_ID=%d", dbc_id); + snprintf(state_str, ARRAY_SIZE(state_str), "DBC_STATE=%d", state); + + qdev->dbc[dbc_id].state = state; + mutex_lock(&qdev->qaic_drm_devices_mutex); + list_for_each_entry(qddev, &qdev->qaic_drm_devices, node) + kobject_uevent_env(&qddev->ddev->dev->kobj, KOBJ_CHANGE, envp); + mutex_unlock(&qdev->qaic_drm_devices_mutex); +} + +int qaic_sysfs_init(struct qaic_drm_device *qddev) +{ + u32 num_dbc = qddev->qdev->num_dbc; + struct dbc_attribute *dbc_attrs; + int i, ret = 0; + + dbc_attrs = kcalloc(num_dbc, sizeof(*dbc_attrs), GFP_KERNEL); + if (!dbc_attrs) + return -ENOMEM; + + qddev->sysfs_attrs = dbc_attrs; + + for (i = 0; i < num_dbc; ++i) { + struct dbc_attribute *dbc = &dbc_attrs[i]; + + sysfs_attr_init(&dbc->dev_attr.attr); + dbc->dbc_id = i; + snprintf(dbc->name, NAME_LEN, "dbc%d_state", i); + dbc->dev_attr.attr.name = dbc->name; + dbc->dev_attr.attr.mode = 0444; + dbc->dev_attr.show = dbc_state_show; + ret = sysfs_create_file(&qddev->ddev->dev->kobj, + &dbc->dev_attr.attr); + if (ret) { + int j; + + for (j = 0; j < i; ++j) { + dbc = &dbc_attrs[j]; + sysfs_remove_file(&qddev->ddev->dev->kobj, + &dbc->dev_attr.attr); + } + break; + } + } + + if (ret) + kfree(dbc_attrs); + + return ret; +} + +void qaic_sysfs_remove(struct qaic_drm_device *qddev) +{ + struct dbc_attribute *dbc_attrs = qddev->sysfs_attrs; + u32 num_dbc = qddev->qdev->num_dbc; + int i; + + for (i = 0; i < num_dbc; ++i) + sysfs_remove_file(&qddev->ddev->dev->kobj, + &dbc_attrs[i].dev_attr.attr); + + kfree(dbc_attrs); +} From patchwork Mon Aug 15 18:42:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeffrey Hugo X-Patchwork-Id: 597287
From: Jeffrey Hugo Subject: [RFC PATCH 11/14] drm/qaic: Add telemetry Date: Mon, 15 Aug 2022 12:42:33 -0600 Message-ID: <1660588956-24027-12-git-send-email-quic_jhugo@quicinc.com> In-Reply-To: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com> X-Mailing-List: linux-arm-msm@vger.kernel.org A QAIC device has a number of attributes like thermal limits which can be read and in some cases, controlled from the host. Expose these attributes via hwmon. Use the pre-defined interface where possible, but define custom interfaces where it is not possible.
Change-Id: I3b559baed4016e27457658c9286f4c529f95dbbb Signed-off-by: Jeffrey Hugo --- drivers/gpu/drm/qaic/qaic_telemetry.c | 851 ++++++++++++++++++++++++++++++++++ drivers/gpu/drm/qaic/qaic_telemetry.h | 14 + 2 files changed, 865 insertions(+) create mode 100644 drivers/gpu/drm/qaic/qaic_telemetry.c create mode 100644 drivers/gpu/drm/qaic/qaic_telemetry.h diff --git a/drivers/gpu/drm/qaic/qaic_telemetry.c b/drivers/gpu/drm/qaic/qaic_telemetry.c new file mode 100644 index 0000000..44950d1 --- /dev/null +++ b/drivers/gpu/drm/qaic/qaic_telemetry.c @@ -0,0 +1,851 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. */ +/* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved. */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "qaic.h" +#include "qaic_telemetry.h" + +#if defined(CONFIG_QAIC_HWMON) + +#define MAGIC 0x55AA +#define VERSION 0x1 +#define RESP_TIMEOUT (1 * HZ) + +enum cmds { + CMD_THERMAL_SOC_TEMP, + CMD_THERMAL_SOC_MAX_TEMP, + CMD_THERMAL_BOARD_TEMP, + CMD_THERMAL_BOARD_MAX_TEMP, + CMD_THERMAL_DDR_TEMP, + CMD_THERMAL_WARNING_TEMP, + CMD_THERMAL_SHUTDOWN_TEMP, + CMD_CURRENT_TDP, + CMD_BOARD_POWER, + CMD_POWER_STATE, + CMD_POWER_MAX, + CMD_THROTTLE_PERCENT, + CMD_THROTTLE_TIME, + CMD_UPTIME, + CMD_THERMAL_SOC_FLOOR_TEMP, + CMD_THERMAL_SOC_CEILING_TEMP, +}; + +enum cmd_type { + TYPE_READ, /* read value from device */ + TYPE_WRITE, /* write value to device */ +}; + +enum msg_type { + MSG_PUSH, /* async push from device */ + MSG_REQ, /* sync request to device */ + MSG_RESP, /* sync response from device */ +}; + +struct telemetry_data { + u8 cmd; + u8 cmd_type; + u8 status; + __le64 val; /*signed*/ +} __packed; + +struct telemetry_header { + __le16 magic; + __le16 ver; + __le32 seq_num; + u8 type; + u8 id; + __le16 len; +} __packed; + +struct telemetry_msg { /* little endian encoded */ + struct 
telemetry_header hdr; + struct telemetry_data data; +} __packed; + +struct wrapper_msg { + struct kref ref_count; + struct telemetry_msg msg; +}; + +struct xfer_queue_elem { + /* + * Node in list of ongoing transfer request on telemetry channel. + * Maintained by root device struct + */ + struct list_head list; + /* Sequence number of this transfer request */ + u32 seq_num; + /* This is used to wait on until completion of transfer request */ + struct completion xfer_done; + /* Received data from device */ + void *buf; +}; + +struct resp_work { + /* Work struct to schedule work coming on QAIC_TELEMETRY channel */ + struct work_struct work; + /* Root struct of device, used to access device resources */ + struct qaic_device *qdev; + /* Buffer used by MHI for transfer requests */ + void *buf; +}; + +static void free_wrapper(struct kref *ref) +{ + struct wrapper_msg *wrapper = container_of(ref, struct wrapper_msg, + ref_count); + + kfree(wrapper); +} + +static int telemetry_request(struct qaic_device *qdev, u8 cmd, u8 cmd_type, + s64 *val) +{ + struct wrapper_msg *wrapper; + struct xfer_queue_elem elem; + struct telemetry_msg *resp; + struct telemetry_msg *req; + long ret = 0; + + wrapper = kzalloc(sizeof(*wrapper), GFP_KERNEL); + if (!wrapper) + return -ENOMEM; + + kref_init(&wrapper->ref_count); + req = &wrapper->msg; + + ret = mutex_lock_interruptible(&qdev->tele_mutex); + if (ret) + goto free_req; + + req->hdr.magic = cpu_to_le16(MAGIC); + req->hdr.ver = cpu_to_le16(VERSION); + req->hdr.seq_num = cpu_to_le32(qdev->tele_next_seq_num++); + req->hdr.type = MSG_REQ; + req->hdr.id = 0; + req->hdr.len = cpu_to_le16(sizeof(req->data)); + + req->data.cmd = cmd; + req->data.cmd_type = cmd_type; + req->data.status = 0; + if (cmd_type == TYPE_READ) + req->data.val = cpu_to_le64(0); + else + req->data.val = cpu_to_le64(*val); + + elem.seq_num = qdev->tele_next_seq_num - 1; + elem.buf = NULL; + init_completion(&elem.xfer_done); + if (likely(!qdev->tele_lost_buf)) { + resp = 
kmalloc(sizeof(*resp), GFP_KERNEL);
+		if (!resp) {
+			mutex_unlock(&qdev->tele_mutex);
+			ret = -ENOMEM;
+			goto free_req;
+		}
+
+		ret = mhi_queue_buf(qdev->tele_ch, DMA_FROM_DEVICE,
+				    resp, sizeof(*resp), MHI_EOT);
+		if (ret) {
+			mutex_unlock(&qdev->tele_mutex);
+			goto free_resp;
+		}
+	} else {
+		/*
+		 * We lost a buffer because we queued a recv buf, but then
+		 * queuing the corresponding tx buf failed. To try to avoid
+		 * a memory leak, let's reclaim it and use it for this
+		 * transaction.
+		 */
+		qdev->tele_lost_buf = false;
+	}
+
+	kref_get(&wrapper->ref_count);
+	ret = mhi_queue_buf(qdev->tele_ch, DMA_TO_DEVICE, req, sizeof(*req),
+			    MHI_EOT);
+	if (ret) {
+		qdev->tele_lost_buf = true;
+		kref_put(&wrapper->ref_count, free_wrapper);
+		mutex_unlock(&qdev->tele_mutex);
+		goto free_req;
+	}
+
+	list_add_tail(&elem.list, &qdev->tele_xfer_list);
+	mutex_unlock(&qdev->tele_mutex);
+
+	ret = wait_for_completion_interruptible_timeout(&elem.xfer_done,
+							RESP_TIMEOUT);
+	/*
+	 * Not using mutex_lock_interruptible() here because we have to
+	 * clean up or we'll likely cause memory corruption.
+	 */
+	mutex_lock(&qdev->tele_mutex);
+	if (!list_empty(&elem.list))
+		list_del(&elem.list);
+	if (!ret && !elem.buf)
+		ret = -ETIMEDOUT;
+	else if (ret > 0 && !elem.buf)
+		ret = -EIO;
+	mutex_unlock(&qdev->tele_mutex);
+
+	resp = elem.buf;
+
+	if (ret < 0)
+		goto free_resp;
+
+	if (le16_to_cpu(resp->hdr.magic) != MAGIC ||
+	    le16_to_cpu(resp->hdr.ver) != VERSION ||
+	    resp->hdr.type != MSG_RESP ||
+	    resp->hdr.id != 0 ||
+	    le16_to_cpu(resp->hdr.len) != sizeof(resp->data) ||
+	    resp->data.cmd != cmd ||
+	    resp->data.cmd_type != cmd_type ||
+	    resp->data.status) {
+		ret = -EINVAL;
+		goto free_resp;
+	}
+
+	if (cmd_type == TYPE_READ)
+		*val = le64_to_cpu(resp->data.val);
+
+	ret = 0;
+
+free_resp:
+	kfree(resp);
+free_req:
+	kref_put(&wrapper->ref_count, free_wrapper);
+
+	return ret;
+}
+
+static ssize_t throttle_percent_show(struct device *dev,
+				     struct device_attribute *a, char *buf)
+{
+	struct qaic_device *qdev =
dev_get_drvdata(dev);
+	s64 val = 0;
+	int rcu_id;
+	int ret;
+
+	rcu_id = srcu_read_lock(&qdev->dev_lock);
+	if (qdev->in_reset) {
+		srcu_read_unlock(&qdev->dev_lock, rcu_id);
+		return -ENODEV;
+	}
+
+	ret = telemetry_request(qdev, CMD_THROTTLE_PERCENT, TYPE_READ, &val);
+
+	if (ret) {
+		srcu_read_unlock(&qdev->dev_lock, rcu_id);
+		return ret;
+	}
+
+	/*
+	 * The percentage by which device performance is throttled to meet
+	 * the limits, e.g. performance is throttled 20% to meet power/
+	 * thermal/etc. limits.
+	 */
+	srcu_read_unlock(&qdev->dev_lock, rcu_id);
+	return sprintf(buf, "%lld\n", val);
+}
+
+static SENSOR_DEVICE_ATTR_RO(throttle_percent, throttle_percent, 0);
+
+static ssize_t throttle_time_show(struct device *dev,
+				  struct device_attribute *a, char *buf)
+{
+	struct qaic_device *qdev = dev_get_drvdata(dev);
+	s64 val = 0;
+	int rcu_id;
+	int ret;
+
+	rcu_id = srcu_read_lock(&qdev->dev_lock);
+	if (qdev->in_reset) {
+		srcu_read_unlock(&qdev->dev_lock, rcu_id);
+		return -ENODEV;
+	}
+
+	ret = telemetry_request(qdev, CMD_THROTTLE_TIME, TYPE_READ, &val);
+
+	if (ret) {
+		srcu_read_unlock(&qdev->dev_lock, rcu_id);
+		return ret;
+	}
+
+	/* The time, in seconds, the device has been in a throttled state */
+	srcu_read_unlock(&qdev->dev_lock, rcu_id);
+	return sprintf(buf, "%lld\n", val);
+}
+
+static SENSOR_DEVICE_ATTR_RO(throttle_time, throttle_time, 0);
+
+static ssize_t power_level_show(struct device *dev, struct device_attribute *a,
+				char *buf)
+{
+	struct qaic_device *qdev = dev_get_drvdata(dev);
+	s64 val = 0;
+	int rcu_id;
+	int ret;
+
+	rcu_id = srcu_read_lock(&qdev->dev_lock);
+	if (qdev->in_reset) {
+		srcu_read_unlock(&qdev->dev_lock, rcu_id);
+		return -ENODEV;
+	}
+
+	ret = telemetry_request(qdev, CMD_POWER_STATE, TYPE_READ, &val);
+
+	if (ret) {
+		srcu_read_unlock(&qdev->dev_lock, rcu_id);
+		return ret;
+	}
+
+	/*
+	 * Power level the device is operating at, i.e. the upper limit
+	 * on the power it is allowed to consume.
+	 * 1 - full power
+	 * 2 - reduced power
+	 * 3 - minimal power
+	 */
+	srcu_read_unlock(&qdev->dev_lock, rcu_id);
+	return sprintf(buf, "%lld\n", val);
+}
+
+static ssize_t power_level_store(struct device *dev, struct device_attribute *a,
+				 const char *buf, size_t count)
+{
+	struct qaic_device *qdev = dev_get_drvdata(dev);
+	int rcu_id;
+	s64 val;
+	int ret;
+
+	rcu_id = srcu_read_lock(&qdev->dev_lock);
+	if (qdev->in_reset) {
+		srcu_read_unlock(&qdev->dev_lock, rcu_id);
+		return -ENODEV;
+	}
+
+	if (kstrtos64(buf, 10, &val)) {
+		srcu_read_unlock(&qdev->dev_lock, rcu_id);
+		return -EINVAL;
+	}
+
+	ret = telemetry_request(qdev, CMD_POWER_STATE, TYPE_WRITE, &val);
+
+	if (ret) {
+		srcu_read_unlock(&qdev->dev_lock, rcu_id);
+		return ret;
+	}
+
+	srcu_read_unlock(&qdev->dev_lock, rcu_id);
+	return count;
+}
+
+static SENSOR_DEVICE_ATTR_RW(power_level, power_level, 0);
+
+static struct attribute *power_attrs[] = {
+	&sensor_dev_attr_power_level.dev_attr.attr,
+	&sensor_dev_attr_throttle_percent.dev_attr.attr,
+	&sensor_dev_attr_throttle_time.dev_attr.attr,
+	NULL,
+};
+
+static const struct attribute_group power_group = {
+	.attrs = power_attrs,
+};
+
+static ssize_t uptime_show(struct device *dev,
+			   struct device_attribute *a, char *buf)
+{
+	struct qaic_device *qdev = dev_get_drvdata(dev);
+	s64 val = 0;
+	int rcu_id;
+	int ret;
+
+	rcu_id = srcu_read_lock(&qdev->dev_lock);
+	if (qdev->in_reset) {
+		srcu_read_unlock(&qdev->dev_lock, rcu_id);
+		return -ENODEV;
+	}
+
+	ret = telemetry_request(qdev, CMD_UPTIME, TYPE_READ, &val);
+
+	if (ret) {
+		srcu_read_unlock(&qdev->dev_lock, rcu_id);
+		return ret;
+	}
+
+	/* The time, in seconds, the device has been up */
+	srcu_read_unlock(&qdev->dev_lock, rcu_id);
+	return sprintf(buf, "%lld\n", val);
+}
+
+static SENSOR_DEVICE_ATTR_RO(uptime, uptime, 0);
+
+static struct attribute *uptime_attrs[] = {
+	&sensor_dev_attr_uptime.dev_attr.attr,
+	NULL,
+};
+
+static const struct attribute_group uptime_group = {
+	.attrs =
uptime_attrs, +}; + +static ssize_t soc_temp_floor_show(struct device *dev, + struct device_attribute *a, char *buf) +{ + struct qaic_device *qdev = dev_get_drvdata(dev); + int rcu_id; + int ret; + s64 val; + + rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + ret = -ENODEV; + goto exit; + } + + ret = telemetry_request(qdev, CMD_THERMAL_SOC_FLOOR_TEMP, + TYPE_READ, &val); + if (ret) + goto exit; + + srcu_read_unlock(&qdev->dev_lock, rcu_id); + return sprintf(buf, "%lld\n", val * 1000); + +exit: + srcu_read_unlock(&qdev->dev_lock, rcu_id); + return ret; +} + +static SENSOR_DEVICE_ATTR_RO(temp2_floor, soc_temp_floor, 0); + +static ssize_t soc_temp_ceiling_show(struct device *dev, + struct device_attribute *a, char *buf) +{ + struct qaic_device *qdev = dev_get_drvdata(dev); + int rcu_id; + int ret; + s64 val; + + rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + ret = -ENODEV; + goto exit; + } + + ret = telemetry_request(qdev, CMD_THERMAL_SOC_CEILING_TEMP, + TYPE_READ, &val); + if (ret) + goto exit; + + srcu_read_unlock(&qdev->dev_lock, rcu_id); + return sprintf(buf, "%lld\n", val * 1000); + +exit: + srcu_read_unlock(&qdev->dev_lock, rcu_id); + return ret; +} + +static SENSOR_DEVICE_ATTR_RO(temp2_ceiling, soc_temp_ceiling, 0); + +static struct attribute *temp2_attrs[] = { + &sensor_dev_attr_temp2_floor.dev_attr.attr, + &sensor_dev_attr_temp2_ceiling.dev_attr.attr, + NULL, +}; + +static const struct attribute_group temp2_group = { + .attrs = temp2_attrs, +}; + +static umode_t qaic_is_visible(const void *data, enum hwmon_sensor_types type, + u32 attr, int channel) +{ + switch (type) { + case hwmon_power: + switch (attr) { + case hwmon_power_max: + return 0644; + default: + return 0444; + } + break; + case hwmon_temp: + switch (attr) { + case hwmon_temp_input: + fallthrough; + case hwmon_temp_highest: + fallthrough; + case hwmon_temp_alarm: + return 0444; + case hwmon_temp_crit: + fallthrough; + case hwmon_temp_emergency: + return 
0644; + } + break; + default: + return 0; + } + return 0; +} + +static int qaic_read(struct device *dev, enum hwmon_sensor_types type, + u32 attr, int channel, long *vall) +{ + struct qaic_device *qdev = dev_get_drvdata(dev); + int ret = -EOPNOTSUPP; + s64 val = 0; + int rcu_id; + u8 cmd; + + rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + srcu_read_unlock(&qdev->dev_lock, rcu_id); + return -ENODEV; + } + + switch (type) { + case hwmon_power: + switch (attr) { + case hwmon_power_max: + ret = telemetry_request(qdev, CMD_CURRENT_TDP, + TYPE_READ, &val); + val *= 1000000; + goto exit; + case hwmon_power_input: + ret = telemetry_request(qdev, CMD_BOARD_POWER, + TYPE_READ, &val); + val *= 1000000; + goto exit; + default: + goto exit; + } + case hwmon_temp: + switch (attr) { + case hwmon_temp_crit: + ret = telemetry_request(qdev, CMD_THERMAL_WARNING_TEMP, + TYPE_READ, &val); + val *= 1000; + goto exit; + case hwmon_temp_emergency: + ret = telemetry_request(qdev, CMD_THERMAL_SHUTDOWN_TEMP, + TYPE_READ, &val); + val *= 1000; + goto exit; + case hwmon_temp_alarm: + ret = telemetry_request(qdev, CMD_THERMAL_DDR_TEMP, + TYPE_READ, &val); + goto exit; + case hwmon_temp_input: + if (channel == 0) + cmd = CMD_THERMAL_BOARD_TEMP; + else if (channel == 1) + cmd = CMD_THERMAL_SOC_TEMP; + else + goto exit; + ret = telemetry_request(qdev, cmd, TYPE_READ, &val); + val *= 1000; + goto exit; + case hwmon_temp_highest: + if (channel == 0) + cmd = CMD_THERMAL_BOARD_MAX_TEMP; + else if (channel == 1) + cmd = CMD_THERMAL_SOC_MAX_TEMP; + else + goto exit; + ret = telemetry_request(qdev, cmd, TYPE_READ, &val); + val *= 1000; + goto exit; + default: + goto exit; + } + default: + goto exit; + } + +exit: + *vall = (long)val; + srcu_read_unlock(&qdev->dev_lock, rcu_id); + return ret; +} + +static int qaic_write(struct device *dev, enum hwmon_sensor_types type, + u32 attr, int channel, long vall) +{ + struct qaic_device *qdev = dev_get_drvdata(dev); + int ret = -EOPNOTSUPP; + 
int rcu_id; + s64 val; + + val = vall; + rcu_id = srcu_read_lock(&qdev->dev_lock); + if (qdev->in_reset) { + srcu_read_unlock(&qdev->dev_lock, rcu_id); + return -ENODEV; + } + + switch (type) { + case hwmon_power: + switch (attr) { + case hwmon_power_max: + val /= 1000000; + ret = telemetry_request(qdev, CMD_CURRENT_TDP, + TYPE_WRITE, &val); + goto exit; + default: + goto exit; + } + case hwmon_temp: + switch (attr) { + case hwmon_temp_crit: + val /= 1000; + ret = telemetry_request(qdev, CMD_THERMAL_WARNING_TEMP, + TYPE_WRITE, &val); + goto exit; + case hwmon_temp_emergency: + val /= 1000; + ret = telemetry_request(qdev, CMD_THERMAL_SHUTDOWN_TEMP, + TYPE_WRITE, &val); + goto exit; + default: + goto exit; + } + default: + goto exit; + } + +exit: + srcu_read_unlock(&qdev->dev_lock, rcu_id); + return ret; +} + +static const struct attribute_group *special_groups[] = { + &power_group, + &uptime_group, + &temp2_group, + NULL, +}; + +static const struct hwmon_ops qaic_ops = { + .is_visible = qaic_is_visible, + .read = qaic_read, + .write = qaic_write, +}; + +static const u32 qaic_config_temp[] = { + /* board level */ + HWMON_T_INPUT | HWMON_T_HIGHEST, + /* SoC level */ + HWMON_T_INPUT | HWMON_T_HIGHEST | HWMON_T_CRIT | HWMON_T_EMERGENCY, + /* DDR level */ + HWMON_T_ALARM, + 0 +}; + +static const struct hwmon_channel_info qaic_temp = { + .type = hwmon_temp, + .config = qaic_config_temp, +}; + +static const u32 qaic_config_power[] = { + HWMON_P_INPUT | HWMON_P_MAX, /* board level */ + 0 +}; + +static const struct hwmon_channel_info qaic_power = { + .type = hwmon_power, + .config = qaic_config_power, +}; + +static const struct hwmon_channel_info *qaic_info[] = { + &qaic_power, + &qaic_temp, + NULL +}; + +static const struct hwmon_chip_info qaic_chip_info = { + .ops = &qaic_ops, + .info = qaic_info +}; + +static int qaic_telemetry_mhi_probe(struct mhi_device *mhi_dev, + const struct mhi_device_id *id) +{ + struct qaic_device *qdev; + int ret; + + qdev = 
pci_get_drvdata(to_pci_dev(mhi_dev->mhi_cntrl->cntrl_dev));
+
+	dev_set_drvdata(&mhi_dev->dev, qdev);
+	qdev->tele_ch = mhi_dev;
+	qdev->tele_lost_buf = false;
+	ret = mhi_prepare_for_transfer(qdev->tele_ch);
+
+	if (ret)
+		return ret;
+
+	qdev->hwmon = hwmon_device_register_with_info(&qdev->pdev->dev, "qaic",
+						      qdev, &qaic_chip_info,
+						      special_groups);
+	/* hwmon_device_register_with_info() returns an ERR_PTR on failure */
+	if (IS_ERR(qdev->hwmon)) {
+		mhi_unprepare_from_transfer(qdev->tele_ch);
+		return PTR_ERR(qdev->hwmon);
+	}
+
+	return 0;
+}
+
+static void qaic_telemetry_mhi_remove(struct mhi_device *mhi_dev)
+{
+	struct qaic_device *qdev;
+
+	qdev = dev_get_drvdata(&mhi_dev->dev);
+	hwmon_device_unregister(qdev->hwmon);
+	mhi_unprepare_from_transfer(qdev->tele_ch);
+	qdev->tele_ch = NULL;
+	qdev->hwmon = NULL;
+}
+
+static void resp_worker(struct work_struct *work)
+{
+	struct resp_work *resp = container_of(work, struct resp_work, work);
+	struct qaic_device *qdev = resp->qdev;
+	struct telemetry_msg *msg = resp->buf;
+	struct xfer_queue_elem *elem;
+	struct xfer_queue_elem *i;
+	bool found = false;
+
+	if (le16_to_cpu(msg->hdr.magic) != MAGIC) {
+		kfree(msg);
+		kfree(resp);
+		return;
+	}
+
+	mutex_lock(&qdev->tele_mutex);
+	list_for_each_entry_safe(elem, i, &qdev->tele_xfer_list, list) {
+		if (elem->seq_num == le32_to_cpu(msg->hdr.seq_num)) {
+			found = true;
+			list_del_init(&elem->list);
+			elem->buf = msg;
+			complete_all(&elem->xfer_done);
+			break;
+		}
+	}
+	mutex_unlock(&qdev->tele_mutex);
+
+	if (!found)
+		/* request must have timed out, drop packet */
+		kfree(msg);
+
+	kfree(resp);
+}
+
+static void qaic_telemetry_mhi_ul_xfer_cb(struct mhi_device *mhi_dev,
+					  struct mhi_result *mhi_result)
+{
+	struct telemetry_msg *msg = mhi_result->buf_addr;
+	struct wrapper_msg *wrapper = container_of(msg, struct wrapper_msg,
+						   msg);
+
+	kref_put(&wrapper->ref_count, free_wrapper);
+}
+
+static void qaic_telemetry_mhi_dl_xfer_cb(struct mhi_device *mhi_dev,
+					  struct mhi_result *mhi_result)
+{
+	struct qaic_device *qdev = dev_get_drvdata(&mhi_dev->dev);
+
struct telemetry_msg *msg = mhi_result->buf_addr; + struct resp_work *resp; + + if (mhi_result->transaction_status) { + kfree(msg); + return; + } + + resp = kmalloc(sizeof(*resp), GFP_ATOMIC); + if (!resp) { + pci_err(qdev->pdev, "dl_xfer_cb alloc fail, dropping message\n"); + kfree(msg); + return; + } + + INIT_WORK(&resp->work, resp_worker); + resp->qdev = qdev; + resp->buf = msg; + queue_work(qdev->tele_wq, &resp->work); +} + +static const struct mhi_device_id qaic_telemetry_mhi_match_table[] = { + { .chan = "QAIC_TELEMETRY", }, + {}, +}; + +static struct mhi_driver qaic_telemetry_mhi_driver = { + .id_table = qaic_telemetry_mhi_match_table, + .remove = qaic_telemetry_mhi_remove, + .probe = qaic_telemetry_mhi_probe, + .ul_xfer_cb = qaic_telemetry_mhi_ul_xfer_cb, + .dl_xfer_cb = qaic_telemetry_mhi_dl_xfer_cb, + .driver = { + .name = "qaic_telemetry", + .owner = THIS_MODULE, + }, +}; + +void qaic_telemetry_register(void) +{ + int ret; + + ret = mhi_driver_register(&qaic_telemetry_mhi_driver); + if (ret) + pr_debug("qaic: telemetry register failed %d\n", ret); +} + +void qaic_telemetry_unregister(void) +{ + mhi_driver_unregister(&qaic_telemetry_mhi_driver); +} + +void wake_all_telemetry(struct qaic_device *qdev) +{ + struct xfer_queue_elem *elem; + struct xfer_queue_elem *i; + + mutex_lock(&qdev->tele_mutex); + list_for_each_entry_safe(elem, i, &qdev->tele_xfer_list, list) { + list_del_init(&elem->list); + complete_all(&elem->xfer_done); + } + qdev->tele_lost_buf = false; + mutex_unlock(&qdev->tele_mutex); +} + +#else + +void qaic_telemetry_register(void) +{ +} + +void qaic_telemetry_unregister(void) +{ +} + +void wake_all_telemetry(struct qaic_device *qdev) +{ +} + +#endif /* CONFIG_QAIC_HWMON */ diff --git a/drivers/gpu/drm/qaic/qaic_telemetry.h b/drivers/gpu/drm/qaic/qaic_telemetry.h new file mode 100644 index 0000000..01e178f4 --- /dev/null +++ b/drivers/gpu/drm/qaic/qaic_telemetry.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0-only + * + * Copyright (c) 
2020, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef __QAIC_TELEMETRY_H__
+#define __QAIC_TELEMETRY_H__
+
+#include "qaic.h"
+
+void qaic_telemetry_register(void);
+void qaic_telemetry_unregister(void);
+void wake_all_telemetry(struct qaic_device *qdev);
+#endif /* __QAIC_TELEMETRY_H__ */

From patchwork Mon Aug 15 18:42:34 2022
X-Patchwork-Submitter: Jeffrey Hugo
X-Patchwork-Id: 597285
From: Jeffrey Hugo
Subject: [RFC PATCH 12/14] drm/qaic: Add tracepoints
Date: Mon, 15 Aug 2022 12:42:34 -0600
Message-ID: <1660588956-24027-13-git-send-email-quic_jhugo@quicinc.com>
In-Reply-To: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com>
References: <1660588956-24027-1-git-send-email-quic_jhugo@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

Add QAIC specific tracepoints which can be useful in debugging issues.

Change-Id: I8cde015990d5a3482dbba142cf0a4bbb4512cb02
Signed-off-by: Jeffrey Hugo
---
 drivers/gpu/drm/qaic/qaic_trace.h | 493 ++++++++++++++++++++++++++++++++++
 1 file changed, 493 insertions(+)
 create mode 100644 drivers/gpu/drm/qaic/qaic_trace.h

diff --git a/drivers/gpu/drm/qaic/qaic_trace.h b/drivers/gpu/drm/qaic/qaic_trace.h
new file mode 100644
index 0000000..0be824eb
--- /dev/null
+++ b/drivers/gpu/drm/qaic/qaic_trace.h
@@ -0,0 +1,493 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#if !defined(_TRACE_QAIC_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_QAIC_H
+#include
+#include
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM qaic
+#define TRACE_INCLUDE_FILE qaic_trace
+#define TRACE_INCLUDE_PATH ../../drivers/gpu/drm/qaic
+
+TRACE_EVENT(qaic_ioctl,
+	TP_PROTO(struct qaic_device *qdev, struct qaic_user *usr,
+		 unsigned int cmd, bool in),
+	TP_ARGS(qdev, usr, cmd, in),
+	TP_STRUCT__entry(
+		__string(device, dev_name(&qdev->pdev->dev))
+		__field(unsigned int, user)
+		__field(unsigned int, cmd)
+		__field(unsigned int, type)
+		__field(unsigned int, nr)
+		__field(unsigned int, size)
+		__field(unsigned int, dir)
+		__field(bool, in)
+	),
+	TP_fast_assign(
+		__assign_str(device, dev_name(&qdev->pdev->dev));
+		__entry->user = usr->handle;
+		__entry->cmd = cmd;
+		__entry->type = _IOC_TYPE(cmd);
+		__entry->nr = _IOC_NR(cmd);
+		__entry->size = _IOC_SIZE(cmd);
+		__entry->dir = _IOC_DIR(cmd);
+		__entry->in = in;
+	),
+	TP_printk("%s:%s user:%d cmd:0x%x (%c nr=%d len=%d dir=%d)",
__entry->in ? "Entry" : "Exit", __get_str(device), + __entry->user, __entry->cmd, __entry->type, __entry->nr, + __entry->size, __entry->dir) +); + +TRACE_EVENT(qaic_mhi_queue_error, + TP_PROTO(struct qaic_device *qdev, const char *msg, int ret), + TP_ARGS(qdev, msg, ret), + TP_STRUCT__entry( + __string(device, dev_name(&qdev->pdev->dev)) + __string(msg, msg) + __field(int, ret) + ), + TP_fast_assign( + __assign_str(device, dev_name(&qdev->pdev->dev)); + __assign_str(msg, msg); + __entry->ret = ret; + ), + TP_printk("%s %s %d", + __get_str(device), __get_str(msg), __entry->ret) +); + +DECLARE_EVENT_CLASS(qaic_manage_error, + TP_PROTO(struct qaic_device *qdev, struct qaic_user *usr, + const char *msg), + TP_ARGS(qdev, usr, msg), + TP_STRUCT__entry( + __string(device, dev_name(&qdev->pdev->dev)) + __field(unsigned int, user) + __string(msg, msg) + ), + TP_fast_assign( + __assign_str(device, dev_name(&qdev->pdev->dev)); + __entry->user = usr->handle; + __assign_str(msg, msg); + ), + TP_printk("%s user:%d %s", + __get_str(device), __entry->user, __get_str(msg)) +); + +DEFINE_EVENT(qaic_manage_error, manage_error, + TP_PROTO(struct qaic_device *qdev, struct qaic_user *usr, + const char *msg), + TP_ARGS(qdev, usr, msg) +); + +DECLARE_EVENT_CLASS(qaic_encdec_error, + TP_PROTO(struct qaic_device *qdev, const char *msg), + TP_ARGS(qdev, msg), + TP_STRUCT__entry( + __string(device, dev_name(&qdev->pdev->dev)) + __string(msg, msg) + ), + TP_fast_assign( + __assign_str(device, dev_name(&qdev->pdev->dev)); + __assign_str(msg, msg); + ), + TP_printk("%s %s", __get_str(device), __get_str(msg)) +); + +DEFINE_EVENT(qaic_encdec_error, encode_error, + TP_PROTO(struct qaic_device *qdev, const char *msg), + TP_ARGS(qdev, msg) +); + +DEFINE_EVENT(qaic_encdec_error, decode_error, + TP_PROTO(struct qaic_device *qdev, const char *msg), + TP_ARGS(qdev, msg) +); + +TRACE_EVENT(qaic_control_dbg, + TP_PROTO(struct qaic_device *qdev, const char *msg, int ret), + TP_ARGS(qdev, msg, ret), + 
TP_STRUCT__entry( + __string(device, dev_name(&qdev->pdev->dev)) + __string(msg, msg) + __field(int, ret) + ), + TP_fast_assign( + __assign_str(device, dev_name(&qdev->pdev->dev)); + __assign_str(msg, msg); + __entry->ret = ret; + ), + TP_printk("%s %s %d", + __get_str(device), __get_str(msg), __entry->ret) +); + +TRACE_EVENT(qaic_encode_passthrough, + TP_PROTO(struct qaic_device *qdev, + struct qaic_manage_trans_passthrough *in_trans), + TP_ARGS(qdev, in_trans), + TP_STRUCT__entry( + __string(device, dev_name(&qdev->pdev->dev)) + __field(__u32, len) + ), + TP_fast_assign( + __assign_str(device, dev_name(&qdev->pdev->dev)); + __entry->len = in_trans->hdr.len; + ), + TP_printk("%s len %u", __get_str(device), __entry->len) +); + +TRACE_EVENT(qaic_encode_dma, + TP_PROTO(struct qaic_device *qdev, + struct qaic_manage_trans_dma_xfer *in_trans), + TP_ARGS(qdev, in_trans), + TP_STRUCT__entry( + __string(device, dev_name(&qdev->pdev->dev)) + __field(__u32, len) + __field(__u32, tag) + __field(__u32, count) + __field(__u64, addr) + __field(__u64, size) + ), + TP_fast_assign( + __assign_str(device, dev_name(&qdev->pdev->dev)); + __entry->len = in_trans->hdr.len; + __entry->tag = in_trans->tag; + __entry->count = in_trans->count; + __entry->addr = in_trans->addr; + __entry->size = in_trans->size; + ), + TP_printk("%s len %u tag %u count %u address 0x%llx size %llu", + __get_str(device), __entry->len, __entry->tag, __entry->count, + __entry->addr, __entry->size) +); + +TRACE_EVENT(qaic_encode_activate, + TP_PROTO(struct qaic_device *qdev, + struct qaic_manage_trans_activate_to_dev *in_trans), + TP_ARGS(qdev, in_trans), + TP_STRUCT__entry( + __string(device, dev_name(&qdev->pdev->dev)) + __field(__u32, len) + __field(__u32, queue_size) + __field(__u32, eventfd) + __field(__u32, options) + __field(__u32, pad) + ), + TP_fast_assign( + __assign_str(device, dev_name(&qdev->pdev->dev)); + __entry->len = in_trans->hdr.len; + __entry->queue_size = in_trans->queue_size; + 
__entry->eventfd = in_trans->eventfd; + __entry->options = in_trans->options; + __entry->pad = in_trans->pad; + ), + TP_printk("%s len %u queue_size %u eventfd %u options %u pad %u", + __get_str(device), __entry->len, __entry->queue_size, + __entry->eventfd, __entry->options, __entry->pad) +); + +TRACE_EVENT(qaic_encode_deactivate, + TP_PROTO(struct qaic_device *qdev, + struct qaic_manage_trans_deactivate *in_trans), + TP_ARGS(qdev, in_trans), + TP_STRUCT__entry( + __string(device, dev_name(&qdev->pdev->dev)) + __field(__u32, len) + __field(__u32, dbc_id) + ), + TP_fast_assign( + __assign_str(device, dev_name(&qdev->pdev->dev)); + __entry->len = in_trans->hdr.len; + __entry->dbc_id = in_trans->dbc_id; + ), + TP_printk("%s len %u dbc_id %u", + __get_str(device), __entry->len, __entry->dbc_id) +); + +TRACE_EVENT(qaic_encode_status, + TP_PROTO(struct qaic_device *qdev, + struct qaic_manage_trans_status_to_dev *in_trans), + TP_ARGS(qdev, in_trans), + TP_STRUCT__entry( + __string(device, dev_name(&qdev->pdev->dev)) + __field(__u32, len) + ), + TP_fast_assign( + __assign_str(device, dev_name(&qdev->pdev->dev)); + __entry->len = in_trans->hdr.len; + ), + TP_printk("%s len %u", __get_str(device), __entry->len) +); + +TRACE_EVENT(qaic_decode_passthrough, + TP_PROTO(struct qaic_device *qdev, + struct qaic_manage_trans_passthrough *out_trans), + TP_ARGS(qdev, out_trans), + TP_STRUCT__entry( + __string(device, dev_name(&qdev->pdev->dev)) + __field(__u32, len) + ), + TP_fast_assign( + __assign_str(device, dev_name(&qdev->pdev->dev)); + __entry->len = out_trans->hdr.len; + ), + TP_printk("%s len %u", __get_str(device), __entry->len) +); + +TRACE_EVENT(qaic_decode_activate, + TP_PROTO(struct qaic_device *qdev, + struct qaic_manage_trans_activate_from_dev *out_trans), + TP_ARGS(qdev, out_trans), + TP_STRUCT__entry( + __string(device, dev_name(&qdev->pdev->dev)) + __field(__u32, len) + __field(__u32, status) + __field(__u32, dbc_id) + __field(__u64, options) + ), + TP_fast_assign( 
+ __assign_str(device, dev_name(&qdev->pdev->dev)); + __entry->len = out_trans->hdr.len; + __entry->status = out_trans->status; + __entry->dbc_id = out_trans->dbc_id; + __entry->options = out_trans->options; + ), + TP_printk("%s len %u status %u dbc_id %u options %llu", + __get_str(device), __entry->len, __entry->status, + __entry->dbc_id, __entry->options) +); + +TRACE_EVENT(qaic_decode_deactivate, + TP_PROTO(struct qaic_device *qdev, u32 dbc_id, u32 status), + TP_ARGS(qdev, dbc_id, status), + TP_STRUCT__entry( + __string(device, dev_name(&qdev->pdev->dev)) + __field(u32, dbc_id) + __field(u32, status) + ), + TP_fast_assign( + __assign_str(device, dev_name(&qdev->pdev->dev)); + __entry->dbc_id = dbc_id; + __entry->status = status; + ), + TP_printk("%s dbc_id %u status %u", + __get_str(device), __entry->dbc_id, __entry->status) +); + +TRACE_EVENT(qaic_decode_status, + TP_PROTO(struct qaic_device *qdev, + struct qaic_manage_trans_status_from_dev *out_trans), + TP_ARGS(qdev, out_trans), + TP_STRUCT__entry( + __string(device, dev_name(&qdev->pdev->dev)) + __field(__u32, len) + __field(__u16, major) + __field(__u16, minor) + __field(__u32, status) + __field(__u64, status_flags) + ), + TP_fast_assign( + __assign_str(device, dev_name(&qdev->pdev->dev)); + __entry->len = out_trans->hdr.len; + __entry->major = out_trans->major; + __entry->minor = out_trans->minor; + __entry->status = out_trans->status; + __entry->status_flags = out_trans->status_flags; + ), + TP_printk("%s len %u major %u minor %u status %u flags 0x%llx", + __get_str(device), __entry->len, __entry->major, __entry->minor, + __entry->status, __entry->status_flags) +); + +DECLARE_EVENT_CLASS(qaic_data_err, + TP_PROTO(struct qaic_device *qdev, const char *msg, int ret), + TP_ARGS(qdev, msg, ret), + TP_STRUCT__entry( + __string(device, dev_name(&qdev->pdev->dev)) + __string(msg, msg) + __field(int, ret) + ), + TP_fast_assign( + __assign_str(device, dev_name(&qdev->pdev->dev)); + __assign_str(msg, msg); + 
__entry->ret = ret; + ), + TP_printk("%s %s %d", __get_str(device), __get_str(msg), __entry->ret) +); + +DEFINE_EVENT(qaic_data_err, qaic_mem_err, + TP_PROTO(struct qaic_device *qdev, const char *msg, int ret), + TP_ARGS(qdev, msg, ret) +); + +DEFINE_EVENT(qaic_data_err, qaic_mmap_err, + TP_PROTO(struct qaic_device *qdev, const char *msg, int ret), + TP_ARGS(qdev, msg, ret) +); + +DEFINE_EVENT(qaic_data_err, qaic_exec_err, + TP_PROTO(struct qaic_device *qdev, const char *msg, int ret), + TP_ARGS(qdev, msg, ret) +); + +DEFINE_EVENT(qaic_data_err, qaic_wait_err, + TP_PROTO(struct qaic_device *qdev, const char *msg, int ret), + TP_ARGS(qdev, msg, ret) +); + +DEFINE_EVENT(qaic_data_err, qaic_stats_err, + TP_PROTO(struct qaic_device *qdev, const char *msg, int ret), + TP_ARGS(qdev, msg, ret) +); + +DEFINE_EVENT(qaic_data_err, qaic_util_err, + TP_PROTO(struct qaic_device *qdev, const char *msg, int ret), + TP_ARGS(qdev, msg, ret) +); + +DEFINE_EVENT(qaic_data_err, qaic_attach_err, + TP_PROTO(struct qaic_device *qdev, const char *msg, int ret), + TP_ARGS(qdev, msg, ret) +); + +DECLARE_EVENT_CLASS(qaic_data_err_1, + TP_PROTO(struct qaic_device *qdev, const char *msg, const char *msg_var1, + int ret, u64 var1), + TP_ARGS(qdev, msg, msg_var1, ret, var1), + TP_STRUCT__entry( + __string(device, dev_name(&qdev->pdev->dev)) + __string(msg, msg) + __string(msg_var1, msg_var1) + __field(int, ret) + __field(u64, var1) + ), + TP_fast_assign( + __assign_str(device, dev_name(&qdev->pdev->dev)); + __assign_str(msg, msg); + __assign_str(msg_var1, msg_var1); + __entry->ret = ret; + __entry->var1 = var1; + ), + TP_printk("%s %s Error:%d %s:%llu", + __get_str(device), __get_str(msg), __entry->ret, + __get_str(msg_var1), __entry->var1) +); + +DEFINE_EVENT(qaic_data_err_1, qaic_mem_err_1, + TP_PROTO(struct qaic_device *qdev, const char *msg, const char *msg_var1, + int ret, u64 var1), + TP_ARGS(qdev, msg, msg_var1, ret, var1) +); + +DEFINE_EVENT(qaic_data_err_1, qaic_mmap_err_1, + 
TP_PROTO(struct qaic_device *qdev, const char *msg, const char *msg_var1, + int ret, u64 var1), + TP_ARGS(qdev, msg, msg_var1, ret, var1) +); + +DEFINE_EVENT(qaic_data_err_1, qaic_attach_err_1, + TP_PROTO(struct qaic_device *qdev, const char *msg, const char *msg_var1, + int ret, u64 var1), + TP_ARGS(qdev, msg, msg_var1, ret, var1) +); + +DEFINE_EVENT(qaic_data_err_1, qaic_exec_err_1, + TP_PROTO(struct qaic_device *qdev, const char *msg, const char *msg_var1, + int ret, u64 var1), + TP_ARGS(qdev, msg, msg_var1, ret, var1) +); + +DEFINE_EVENT(qaic_data_err_1, qaic_wait_err_1, + TP_PROTO(struct qaic_device *qdev, const char *msg, const char *msg_var1, + int ret, u64 var1), + TP_ARGS(qdev, msg, msg_var1, ret, var1) +); + +DEFINE_EVENT(qaic_data_err_1, qaic_stats_err_1, + TP_PROTO(struct qaic_device *qdev, const char *msg, const char *msg_var1, + int ret, u64 var1), + TP_ARGS(qdev, msg, msg_var1, ret, var1) +); + +DECLARE_EVENT_CLASS(qaic_data_err_2, + TP_PROTO(struct qaic_device *qdev, const char *msg, const char *msg_var1, + const char *msg_var2, int ret, u64 var1, u64 var2), + TP_ARGS(qdev, msg, msg_var1, msg_var2, ret, var1, var2), + TP_STRUCT__entry( + __string(device, dev_name(&qdev->pdev->dev)) + __string(msg, msg) + __string(msg_var1, msg_var1) + __string(msg_var2, msg_var2) + __field(int, ret) + __field(u64, var1) + __field(u64, var2) + ), + TP_fast_assign( + __assign_str(device, dev_name(&qdev->pdev->dev)); + __assign_str(msg, msg); + __assign_str(msg_var1, msg_var1); + __assign_str(msg_var2, msg_var2); + __entry->ret = ret; + __entry->var1 = var1; + __entry->var2 = var2; + ), + TP_printk("%s %s Error:%d %s:%llu %s:%llu", + __get_str(device), __get_str(msg), __entry->ret, + __get_str(msg_var1), __entry->var1, + __get_str(msg_var2), __entry->var2) +); + +DEFINE_EVENT(qaic_data_err_2, qaic_mem_err_2, + TP_PROTO(struct qaic_device *qdev, const char *msg, const char *msg_var1, + const char *msg_var2, int ret, u64 var1, u64 var2), + TP_ARGS(qdev, msg, msg_var1, 
		msg_var2, ret, var1, var2)
+);
+
+DEFINE_EVENT(qaic_data_err_2, qaic_attach_err_2,
+	TP_PROTO(struct qaic_device *qdev, const char *msg, const char *msg_var1,
+		 const char *msg_var2, int ret, u64 var1, u64 var2),
+	TP_ARGS(qdev, msg, msg_var1, msg_var2, ret, var1, var2)
+);
+
+DEFINE_EVENT(qaic_data_err_2, qaic_exec_err_2,
+	TP_PROTO(struct qaic_device *qdev, const char *msg, const char *msg_var1,
+		 const char *msg_var2, int ret, u64 var1, u64 var2),
+	TP_ARGS(qdev, msg, msg_var1, msg_var2, ret, var1, var2)
+);
+
+DECLARE_EVENT_CLASS(qaic_ssr,
+	TP_PROTO(struct qaic_device *qdev, const char *msg),
+	TP_ARGS(qdev, msg),
+	TP_STRUCT__entry(
+		__string(device, dev_name(&qdev->pdev->dev))
+		__string(msg, msg)
+	),
+	TP_fast_assign(
+		__assign_str(device, dev_name(&qdev->pdev->dev));
+		__assign_str(msg, msg);
+	),
+	TP_printk("%s %s", __get_str(device), __get_str(msg))
+);
+
+DEFINE_EVENT(qaic_ssr, qaic_ssr_cmd,
+	TP_PROTO(struct qaic_device *qdev, const char *msg),
+	TP_ARGS(qdev, msg)
+);
+
+DEFINE_EVENT(qaic_ssr, qaic_ssr_event,
+	TP_PROTO(struct qaic_device *qdev, const char *msg),
+	TP_ARGS(qdev, msg)
+);
+
+DEFINE_EVENT(qaic_ssr, qaic_ssr_dump,
+	TP_PROTO(struct qaic_device *qdev, const char *msg),
+	TP_ARGS(qdev, msg)
+);
+
+#endif /* _TRACE_QAIC_H */
+#include

From patchwork Mon Aug 15 18:42:35 2022
X-Patchwork-Submitter: Jeffrey Hugo
X-Patchwork-Id: 597284
From: Jeffrey Hugo
Subject: [RFC PATCH 13/14] drm/qaic: Add qaic driver to the build system
Date: Mon,
15 Aug 2022 12:42:35 -0600
Message-ID: <1660588956-24027-14-git-send-email-quic_jhugo@quicinc.com>
List-ID: linux-arm-msm@vger.kernel.org

Add the infrastructure that allows the QAIC driver to be built.
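For context, with the Kconfig hunks from this patch applied, enabling the driver would come down to selecting the new symbols. A hypothetical .config fragment (the exact dependency set is taken from the Kconfig text in this patch; whether each is built-in or modular is the builder's choice):

```
# Hypothetical .config fragment for building qaic as a module
CONFIG_PCI=y
CONFIG_MMU=y
CONFIG_MHI_BUS=y
CONFIG_DRM=y
CONFIG_DRM_QAIC=m
# Optional HWMON-based telemetry, also added by this patch
CONFIG_HWMON=y
CONFIG_QAIC_HWMON=y
```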
Change-Id: I5b609b2e91b6a99939bdac35849813263ad874af
Signed-off-by: Jeffrey Hugo
---
 drivers/gpu/drm/Kconfig       |  2 ++
 drivers/gpu/drm/Makefile      |  1 +
 drivers/gpu/drm/qaic/Kconfig  | 33 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/qaic/Makefile | 17 +++++++++++++++++
 4 files changed, 53 insertions(+)
 create mode 100644 drivers/gpu/drm/qaic/Kconfig
 create mode 100644 drivers/gpu/drm/qaic/Makefile

diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index b1f22e4..b614940 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -390,6 +390,8 @@ source "drivers/gpu/drm/gud/Kconfig"

 source "drivers/gpu/drm/sprd/Kconfig"

+source "drivers/gpu/drm/qaic/Kconfig"
+
 config DRM_HYPERV
 	tristate "DRM Support for Hyper-V synthetic video device"
 	depends on DRM && PCI && MMU && HYPERV
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 301a44d..28b0f1b 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -135,3 +135,4 @@ obj-y += xlnx/
 obj-y += gud/
 obj-$(CONFIG_DRM_HYPERV) += hyperv/
 obj-$(CONFIG_DRM_SPRD) += sprd/
+obj-$(CONFIG_DRM_QAIC) += qaic/
diff --git a/drivers/gpu/drm/qaic/Kconfig b/drivers/gpu/drm/qaic/Kconfig
new file mode 100644
index 0000000..eca2bcb
--- /dev/null
+++ b/drivers/gpu/drm/qaic/Kconfig
@@ -0,0 +1,33 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Qualcomm Cloud AI accelerators driver
+#
+
+config DRM_QAIC
+	tristate "Qualcomm Cloud AI accelerators"
+	depends on PCI && HAS_IOMEM
+	depends on MHI_BUS
+	depends on DRM
+	depends on MMU
+	select CRC32
+	help
+	  Enables driver for Qualcomm's Cloud AI accelerator PCIe cards that are
+	  designed to accelerate Deep Learning inference workloads.
+
+	  The driver manages the PCIe devices and provides an IOCTL interface
+	  for users to submit workloads to the devices.
+
+	  If unsure, say N.
+
+	  To compile this driver as a module, choose M here: the
+	  module will be called qaic.
+
+config QAIC_HWMON
+	bool "Qualcomm Cloud AI accelerator telemetry"
+	depends on DRM_QAIC
+	depends on HWMON
+	help
+	  Enables telemetry via the HWMON interface for Qualcomm's Cloud AI
+	  accelerator PCIe cards.
+
+	  If unsure, say N.
diff --git a/drivers/gpu/drm/qaic/Makefile b/drivers/gpu/drm/qaic/Makefile
new file mode 100644
index 0000000..4a5daff
--- /dev/null
+++ b/drivers/gpu/drm/qaic/Makefile
@@ -0,0 +1,17 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Makefile for Qualcomm Cloud AI accelerators driver
+#
+
+obj-$(CONFIG_DRM_QAIC) := qaic.o
+
+qaic-y := \
+	qaic_drv.o \
+	mhi_controller.o \
+	qaic_control.o \
+	qaic_data.o \
+	qaic_debugfs.o \
+	qaic_telemetry.o \
+	qaic_ras.o \
+	qaic_ssr.o \
+	qaic_sysfs.o

From patchwork Mon Aug 15 18:42:36 2022
X-Patchwork-Submitter: Jeffrey Hugo
X-Patchwork-Id: 597577
From: Jeffrey Hugo
Subject: [RFC PATCH 14/14] MAINTAINERS: Add entry for QAIC driver
Date: Mon, 15 Aug 2022 12:42:36 -0600
Message-ID: <1660588956-24027-15-git-send-email-quic_jhugo@quicinc.com>
List-ID: linux-arm-msm@vger.kernel.org

Add MAINTAINERS entry for the Qualcomm Cloud AI 100 driver.

Change-Id: I149dbe34f1dbaeeca449b4ebf97f274c7484ed27
Signed-off-by: Jeffrey Hugo
---
 MAINTAINERS | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index cd0f68d..695654c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -15962,6 +15962,13 @@ F:	Documentation/devicetree/bindings/clock/qcom,*
 F:	drivers/clk/qcom/
 F:	include/dt-bindings/clock/qcom,*

+QUALCOMM CLOUD AI (QAIC) DRIVER
+M:	Jeffrey Hugo
+L:	linux-arm-msm@vger.kernel.org
+S:	Supported
+F:	drivers/gpu/drm/qaic/
+F:	include/uapi/drm/qaic_drm.h
+
 QUALCOMM CORE POWER REDUCTION (CPR) AVS DRIVER
 M:	Niklas Cassel
 L:	linux-pm@vger.kernel.org