From patchwork Mon Nov  6 13:35:16 2017
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 118040
From: Arnd Bergmann <arnd@arndb.de>
To: Sathya Prakash, Chaitra P B, Suganath Prabu Subramani,
	"James E.J. Bottomley", "Martin K. Petersen"
Cc: Arnd Bergmann, Tomas Henzl, Sreekanth Reddy, Hannes Reinecke,
	Romain Perier, James Bottomley, Bart Van Assche,
	MPT-FusionLinux.pdl@broadcom.com, linux-scsi@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH] mpt3sas: fix dma_addr_t casts
Date: Mon, 6 Nov 2017 14:35:16 +0100
Message-Id: <20171106133540.506572-1-arnd@arndb.de>

The newly added base_make_prp_nvme function triggers a build warning
on some 32-bit configurations:

drivers/scsi/mpt3sas/mpt3sas_base.c: In function 'base_make_prp_nvme':
drivers/scsi/mpt3sas/mpt3sas_base.c:1664:13: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
   msg_phys = (dma_addr_t)mpt3sas_base_get_pcie_sgl_dma(ioc, smid);

After taking a closer look, I
found that the problem is that the new code mixes up pointers and
dma_addr_t values unnecessarily. This changes it to use the correct
types consistently, which lets us get rid of a lot of type casts in
the process. I'm also renaming some variables to avoid confusion
between physical and dma address spaces that are often distinct.

Fixes: 016d5c35e278 ("scsi: mpt3sas: SGL to PRP Translation for I/Os to NVMe devices")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 drivers/scsi/mpt3sas/mpt3sas_base.c | 62 +++++++++++++++++--------------------
 drivers/scsi/mpt3sas/mpt3sas_base.h |  2 +-
 2 files changed, 30 insertions(+), 34 deletions(-)

-- 
2.9.0

diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
index 7a3f4d14f260..8027de465d47 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
+++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
@@ -1437,11 +1437,11 @@ _base_build_nvme_prp(struct MPT3SAS_ADAPTER *ioc, u16 smid,
 	size_t data_in_sz)
 {
 	int prp_size = NVME_PRP_SIZE;
-	__le64 *prp_entry, *prp1_entry, *prp2_entry, *prp_entry_phys;
-	__le64 *prp_page, *prp_page_phys;
+	__le64 *prp_entry, *prp1_entry, *prp2_entry;
+	__le64 *prp_page;
+	dma_addr_t prp_entry_dma, prp_page_dma, dma_addr;
 	u32 offset, entry_len;
 	u32 page_mask_result, page_mask;
-	dma_addr_t paddr;
 	size_t length;
 
 	/*
@@ -1465,7 +1465,7 @@ _base_build_nvme_prp(struct MPT3SAS_ADAPTER *ioc, u16 smid,
 	 * contiguous memory.
 	 */
 	prp_page = (__le64 *)mpt3sas_base_get_pcie_sgl(ioc, smid);
-	prp_page_phys = (__le64 *)mpt3sas_base_get_pcie_sgl_dma(ioc, smid);
+	prp_page_dma = mpt3sas_base_get_pcie_sgl_dma(ioc, smid);
 
 	/*
 	 * Check if we are within 1 entry of a page boundary we don't
@@ -1476,21 +1476,21 @@ _base_build_nvme_prp(struct MPT3SAS_ADAPTER *ioc, u16 smid,
 	if (!page_mask_result) {
 		/* Bump up to next page boundary. */
 		prp_page = (__le64 *)((u8 *)prp_page + prp_size);
-		prp_page_phys = (__le64 *)((u8 *)prp_page_phys + prp_size);
+		prp_page_dma = prp_page_dma + prp_size;
 	}
 
 	/*
 	 * Set PRP physical pointer, which initially points to the current PRP
 	 * DMA memory page.
 	 */
-	prp_entry_phys = prp_page_phys;
+	prp_entry_dma = prp_page_dma;
 
 	/* Get physical address and length of the data buffer. */
 	if (data_in_sz) {
-		paddr = data_in_dma;
+		dma_addr = data_in_dma;
 		length = data_in_sz;
 	} else {
-		paddr = data_out_dma;
+		dma_addr = data_out_dma;
 		length = data_out_sz;
 	}
 
@@ -1500,8 +1500,7 @@ _base_build_nvme_prp(struct MPT3SAS_ADAPTER *ioc, u16 smid,
 		 * Check if we need to put a list pointer here if we are at
 		 * page boundary - prp_size (8 bytes).
 		 */
-		page_mask_result =
-		    (uintptr_t)((u8 *)prp_entry_phys + prp_size) & page_mask;
+		page_mask_result = (prp_entry_dma + prp_size) & page_mask;
 		if (!page_mask_result) {
 			/*
 			 * This is the last entry in a PRP List, so we need to
@@ -1515,13 +1514,13 @@ _base_build_nvme_prp(struct MPT3SAS_ADAPTER *ioc, u16 smid,
 			 * contiguous, no need to get a new page - it's
 			 * just the next address.
 			 */
-			prp_entry_phys++;
-			*prp_entry = cpu_to_le64((uintptr_t)prp_entry_phys);
+			prp_entry_dma++;
+			*prp_entry = cpu_to_le64(prp_entry_dma);
 			prp_entry++;
 		}
 
 		/* Need to handle if entry will be part of a page. */
-		offset = (u32)paddr & page_mask;
+		offset = dma_addr & page_mask;
 		entry_len = ioc->page_size - offset;
 
 		if (prp_entry == prp1_entry) {
@@ -1529,7 +1528,7 @@ _base_build_nvme_prp(struct MPT3SAS_ADAPTER *ioc, u16 smid,
 			 * Must fill in the first PRP pointer (PRP1) before
 			 * moving on.
 			 */
-			*prp1_entry = cpu_to_le64((u64)paddr);
+			*prp1_entry = cpu_to_le64(dma_addr);
 
 			/*
 			 * Now point to the second PRP entry within the
@@ -1549,8 +1548,7 @@ _base_build_nvme_prp(struct MPT3SAS_ADAPTER *ioc, u16 smid,
 				 * list will start at the beginning of the
 				 * contiguous buffer.
 				 */
-				*prp2_entry =
-				    cpu_to_le64((uintptr_t)prp_entry_phys);
+				*prp2_entry = cpu_to_le64(prp_entry_dma);
 
 				/*
 				 * The next PRP Entry will be the start of the
@@ -1562,7 +1560,7 @@ _base_build_nvme_prp(struct MPT3SAS_ADAPTER *ioc, u16 smid,
 				 * After this, the PRP Entries are complete.
 				 * This command uses 2 PRP's and no PRP list.
 				 */
-				*prp2_entry = cpu_to_le64((u64)paddr);
+				*prp2_entry = cpu_to_le64(dma_addr);
 			}
 		} else {
 			/*
@@ -1572,16 +1570,16 @@ _base_build_nvme_prp(struct MPT3SAS_ADAPTER *ioc, u16 smid,
 			 * all remaining PRP entries in a PRP List, one per
 			 * each time through the loop.
 			 */
-			*prp_entry = cpu_to_le64((u64)paddr);
+			*prp_entry = cpu_to_le64(dma_addr);
 			prp_entry++;
-			prp_entry_phys++;
+			prp_entry_dma++;
 		}
 
 		/*
 		 * Bump the phys address of the command's data buffer by the
 		 * entry_len.
 		 */
-		paddr += entry_len;
+		dma_addr += entry_len;
 
 		/* Decrement length accounting for last partial page. */
 		if (entry_len > length)
@@ -1610,11 +1608,10 @@ base_make_prp_nvme(struct MPT3SAS_ADAPTER *ioc,
 	Mpi25SCSIIORequest_t *mpi_request, u16 smid, int sge_count)
 {
-	int sge_len, offset, num_prp_in_chain = 0;
+	int sge_len, num_prp_in_chain = 0;
 	Mpi25IeeeSgeChain64_t *main_chain_element, *ptr_first_sgl;
 	__le64 *curr_buff;
-	dma_addr_t msg_phys;
-	u64 sge_addr;
+	dma_addr_t msg_dma, sge_addr, offset;
 	u32 page_mask, page_mask_result;
 	struct scatterlist *sg_scmd;
 	u32 first_prp_len;
@@ -1661,9 +1658,9 @@ base_make_prp_nvme(struct MPT3SAS_ADAPTER *ioc,
 	 * page (4k).
 	 */
 	curr_buff = mpt3sas_base_get_pcie_sgl(ioc, smid);
-	msg_phys = (dma_addr_t)mpt3sas_base_get_pcie_sgl_dma(ioc, smid);
+	msg_dma = mpt3sas_base_get_pcie_sgl_dma(ioc, smid);
 
-	main_chain_element->Address = cpu_to_le64(msg_phys);
+	main_chain_element->Address = cpu_to_le64(msg_dma);
 	main_chain_element->NextChainOffset = 0;
 	main_chain_element->Flags = MPI2_IEEE_SGE_FLAGS_CHAIN_ELEMENT |
 	    MPI2_IEEE_SGE_FLAGS_SYSTEM_ADDR |
@@ -1675,7 +1672,7 @@ base_make_prp_nvme(struct MPT3SAS_ADAPTER *ioc,
 	sge_addr = sg_dma_address(sg_scmd);
 	sge_len = sg_dma_len(sg_scmd);
 
-	offset = (u32)(sge_addr & page_mask);
+	offset = sge_addr & page_mask;
 	first_prp_len = nvme_pg_size - offset;
 
 	ptr_first_sgl->Address = cpu_to_le64(sge_addr);
@@ -1693,7 +1690,7 @@ base_make_prp_nvme(struct MPT3SAS_ADAPTER *ioc,
 	}
 
 	for (;;) {
-		offset = (u32)(sge_addr & page_mask);
+		offset = sge_addr & page_mask;
 
 		/* Put PRP pointer due to page boundary*/
 		page_mask_result = (uintptr_t)(curr_buff + 1) & page_mask;
@@ -1701,15 +1698,15 @@ base_make_prp_nvme(struct MPT3SAS_ADAPTER *ioc,
 			scmd_printk(KERN_NOTICE, scmd,
 			    "page boundary curr_buff: 0x%p\n",
 			    curr_buff);
-			msg_phys += 8;
-			*curr_buff = cpu_to_le64(msg_phys);
+			msg_dma += 8;
+			*curr_buff = cpu_to_le64(msg_dma);
 			curr_buff++;
 			num_prp_in_chain++;
 		}
 
 		*curr_buff = cpu_to_le64(sge_addr);
 		curr_buff++;
-		msg_phys += 8;
+		msg_dma += 8;
 		num_prp_in_chain++;
 
 		sge_addr += nvme_pg_size;
@@ -2755,11 +2752,10 @@ mpt3sas_base_get_pcie_sgl(struct MPT3SAS_ADAPTER *ioc, u16 smid)
  *
  * Returns phys pointer to the address of the PCIe buffer.
  */
-void *
+dma_addr_t
 mpt3sas_base_get_pcie_sgl_dma(struct MPT3SAS_ADAPTER *ioc, u16 smid)
 {
-	return (void *)(uintptr_t)
-		(ioc->scsi_lookup[smid - 1].pcie_sg_list.pcie_sgl_dma);
+	return ioc->scsi_lookup[smid - 1].pcie_sg_list.pcie_sgl_dma;
 }
 
 /**
diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.h b/drivers/scsi/mpt3sas/mpt3sas_base.h
index 2d7d44281cb7..60f42ca3954f 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_base.h
+++ b/drivers/scsi/mpt3sas/mpt3sas_base.h
@@ -1395,7 +1395,7 @@ void *mpt3sas_base_get_sense_buffer(struct MPT3SAS_ADAPTER *ioc, u16 smid);
 __le32 mpt3sas_base_get_sense_buffer_dma(struct MPT3SAS_ADAPTER *ioc,
 	u16 smid);
 void *mpt3sas_base_get_pcie_sgl(struct MPT3SAS_ADAPTER *ioc, u16 smid);
-void *mpt3sas_base_get_pcie_sgl_dma(struct MPT3SAS_ADAPTER *ioc, u16 smid);
+dma_addr_t mpt3sas_base_get_pcie_sgl_dma(struct MPT3SAS_ADAPTER *ioc, u16 smid);
 void mpt3sas_base_sync_reply_irqs(struct MPT3SAS_ADAPTER *ioc);
 
 /* hi-priority queue */