From patchwork Mon Sep 19 05:52:40 2016
From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
To: xen-devel@lists.xen.org
Cc: andrew.cooper3@citrix.com, Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>, jbeulich@suse.com, sherry.hurwitz@amd.com
Date: Mon, 19 Sep 2016 00:52:40 -0500
Message-ID: <1474264368-4104-2-git-send-email-suravee.suthikulpanit@amd.com>
In-Reply-To: <1474264368-4104-1-git-send-email-suravee.suthikulpanit@amd.com>
References: <1474264368-4104-1-git-send-email-suravee.suthikulpanit@amd.com>
X-Mailer: git-send-email 1.9.1
Subject: [Xen-devel] [RFC PATCH 1/9] x86/HVM: Introduce struct hvm_pi_ops
List-Id: Xen developer discussion <xen-devel@lists.xen.org>

The current function pointers for managing hvm posted interrupt
can be used also by SVM AVIC. Therefore, this patch introduces the
struct hvm_pi_ops in the struct hvm_domain to hold them.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 xen/arch/x86/hvm/vmx/vmx.c         | 32 +++++++++----------
 xen/include/asm-x86/hvm/domain.h   | 63 ++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/hvm.h      |  4 +--
 xen/include/asm-x86/hvm/vmx/vmcs.h | 59 -----------------------------------
 4 files changed, 81 insertions(+), 77 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 2759e6f..8620697 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -204,12 +204,12 @@ void vmx_pi_hooks_assign(struct domain *d)
     if ( !iommu_intpost || !has_hvm_container_domain(d) )
         return;
 
-    ASSERT(!d->arch.hvm_domain.vmx.vcpu_block);
+    ASSERT(!d->arch.hvm_domain.pi_ops.vcpu_block);
 
-    d->arch.hvm_domain.vmx.vcpu_block = vmx_vcpu_block;
-    d->arch.hvm_domain.vmx.pi_switch_from = vmx_pi_switch_from;
-    d->arch.hvm_domain.vmx.pi_switch_to = vmx_pi_switch_to;
-    d->arch.hvm_domain.vmx.pi_do_resume = vmx_pi_do_resume;
+    d->arch.hvm_domain.pi_ops.vcpu_block = vmx_vcpu_block;
+    d->arch.hvm_domain.pi_ops.pi_switch_from = vmx_pi_switch_from;
+    d->arch.hvm_domain.pi_ops.pi_switch_to = vmx_pi_switch_to;
+    d->arch.hvm_domain.pi_ops.pi_do_resume = vmx_pi_do_resume;
 }
 
 /* This function is called when pcidevs_lock is held */
@@ -218,12 +218,12 @@ void vmx_pi_hooks_deassign(struct domain *d)
     if ( !iommu_intpost || !has_hvm_container_domain(d) )
         return;
 
-    ASSERT(d->arch.hvm_domain.vmx.vcpu_block);
+    ASSERT(d->arch.hvm_domain.pi_ops.vcpu_block);
 
-    d->arch.hvm_domain.vmx.vcpu_block = NULL;
-    d->arch.hvm_domain.vmx.pi_switch_from = NULL;
-    d->arch.hvm_domain.vmx.pi_switch_to = NULL;
-    d->arch.hvm_domain.vmx.pi_do_resume = NULL;
+    d->arch.hvm_domain.pi_ops.vcpu_block = NULL;
+    d->arch.hvm_domain.pi_ops.pi_switch_from = NULL;
+    d->arch.hvm_domain.pi_ops.pi_switch_to = NULL;
+    d->arch.hvm_domain.pi_ops.pi_do_resume = NULL;
 }
 
 static int
 vmx_domain_initialise(struct domain *d)
@@ -901,8 +901,8 @@ static void vmx_ctxt_switch_from(struct vcpu *v)
     vmx_restore_host_msrs();
     vmx_save_dr(v);
 
-    if ( v->domain->arch.hvm_domain.vmx.pi_switch_from )
-        v->domain->arch.hvm_domain.vmx.pi_switch_from(v);
+    if ( v->domain->arch.hvm_domain.pi_ops.pi_switch_from )
+        v->domain->arch.hvm_domain.pi_ops.pi_switch_from(v);
 }
 
 static void vmx_ctxt_switch_to(struct vcpu *v)
@@ -916,8 +916,8 @@
     vmx_restore_guest_msrs(v);
     vmx_restore_dr(v);
 
-    if ( v->domain->arch.hvm_domain.vmx.pi_switch_to )
-        v->domain->arch.hvm_domain.vmx.pi_switch_to(v);
+    if ( v->domain->arch.hvm_domain.pi_ops.pi_switch_to )
+        v->domain->arch.hvm_domain.pi_ops.pi_switch_to(v);
 }
 
@@ -3914,8 +3914,8 @@ void vmx_vmenter_helper(const struct cpu_user_regs *regs)
     struct hvm_vcpu_asid *p_asid;
     bool_t need_flush;
 
-    if ( curr->domain->arch.hvm_domain.vmx.pi_do_resume )
-        curr->domain->arch.hvm_domain.vmx.pi_do_resume(curr);
+    if ( curr->domain->arch.hvm_domain.pi_ops.pi_do_resume )
+        curr->domain->arch.hvm_domain.pi_ops.pi_do_resume(curr);
 
     if ( !cpu_has_vmx_vpid )
         goto out;
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index f34d784..779927b 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -72,6 +72,67 @@ struct hvm_ioreq_server {
     bool_t bufioreq_atomic;
 };
 
+struct hvm_pi_ops {
+    /*
+     * To handle posted interrupts correctly, we need to set the following
+     * state:
+     *
+     * * The PI notification vector (NV)
+     * * The PI notification destination processor (NDST)
+     * * The PI "suppress notification" bit (SN)
+     * * The vcpu pi "blocked" list
+     *
+     * If a VM is currently running, we want the PI delivered to the guest vcpu
+     * on the proper pcpu (NDST = v->processor, SN clear).
+     *
+     * If the vm is blocked, we want the PI delivered to Xen so that it can
+     * wake it up (SN clear, NV = pi_wakeup_vector, vcpu on block list).
+     *
+     * If the VM is currently either preempted or offline (i.e., not running
+     * because of some reason other than blocking waiting for an interrupt),
+     * there's nothing Xen can do -- we want the interrupt pending bit set in
+     * the guest, but we don't want to bother Xen with an interrupt (SN clear).
+     *
+     * There's a brief window of time between vmx_intr_assist() and checking
+     * softirqs where if an interrupt comes in it may be lost; so we need Xen
+     * to get an interrupt and raise a softirq so that it will go through the
+     * vmx_intr_assist() path again (SN clear, NV = posted_interrupt).
+     *
+     * The way we implement this now is by looking at what needs to happen on
+     * the following runstate transitions:
+     *
+     * A: runnable -> running
+     *  - SN = 0
+     *  - NDST = v->processor
+     * B: running -> runnable
+     *  - SN = 1
+     * C: running -> blocked
+     *  - NV = pi_wakeup_vector
+     *  - Add vcpu to blocked list
+     * D: blocked -> runnable
+     *  - NV = posted_intr_vector
+     *  - Take vcpu off blocked list
+     *
+     * For transitions A and B, we add hooks into vmx_ctxt_switch_{from,to}
+     * paths.
+     *
+     * For transition C, we add a new arch hook, arch_vcpu_block(), which is
+     * called from vcpu_block() and vcpu_do_poll().
+     *
+     * For transition D, rather than add an extra arch hook on vcpu_wake, we
+     * add a hook on the vmentry path which checks to see if either of the two
+     * actions need to be taken.
+     *
+     * These hooks only need to be called when the domain in question actually
+     * has a physical device assigned to it, so we set and clear the callbacks
+     * as appropriate when device assignment changes.
+     */
+    void (*vcpu_block) (struct vcpu *);
+    void (*pi_switch_from) (struct vcpu *v);
+    void (*pi_switch_to) (struct vcpu *v);
+    void (*pi_do_resume) (struct vcpu *v);
+};
+
 struct hvm_domain {
     /* Guest page range used for non-default ioreq servers */
     struct {
@@ -148,6 +209,8 @@ struct hvm_domain {
         struct list_head list;
     } write_map;
 
+    struct hvm_pi_ops pi_ops;
+
     union {
         struct vmx_domain vmx;
         struct svm_domain svm;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 81b60d5..c832d9a 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -621,8 +621,8 @@ unsigned long hvm_cr4_guest_reserved_bits(const struct vcpu *v, bool_t restore);
     struct vcpu *v_ = (v);                                      \
     struct domain *d_ = v_->domain;                             \
     if ( has_hvm_container_domain(d_) &&                        \
-         (cpu_has_vmx && d_->arch.hvm_domain.vmx.vcpu_block) )  \
-        d_->arch.hvm_domain.vmx.vcpu_block(v_);                 \
+         (d_->arch.hvm_domain.pi_ops.vcpu_block) )              \
+        d_->arch.hvm_domain.pi_ops.vcpu_block(v_);              \
 })
 
 #endif /* __ASM_X86_HVM_HVM_H__ */
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 997f4f5..4ec8b08 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -77,65 +77,6 @@ struct vmx_domain {
     unsigned long apic_access_mfn;
     /* VMX_DOMAIN_* */
     unsigned int status;
-
-    /*
-     * To handle posted interrupts correctly, we need to set the following
-     * state:
-     *
-     * * The PI notification vector (NV)
-     * * The PI notification destination processor (NDST)
-     * * The PI "suppress notification" bit (SN)
-     * * The vcpu pi "blocked" list
-     *
-     * If a VM is currently running, we want the PI delivered to the guest vcpu
-     * on the proper pcpu (NDST = v->processor, SN clear).
-     *
-     * If the vm is blocked, we want the PI delivered to Xen so that it can
-     * wake it up (SN clear, NV = pi_wakeup_vector, vcpu on block list).
-     *
-     * If the VM is currently either preempted or offline (i.e., not running
-     * because of some reason other than blocking waiting for an interrupt),
-     * there's nothing Xen can do -- we want the interrupt pending bit set in
-     * the guest, but we don't want to bother Xen with an interrupt (SN clear).
-     *
-     * There's a brief window of time between vmx_intr_assist() and checking
-     * softirqs where if an interrupt comes in it may be lost; so we need Xen
-     * to get an interrupt and raise a softirq so that it will go through the
-     * vmx_intr_assist() path again (SN clear, NV = posted_interrupt).
-     *
-     * The way we implement this now is by looking at what needs to happen on
-     * the following runstate transitions:
-     *
-     * A: runnable -> running
-     *  - SN = 0
-     *  - NDST = v->processor
-     * B: running -> runnable
-     *  - SN = 1
-     * C: running -> blocked
-     *  - NV = pi_wakeup_vector
-     *  - Add vcpu to blocked list
-     * D: blocked -> runnable
-     *  - NV = posted_intr_vector
-     *  - Take vcpu off blocked list
-     *
-     * For transitions A and B, we add hooks into vmx_ctxt_switch_{from,to}
-     * paths.
-     *
-     * For transition C, we add a new arch hook, arch_vcpu_block(), which is
-     * called from vcpu_block() and vcpu_do_poll().
-     *
-     * For transition D, rather than add an extra arch hook on vcpu_wake, we
-     * add a hook on the vmentry path which checks to see if either of the two
-     * actions need to be taken.
-     *
-     * These hooks only need to be called when the domain in question actually
-     * has a physical device assigned to it, so we set and clear the callbacks
-     * as appropriate when device assignment changes.
-     */
-    void (*vcpu_block) (struct vcpu *);
-    void (*pi_switch_from) (struct vcpu *v);
-    void (*pi_switch_to) (struct vcpu *v);
-    void (*pi_do_resume) (struct vcpu *v);
 };
 
 struct pi_desc {