Message ID | 20241010193705.10362-1-mario.limonciello@amd.com
Series     | Add support for AMD hardware feedback interface
On Thu, Oct 10, 2024 at 02:36:53PM -0500, Mario Limonciello wrote:

> +======================================================================
> +Hardware Feedback Interface For Hetero Core Scheduling On AMD Platform
> +======================================================================
> +
> +:Copyright (C) 2024 Advanced Micro Devices, Inc. All Rights Reserved.
> +
> +:Author: Perry Yuan <perry.yuan@amd.com>

Don't forget to correct the copyright reST field:

diff --git a/Documentation/arch/x86/amd-hfi.rst b/Documentation/arch/x86/amd-hfi.rst
index 5ada5c5b79f4b5..82811be984799d 100644
--- a/Documentation/arch/x86/amd-hfi.rst
+++ b/Documentation/arch/x86/amd-hfi.rst
@@ -4,7 +4,7 @@
 Hardware Feedback Interface For Hetero Core Scheduling On AMD Platform
 ======================================================================
 
-:Copyright (C) 2024 Advanced Micro Devices, Inc. All Rights Reserved.
+:Copyright: 2024 Advanced Micro Devices, Inc. All Rights Reserved.
 
 :Author: Perry Yuan <perry.yuan@amd.com>
 

> +
> +Overview
> +--------
> +
> +AMD Heterogeneous Core implementations are comprised of more than one
> +architectural class and CPUs are comprised of cores of various efficiency
> +and power capabilities. Power management strategies must be designed to accommodate
> +the complexities introduced by incorporating different core types.
> +Heterogeneous systems can also extend to more than two architectural classes as well.
> +The purpose of the scheduling feedback mechanism is to provide information to
> +the operating system scheduler in real time such that the scheduler can direct
> +threads to the optimal core.
> +
> +``Classic cores`` are generally more performant and ``Dense cores`` are generally more
> +efficient.
> +The goal of AMD's heterogeneous architecture is to attain power benefit by sending
> +background thread to the dense cores while sending high priority threads to the classic
> +cores. From a performance perspective, sending background threads to dense cores can free
> +up power headroom and allow the classic cores to optimally service demanding threads.
> +Furthermore, the area optimized nature of the dense cores allows for an increasing
> +number of physical cores. This improved core density will have positive multithreaded
> +performance impact.
> +
> <snipped>...
> +
> +The mechanism used to trigger a table update like below events:
> + * Thermal Stress Events
> + * Silent Compute
> + * Extreme Low Battery Scenarios

What about the wording below?

---- >8 ----
diff --git a/Documentation/arch/x86/amd-hfi.rst b/Documentation/arch/x86/amd-hfi.rst
index 351641ce28213c..5ada5c5b79f4b5 100644
--- a/Documentation/arch/x86/amd-hfi.rst
+++ b/Documentation/arch/x86/amd-hfi.rst
@@ -12,16 +12,15 @@ Overview
 --------
 
 AMD Heterogeneous Core implementations are comprised of more than one
-architectural class and CPUs are comprised of cores of various efficiency
-and power capabilities. Power management strategies must be designed to accommodate
-the complexities introduced by incorporating different core types.
-Heterogeneous systems can also extend to more than two architectural classes as well.
-The purpose of the scheduling feedback mechanism is to provide information to
-the operating system scheduler in real time such that the scheduler can direct
-threads to the optimal core.
+architectural class and CPUs are comprised of cores of various efficiency and
+power capabilities: performance-oriented *classic cores* and power-efficient
+*dense cores*. As such, power management strategies must be designed to
+accommodate the complexities introduced by incorporating different core types.
+Heterogeneous systems can also extend to more than two architectural classes as
+well. The purpose of the scheduling feedback mechanism is to provide
+information to the operating system scheduler in real time such that the
+scheduler can direct threads to the optimal core.
 
-``Classic cores`` are generally more performant and ``Dense cores`` are generally more
-efficient.
 The goal of AMD's heterogeneous architecture is to attain power benefit by sending
 background thread to the dense cores while sending high priority threads to the classic
 cores. From a performance perspective, sending background threads to dense cores can free
@@ -78,7 +77,8 @@ Power Management FW is responsible for detecting events that would require a
 reordering of the performance and efficiency ranking. Table updates would
 happen relatively infrequently and occur on the time scale of seconds or more.
 
-The mechanism used to trigger a table update like below events:
+The following events trigger a table update:
+
  * Thermal Stress Events
  * Silent Compute
  * Extreme Low Battery Scenarios

> diff --git a/Documentation/arch/x86/index.rst b/Documentation/arch/x86/index.rst
> index 8ac64d7de4dc..7f47229f3104 100644
> --- a/Documentation/arch/x86/index.rst
> +++ b/Documentation/arch/x86/index.rst
> @@ -43,3 +43,4 @@ x86-specific Documentation
>     features
>     elf_auxvec
>     xstate
> +   amd_hfi

Sphinx reports a mismatched toctree entry name:

Documentation/arch/x86/index.rst:7: WARNING: toctree contains reference to nonexisting document 'arch/x86/amd_hfi'

I have to fix it up:

---- >8 ----
diff --git a/Documentation/arch/x86/index.rst b/Documentation/arch/x86/index.rst
index 7f47229f3104e1..56f2923f52597c 100644
--- a/Documentation/arch/x86/index.rst
+++ b/Documentation/arch/x86/index.rst
@@ -43,4 +43,4 @@ x86-specific Documentation
    features
    elf_auxvec
    xstate
-   amd_hfi
+   amd-hfi

Thanks.
On Thu, 10 Oct 2024, Mario Limonciello wrote:

> From: Perry Yuan <Perry.Yuan@amd.com>
>
> When `amd_hfi` driver is loaded, it will use PCCT subspace type 4 table
> to retrieve the shared memory address which contains the CPU core ranking
> table. This table includes a header that specifies the number of ranking
> data entries to be parsed and rank each CPU core with the Performance and
> Energy Efficiency capability as implemented by the CPU power management
> firmware.
>
> Once the table has been parsed, each CPU is assigned a ranking score
> within its class. Subsequently, when the scheduler selects cores, it
> chooses from the ranking list based on the assigned scores in each class,
> thereby ensuring the optimal selection of CPU cores according to their
> predefined classifications and priorities.
>
> Signed-off-by: Perry Yuan <Perry.Yuan@amd.com>
> Co-developed-by: Mario Limonciello <mario.limonciello@amd.com>
> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
> ---
> v2:
>  * Rework amd_hfi_fill_metatadata to directly use structure instead of
>    pointer math.
> ---
>  drivers/platform/x86/amd/hfi/hfi.c | 215 ++++++++++++++++++++++++++++-
>  1 file changed, 212 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/platform/x86/amd/hfi/hfi.c b/drivers/platform/x86/amd/hfi/hfi.c
> index da2e667107e8..10651399cf75 100644
> --- a/drivers/platform/x86/amd/hfi/hfi.c
> +++ b/drivers/platform/x86/amd/hfi/hfi.c
> @@ -18,22 +18,78 @@
>  #include <linux/io.h>
>  #include <linux/kernel.h>
>  #include <linux/module.h>
> +#include <linux/mailbox_client.h>
>  #include <linux/mutex.h>
> +#include <linux/percpu-defs.h>
>  #include <linux/platform_device.h>
>  #include <linux/printk.h>
>  #include <linux/smp.h>
>  #include <linux/string.h>
> +#include <linux/topology.h>
> +#include <linux/workqueue.h>
> +
> +#include <asm/cpu_device_id.h>
> +
> +#include <acpi/pcc.h>
> +#include <acpi/cppc_acpi.h>
>
>  #define AMD_HFI_DRIVER		"amd_hfi"
> +#define AMD_HFI_MAILBOX_COUNT	1
> +#define AMD_HETERO_RANKING_TABLE_VER	2
> +
>  #define AMD_HETERO_CPUID_27	0x80000027
> +
>  static struct platform_device *device;
>
> +/**
> + * struct amd_shmem_info - Shared memory table for AMD HFI
> + *
> + * @signature:	The PCC signature. The signature of a subspace is computed by
> + *		a bitwise of the value 0x50434300 with the subspace ID.
> + * @flags:	Notify on completion
> + * @length:	Length of payload being transmitted including command field
> + * @command:	Command being sent over the subspace
> + * @version_number:		Version number of the table
> + * @n_logical_processors:	Number of logical processors
> + * @n_capabilities:		Number of ranking dimensions (performance, efficiency, etc)
> + * @table_update_context:	Command being sent over the subspace
> + * @n_bitmaps:	Number of 32-bit bitmaps to enumerate all the APIC IDs
> + *		This is based on the maximum APIC ID enumerated in the system
> + * @reserved:	24 bit spare
> + * @table_data:	Bit Map(s) of enabled logical processors
> + *		Followed by the ranking data for each logical processor
> + */
> +struct amd_shmem_info {
> +	struct acpi_pcct_ext_pcc_shared_memory header;
> +	u32	version_number		:8,
> +		n_logical_processors	:8,
> +		n_capabilities		:8,
> +		table_update_context	:8;
> +	u32	n_bitmaps		:8,
> +		reserved		:24;
> +	u32	table_data[];
> +} __packed;
> +
>  struct amd_hfi_data {
>  	const char	*name;
>  	struct device	*dev;
>  	struct mutex	lock;
> +
> +	/* PCCT table related*/
> +	struct pcc_mbox_chan	*pcc_chan;
> +	void __iomem		*pcc_comm_addr;
> +	struct acpi_subtable_header	*pcct_entry;
> +	struct amd_shmem_info	*shmem;
>  };
>
> +/**
> + * struct amd_hfi_classes - HFI class capabilities per CPU
> + * @perf:	Performance capability
> + * @eff:	Power efficiency capability
> + *
> + * Capabilities of a logical processor in the ranking table. These capabilities
> + * are unitless and specific to each HFI class.
> + */
>  struct amd_hfi_classes {
>  	u32	perf;
>  	u32	eff;
> @@ -42,23 +98,105 @@ struct amd_hfi_classes {
>  /**
>   * struct amd_hfi_cpuinfo - HFI workload class info per CPU
>   * @cpu:	cpu index
> + * @apic_id:	apic id of the current cpu
>   * @cpus:	mask of cpus associated with amd_hfi_cpuinfo
>   * @class_index:	workload class ID index
>   * @nr_class:	max number of workload class supported
> + * @ipcc_scores:	ipcc scores for each class
>   * @amd_hfi_classes:	current cpu workload class ranking data
>   *
>   * Parameters of a logical processor linked with hardware feedback class
>   */
>  struct amd_hfi_cpuinfo {
>  	int		cpu;
> +	u32		apic_id;
>  	cpumask_var_t	cpus;
>  	s16		class_index;
>  	u8		nr_class;
> +	int		*ipcc_scores;
>  	struct amd_hfi_classes	*amd_hfi_classes;
>  };
>
>  static DEFINE_PER_CPU(struct amd_hfi_cpuinfo, amd_hfi_cpuinfo) = {.class_index = -1};
>
> +static int find_cpu_index_by_apicid(unsigned int target_apicid)
> +{
> +	int cpu_index;
> +
> +	for_each_possible_cpu(cpu_index) {
> +		struct cpuinfo_x86 *info = &cpu_data(cpu_index);
> +
> +		if (info->topo.apicid == target_apicid) {
> +			pr_debug("match APIC id %d for CPU index: %d",

Missing \n

> +				 info->topo.apicid, cpu_index);
> +			return cpu_index;
> +		}
> +	}
> +
> +	return -ENODEV;
> +}
> +
> +static int amd_hfi_fill_metadata(struct amd_hfi_data *amd_hfi_data)
> +{
> +	struct acpi_pcct_ext_pcc_slave *pcct_ext =
> +		(struct acpi_pcct_ext_pcc_slave *)amd_hfi_data->pcct_entry;
> +	void __iomem *pcc_comm_addr;
> +
> +	pcc_comm_addr = acpi_os_ioremap(amd_hfi_data->pcc_chan->shmem_base_addr,
> +					amd_hfi_data->pcc_chan->shmem_size);
> +	if (!pcc_comm_addr) {
> +		pr_err("failed to ioremap PCC common region mem\n");
> +		return -ENOMEM;
> +	}
> +
> +	memcpy_fromio(amd_hfi_data->shmem, pcc_comm_addr, pcct_ext->length);
> +	iounmap(pcc_comm_addr);
> +
> +	if (amd_hfi_data->shmem->header.signature != PCC_SIGNATURE) {
> +		pr_err("Invalid signature in shared memory\n");
> +		return -EINVAL;
> +	}
> +	if (amd_hfi_data->shmem->version_number != AMD_HETERO_RANKING_TABLE_VER) {
> +		pr_err("Invalid veresion %d\n", amd_hfi_data->shmem->version_number);

version

> +		return -EINVAL;
> +	}
> +
> +	for (u32 i = 0; i < amd_hfi_data->shmem->n_bitmaps; i++) {
> +		u32 bitmap = amd_hfi_data->shmem->table_data[i];
> +
> +		for (u32 j = 0; j < BITS_PER_TYPE(u32); j++) {

Are these u32 really the types you want to use for the loop vars, and if so, why?

> +			struct amd_hfi_cpuinfo *info;
> +			int apic_id = i * BITS_PER_TYPE(u32) + j;
> +			int cpu_index;
> +
> +			if (!(bitmap & BIT(j)))
> +				continue;
> +
> +			cpu_index = find_cpu_index_by_apicid(apic_id);
> +			if (cpu_index < 0) {
> +				pr_warn("APIC ID %d not found\n", apic_id);
> +				continue;
> +			}
> +
> +			info = per_cpu_ptr(&amd_hfi_cpuinfo, cpu_index);
> +			info->apic_id = apic_id;
> +
> +			/* Fill the ranking data for each logical processor */
> +			info = per_cpu_ptr(&amd_hfi_cpuinfo, cpu_index);
> +			for (int k = 0; k < info->nr_class; k++) {

unsigned int

> +				u32 *table = amd_hfi_data->shmem->table_data +
> +					     amd_hfi_data->shmem->n_bitmaps +
> +					     i * info->nr_class;
> +
> +				info->amd_hfi_classes[k].eff = table[apic_id + 2 * k];
> +				info->amd_hfi_classes[k].perf = table[apic_id + 2 * k + 1];
> +			}
> +		}
> +	}
> +
> +	return 0;
> +}
> +
>  static int amd_hfi_alloc_class_data(struct platform_device *pdev)
>  {
>  	struct amd_hfi_cpuinfo *hfi_cpuinfo;
> @@ -68,8 +206,7 @@ static int amd_hfi_alloc_class_data(struct platform_device *pdev)
>
>  	nr_class_id = cpuid_eax(AMD_HETERO_CPUID_27);
>  	if (nr_class_id < 0 || nr_class_id > 255) {
> -		dev_warn(dev, "failed to get supported class number from CPUID %d\n",
> -			 AMD_HETERO_CPUID_27);
> +		dev_warn(dev, "failed to get number of supported classes\n");

This message was added in the previous patch and now immediately changed.

>  		return -EINVAL;
>  	}
>
> @@ -79,7 +216,10 @@ static int amd_hfi_alloc_class_data(struct platform_device *pdev)
>  			sizeof(struct amd_hfi_classes), GFP_KERNEL);
>  		if (!hfi_cpuinfo->amd_hfi_classes)
>  			return -ENOMEM;
> -
> +		hfi_cpuinfo->ipcc_scores = devm_kcalloc(dev, nr_class_id,
> +							sizeof(int), GFP_KERNEL);
> +		if (!hfi_cpuinfo->ipcc_scores)
> +			return -ENOMEM;
>  		hfi_cpuinfo->nr_class = nr_class_id;
>  	}
>
> @@ -93,6 +233,70 @@ static void amd_hfi_remove(struct platform_device *pdev)
>  	mutex_destroy(&dev->lock);
>  }
>
> +static int amd_hfi_metadata_parser(struct platform_device *pdev,
> +				   struct amd_hfi_data *amd_hfi_data)
> +{
> +	struct acpi_pcct_ext_pcc_slave *pcct_ext;
> +	struct acpi_subtable_header *pcct_entry;
> +	struct mbox_chan *pcc_mbox_channels;
> +	struct acpi_table_header *pcct_tbl;
> +	struct pcc_mbox_chan *pcc_chan;
> +	acpi_status status;
> +	int ret;
> +
> +	pcc_mbox_channels = devm_kcalloc(&pdev->dev, AMD_HFI_MAILBOX_COUNT,
> +					 sizeof(*pcc_mbox_channels), GFP_KERNEL);
> +	if (!pcc_mbox_channels) {
> +		ret = -ENOMEM;
> +		goto out;

Please return directly if there is nothing to roll back.

> +	}
> +
> +	pcc_chan = devm_kcalloc(&pdev->dev, AMD_HFI_MAILBOX_COUNT,
> +				sizeof(*pcc_chan), GFP_KERNEL);
> +	if (!pcc_chan) {
> +		ret = -ENOMEM;
> +		goto out;

Ditto.

> +	}
> +
> +	status = acpi_get_table(ACPI_SIG_PCCT, 0, &pcct_tbl);
> +	if (ACPI_FAILURE(status) || !pcct_tbl) {
> +		ret = -ENODEV;
> +		goto out;

Ditto.

> +	}
> +
> +	/* get pointer to the first PCC subspace entry */
> +	pcct_entry = (struct acpi_subtable_header *) (
> +			(unsigned long)pcct_tbl + sizeof(struct acpi_table_pcct));
> +
> +	pcc_chan->mchan = &pcc_mbox_channels[0];
> +
> +	amd_hfi_data->pcc_chan = pcc_chan;
> +	amd_hfi_data->pcct_entry = pcct_entry;
> +	pcct_ext = (struct acpi_pcct_ext_pcc_slave *)pcct_entry;
> +
> +	if (pcct_ext->length <= 0) {
> +		ret = -EINVAL;
> +		goto out;

Ditto.

> +	}
> +
> +	amd_hfi_data->shmem = devm_kmalloc(amd_hfi_data->dev, pcct_ext->length, GFP_KERNEL);

Why kmalloc?

> +	if (!amd_hfi_data->shmem) {
> +		ret = -ENOMEM;
> +		goto out;

Return directly.

> +	}
> +
> +	pcc_chan->shmem_base_addr = pcct_ext->base_address;
> +	pcc_chan->shmem_size = pcct_ext->length;
> +
> +	/* parse the shared memory info from the pcct table */
> +	ret = amd_hfi_fill_metadata(amd_hfi_data);
> +
> +	acpi_put_table(pcct_tbl);
> +
> +out:
> +	return ret;
> +}
> +
>  static const struct acpi_device_id amd_hfi_platform_match[] = {
>  	{ "AMDI0104", 0},
>  	{ }
> @@ -121,6 +325,11 @@ static int amd_hfi_probe(struct platform_device *pdev)
>  	if (ret)
>  		goto out;

This should do return ret; directly, not jump to the out label, which does
nothing but return.

>
> +	/* parse PCCT table */
> +	ret = amd_hfi_metadata_parser(pdev, amd_hfi_data);
> +	if (ret)
> +		goto out;
> +
>  out:
>  	return ret;

Might again be there for churn avoidance; otherwise, please consider:

	return amd_hfi_metadata_parser(pdev, amd_hfi_data);

That goto out should again just return ret directly.

>  }
>
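To make the "return directly" suggestion concrete, the function could be shaped
roughly as below. This is an untested sketch derived only from the quoted hunk,
not a tested replacement: the devm_kzalloc() swap is merely a guess at the
answer to the "Why kmalloc?" question, and the put_table label goes slightly
beyond "return directly" because the acpi_get_table() reference still has to be
dropped on the later error paths.

static int amd_hfi_metadata_parser(struct platform_device *pdev,
				   struct amd_hfi_data *amd_hfi_data)
{
	struct acpi_pcct_ext_pcc_slave *pcct_ext;
	struct acpi_subtable_header *pcct_entry;
	struct mbox_chan *pcc_mbox_channels;
	struct acpi_table_header *pcct_tbl;
	struct pcc_mbox_chan *pcc_chan;
	acpi_status status;
	int ret;

	/* Nothing to roll back yet, so plain returns are enough here */
	pcc_mbox_channels = devm_kcalloc(&pdev->dev, AMD_HFI_MAILBOX_COUNT,
					 sizeof(*pcc_mbox_channels), GFP_KERNEL);
	if (!pcc_mbox_channels)
		return -ENOMEM;

	pcc_chan = devm_kcalloc(&pdev->dev, AMD_HFI_MAILBOX_COUNT,
				sizeof(*pcc_chan), GFP_KERNEL);
	if (!pcc_chan)
		return -ENOMEM;

	status = acpi_get_table(ACPI_SIG_PCCT, 0, &pcct_tbl);
	if (ACPI_FAILURE(status) || !pcct_tbl)
		return -ENODEV;

	/* Point at the first PCC subspace entry, right after the PCCT header */
	pcct_entry = (struct acpi_subtable_header *)
			((unsigned long)pcct_tbl + sizeof(struct acpi_table_pcct));
	pcct_ext = (struct acpi_pcct_ext_pcc_slave *)pcct_entry;
	if (pcct_ext->length <= 0) {
		ret = -EINVAL;
		goto put_table;
	}

	/* Zeroed buffer so stale data is never mistaken for ranking entries */
	amd_hfi_data->shmem = devm_kzalloc(amd_hfi_data->dev, pcct_ext->length,
					   GFP_KERNEL);
	if (!amd_hfi_data->shmem) {
		ret = -ENOMEM;
		goto put_table;
	}

	pcc_chan->mchan = &pcc_mbox_channels[0];
	pcc_chan->shmem_base_addr = pcct_ext->base_address;
	pcc_chan->shmem_size = pcct_ext->length;
	amd_hfi_data->pcc_chan = pcc_chan;
	amd_hfi_data->pcct_entry = pcct_entry;

	/* Parse the shared memory layout described by this PCCT entry */
	ret = amd_hfi_fill_metadata(amd_hfi_data);

put_table:
	acpi_put_table(pcct_tbl);
	return ret;
}

With the allocations handled by devm and the table reference released in one
place, the bare out label disappears entirely.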
On Thu, 10 Oct 2024, Mario Limonciello wrote:

> From: Perry Yuan <Perry.Yuan@amd.com>
>
> There are some firmware parameters that need to be configured
> when a CPU core is brought online or offline.
>
> when CPU is online, it will initialize the workload classification
> parameters to CPU firmware which will trigger the workload class ID
> updating function.
>
> Once the CPU is going to offline, it will need to disable the workload
> classification function and clear the history.
>
> Signed-off-by: Perry Yuan <Perry.Yuan@amd.com>
> Co-developed-by: Mario Limonciello <mario.limonciello@amd.com>
> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
> ---
> v2:
>  * Rebase
> ---
>  drivers/platform/x86/amd/hfi/hfi.c | 90 +++++++++++++++++++++++++++++-
>  1 file changed, 89 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/platform/x86/amd/hfi/hfi.c b/drivers/platform/x86/amd/hfi/hfi.c
> index c0065ba0ed18..c969ee7ea5ee 100644
> --- a/drivers/platform/x86/amd/hfi/hfi.c
> +++ b/drivers/platform/x86/amd/hfi/hfi.c
> @@ -244,6 +244,89 @@ static int amd_set_hfi_ipcc_score(struct amd_hfi_cpuinfo *hfi_cpuinfo, int cpu)
>  	return 0;
>  }
>
> +static int amd_hfi_set_state(unsigned int cpu, bool state)
> +{
> +	int ret;
> +
> +	ret = wrmsrl_on_cpu(cpu, AMD_WORKLOAD_CLASS_CONFIG, state);
> +	if (ret)
> +		return ret;
> +
> +	return wrmsrl_on_cpu(cpu, AMD_WORKLOAD_HRST, 0x1);
> +}
> +
> +/**
> + * amd_hfi_online() - Enable workload classification on @cpu
> + * @cpu: CPU in which the workload classification will be enabled
> + *
> + * Return: 0 on success, negative error code on failure
> + */
> +static int amd_hfi_online(unsigned int cpu)
> +{
> +	struct amd_hfi_cpuinfo *hfi_info = per_cpu_ptr(&amd_hfi_cpuinfo, cpu);
> +	struct amd_hfi_classes *hfi_classes;
> +	int ret;
> +
> +	if (WARN_ON_ONCE(!hfi_info))
> +		return -EINVAL;
> +
> +	if (!zalloc_cpumask_var(&hfi_info->cpus, GFP_KERNEL))
> +		return -ENOMEM;
> +
> +	mutex_lock(&hfi_cpuinfo_lock);

Use guard()

> +	cpumask_set_cpu(cpu, hfi_info->cpus);
> +
> +	/*
> +	 * Check if @cpu as an associated, initialized and ranking data must be filled
> +	 */
> +	hfi_classes = hfi_info->amd_hfi_classes;
> +	if (!hfi_classes)
> +		goto unlock;
> +
> +	/* Enable the workload classification interface */
> +	ret = amd_hfi_set_state(cpu, true);
> +	if (ret)
> +		pr_err("wct enable failed for cpu %d\n", cpu);

CPU

Should wct too be capitalized?

Is it okay to return 0 when this error occurs?

> +
> +	mutex_unlock(&hfi_cpuinfo_lock);
> +	return 0;
> +
> +unlock:
> +	free_cpumask_var(hfi_info->cpus);
> +	mutex_unlock(&hfi_cpuinfo_lock);
> +	return ret;
> +}
> +
> +/**
> + * amd_hfi_offline() - Disable workload classification on @cpu
> + * @cpu: CPU in which the workload classification will be disabled
> + *
> + * Remove @cpu from those covered by its HFI instance.
> + *
> + * Return: 0 on success, negative error code on failure
> + */
> +static int amd_hfi_offline(unsigned int cpu)
> +{
> +	struct amd_hfi_cpuinfo *hfi_info = &per_cpu(amd_hfi_cpuinfo, cpu);
> +	int ret;
> +
> +	if (WARN_ON_ONCE(!hfi_info))
> +		return -EINVAL;
> +
> +	mutex_lock(&hfi_cpuinfo_lock);

guard or scoped_guard.

> +
> +	/* Disable the workload classification interface */
> +	ret = amd_hfi_set_state(cpu, false);
> +	if (ret)
> +		pr_err("wct disable failed for cpu %d\n", cpu);
> +
> +	mutex_unlock(&hfi_cpuinfo_lock);
> +
> +	free_cpumask_var(hfi_info->cpus);
> +
> +	return 0;
> +}
> +
>  static int update_hfi_ipcc_scores(struct amd_hfi_data *amd_hfi_data)
>  {
>  	int cpu;
> @@ -362,8 +445,13 @@ static int amd_hfi_probe(struct platform_device *pdev)
>  	if (ret)
>  		goto out;
>
> +	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/amd_hfi:online",
> +				amd_hfi_online, amd_hfi_offline);
> +	if (ret < 0)
> +		goto out;

return ret;

> +
>  out:
> -	return ret;
> +	return ret < 0 ? ret : 0;
>  }
>
>  static struct platform_driver amd_hfi_driver = {
>
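For illustration, with guard()/scoped_guard() from <linux/cleanup.h> the two
callbacks could end up looking something like the sketch below. It is untested
and only follows the review comments above; propagating the amd_hfi_set_state()
error instead of returning 0, and the -EINVAL for missing ranking data, are
assumptions on my part rather than anything the patch settles.

static int amd_hfi_online(unsigned int cpu)
{
	struct amd_hfi_cpuinfo *hfi_info = per_cpu_ptr(&amd_hfi_cpuinfo, cpu);
	int ret;

	if (WARN_ON_ONCE(!hfi_info))
		return -EINVAL;

	if (!zalloc_cpumask_var(&hfi_info->cpus, GFP_KERNEL))
		return -ENOMEM;

	/* Dropped automatically on every return path below */
	guard(mutex)(&hfi_cpuinfo_lock);

	cpumask_set_cpu(cpu, hfi_info->cpus);

	/* Ranking data must already be present for this CPU */
	if (!hfi_info->amd_hfi_classes) {
		free_cpumask_var(hfi_info->cpus);
		return -EINVAL;
	}

	/* Enable the workload classification interface */
	ret = amd_hfi_set_state(cpu, true);
	if (ret) {
		pr_err("WCT enable failed for CPU %d\n", cpu);
		free_cpumask_var(hfi_info->cpus);
	}

	return ret;
}

static int amd_hfi_offline(unsigned int cpu)
{
	struct amd_hfi_cpuinfo *hfi_info = &per_cpu(amd_hfi_cpuinfo, cpu);
	int ret;

	if (WARN_ON_ONCE(!hfi_info))
		return -EINVAL;

	/* Disable the workload classification interface under the lock */
	scoped_guard(mutex, &hfi_cpuinfo_lock) {
		ret = amd_hfi_set_state(cpu, false);
		if (ret)
			pr_err("WCT disable failed for CPU %d\n", cpu);
	}

	free_cpumask_var(hfi_info->cpus);

	return ret;
}

The goto label and the explicit mutex_unlock() calls go away, which is the main
point of the guard() helpers.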
On Thu, 10 Oct 2024, Mario Limonciello wrote:

> From: Perry Yuan <Perry.Yuan@amd.com>
>
> Introduces power management callbacks for the `amd_hfi` driver.
> Specifically, the `suspend` and `resume` callbacks have been added
> to handle the necessary operations during system low power states
> and wake-up.
>
> Signed-off-by: Perry Yuan <Perry.Yuan@amd.com>
> Co-developed-by: Mario Limonciello <mario.limonciello@amd.com>
> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
> ---
> v2:
>  * Whitespace changes
>  * Use on online CPUs not present ones
> ---
>  drivers/platform/x86/amd/hfi/hfi.c | 33 ++++++++++++++++++++++++++++++
>  1 file changed, 33 insertions(+)
>
> diff --git a/drivers/platform/x86/amd/hfi/hfi.c b/drivers/platform/x86/amd/hfi/hfi.c
> index c969ee7ea5ee..0263993b0a94 100644
> --- a/drivers/platform/x86/amd/hfi/hfi.c
> +++ b/drivers/platform/x86/amd/hfi/hfi.c
> @@ -407,6 +407,38 @@ static int amd_hfi_metadata_parser(struct platform_device *pdev,
>  	return ret;
>  }
>
> +static int amd_hfi_pm_resume(struct device *dev)
> +{
> +	int ret, cpu;
> +
> +	for_each_present_cpu(cpu) {
> +		ret = amd_hfi_set_state(cpu, true);
> +		if (ret < 0) {
> +			dev_err(dev, "failed to enable workload class config: %d\n", ret);
> +			return ret;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int amd_hfi_pm_suspend(struct device *dev)
> +{
> +	int ret, cpu;
> +
> +	for_each_online_cpu(cpu) {
> +		ret = amd_hfi_set_state(cpu, false);
> +		if (ret < 0) {
> +			dev_err(dev, "failed to disable workload class config: %d\n", ret);
> +			return ret;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static DEFINE_SIMPLE_DEV_PM_OPS(amd_hfi_pm_ops, amd_hfi_pm_suspend, amd_hfi_pm_resume);
> +
>  static const struct acpi_device_id amd_hfi_platform_match[] = {
>  	{ "AMDI0104", 0},
>  	{ }
> @@ -458,6 +490,7 @@ static struct platform_driver amd_hfi_driver = {
>  	.driver = {
>  		.name = AMD_HFI_DRIVER,
>  		.owner = THIS_MODULE,
> +		.pm = &amd_hfi_pm_ops,

This is inconsistent.

>  		.acpi_match_table = ACPI_PTR(amd_hfi_platform_match),
>  	},
>  	.probe = amd_hfi_probe,
>
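Assuming the inconsistency is about pairing DEFINE_SIMPLE_DEV_PM_OPS() with a
plain .pm assignment, the usual arrangement is to wrap the ops in
pm_sleep_ptr() so they drop out together with CONFIG_PM_SLEEP. The sketch below
is only a guess at that reading; dropping .owner is an extra assumption
(platform_driver_register() already fills it in) and may be unrelated to the
comment.

static DEFINE_SIMPLE_DEV_PM_OPS(amd_hfi_pm_ops, amd_hfi_pm_suspend,
				amd_hfi_pm_resume);

static struct platform_driver amd_hfi_driver = {
	.driver = {
		.name = AMD_HFI_DRIVER,
		/* pm_sleep_ptr() yields NULL when CONFIG_PM_SLEEP is disabled */
		.pm = pm_sleep_ptr(&amd_hfi_pm_ops),
		.acpi_match_table = ACPI_PTR(amd_hfi_platform_match),
	},
	.probe = amd_hfi_probe,
	/* remaining callbacks unchanged */
};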