Message ID: 1405615260-29291-1-git-send-email-parth.dixit@linaro.org
On Thu, 2014-07-17 at 22:11 +0530, Parth Dixit wrote:
> Changelog v5
> - removed PSCI_ARGS macro
> - preload error value for calls that do not return
> - added new macro for 32 bit arguments
> - casting return calls to appropriate types
> - added new function psci_mode_check
> - added error checks for PSCI v0.2 function id's

Looking good. I've a couple of questions which I hope you can answer.

> +static void do_trap_psci(struct cpu_user_regs *regs)
> +{
> +    register_t fid = PSCI_ARG(regs,0);
>
> -    psci_call = arm_psci_table[PSCI_OP_REG(regs)].fn;
> -    if ( psci_call == NULL )
> +    /* preloading in case psci_mode_check fails */
> +    PSCI_RESULT_REG(regs) = (int32_t)PSCI_INVALID_PARAMETERS;
[...]
> +    case PSCI_0_2_FN_MIGRATE_INFO_UP_CPU:
> +        if ( psci_mode_check(current->domain, fid) )
> +            PSCI_RESULT_REG(regs) =
> +                (uint32_t) do_psci_0_2_migrate_info_up_cpu();
> +        else
> +            PSCI_RESULT_REG(regs) = (uint32_t)PSCI_INVALID_PARAMETERS;
> +        break;
[...]
> +    case PSCI_0_2_FN_CPU_ON:
> +    case PSCI_0_2_FN64_CPU_ON:
> +        if ( psci_mode_check(current->domain, fid) )
> +        {
> +            register_t vcpuid = PSCI_ARG(regs,1);
> +            register_t epoint = PSCI_ARG(regs,2);
> +            register_t cid = PSCI_ARG(regs,3);
> +            PSCI_RESULT_REG(regs) =
> +                (int32_t) do_psci_0_2_cpu_on(vcpuid, epoint, cid);

Indentation here is broken.

I notice that sometimes you have an else case setting PSCI_INVALID_PARAMETERS and other times you don't. Is that just an oversight, or is there a reason why they should differ?

I don't think you need to cast the PSCI_* result codes BTW.

[...]
> +int32_t do_psci_0_2_cpu_suspend(uint32_t power_state, register_t entry_point,
> +                                register_t context_id)
> +{
> +    struct vcpu *v = current;
> +    struct domain *d = v->domain;
> +    struct cpu_user_regs *regs = &v->arch.cpu_info->guest_cpu_user_regs;

I think it would make sense to pass this as an argument, but if not then guest_cpu_user_regs() is the accessor to use.

> +
> +    if ( is_32bit_domain(d) )
> +    {
> +        regs->pc32 = entry_point;
> +        regs->r0 = context_id;
> +    }
> +#ifdef CONFIG_ARM_64
> +    else
> +    {
> +        regs->pc = entry_point;
> +        regs->x0 = context_id;
> +    }
> +#endif
> +    vcpu_block_unless_event_pending(v);
> +    return PSCI_SUCCESS;

This only seems to be handling the case where the requested power state is power down, and not the standby state.

By my reading, when bit 16 of the power_state argument is 0 (standby), then entry_point and context_id are ignored and the PSCI call is expected to return success to the place it was called from, not jump to entry_point.

If bit 16 is one, you are supposed to return to entry_point with x0/r0 set to context_id, but because you return PSCI_SUCCESS, that will get written to x0/r0 by the generic handler, clobbering your write of x0/r0 above. You should probably return the desired value (with an explanatory comment).

The upper bits of power_state also describe an affinity level to put to sleep, which you also don't handle AFAICT. Perhaps since we don't actually have affinities > 0 what you've done is OK, what do you think?

The lower bits are supposed to be some sort of per-platform ID which we are supposed to have exposed via tables. I'm not sure there are any DTB bindings for that, so perhaps we can ignore it.

The handling of bit 16 is the only bit which really concerns me here; the rest seem like things we can tolerate.

Ian.
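Since the psci_mode_check() logic in the patch comes up repeatedly in this review, here is a minimal standalone sketch of the check it performs: bit 30 of a PSCI v0.2 function ID marks the SMC64 variant, and the patch accepts a call only when that bit matches the bitness of the calling guest. The helper name psci_mode_ok and its bool parameter are illustrative, not from the patch.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PSCI_0_2_64BIT 0x40000000u

/* Mirrors the patch's psci_mode_check(): a function ID is accepted only
 * when its width bit (bit 30, the SMC64 marker) matches the guest width. */
static bool psci_mode_ok(bool guest_is_64bit, uint32_t fid)
{
    bool fid_is_64bit = (fid & PSCI_0_2_64BIT) != 0;
    return guest_is_64bit == fid_is_64bit;
}
```

Under this rule a 64-bit guest issuing the 32-bit CPU_ON ID is rejected, just as a 32-bit guest issuing the FN64 variant is.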
forgot to add xen-devel list...

---------- Forwarded message ----------
From: Parth Dixit <parth.dixit@linaro.org>
Date: 25 July 2014 11:05
Subject: Re: [PATCH v5] xen/arm : emulation of arm's PSCI v0.2 standard
To: Ian Campbell <Ian.Campbell@citrix.com>

On 24 July 2014 19:36, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-07-17 at 22:11 +0530, Parth Dixit wrote:
>> Changelog v5
>> - removed PSCI_ARGS macro
>> - preload error value for calls that do not return
>> - added new macro for 32 bit arguments
>> - casting return calls to appropriate types
>> - added new function psci_mode_check
>> - added error checks for PSCI v0.2 function id's
>
> Looking good. I've a couple of questions which I hope you can answer.
>
>> +static void do_trap_psci(struct cpu_user_regs *regs)
>> +{
>> +    register_t fid = PSCI_ARG(regs,0);
>>
>> -    psci_call = arm_psci_table[PSCI_OP_REG(regs)].fn;
>> -    if ( psci_call == NULL )
>> +    /* preloading in case psci_mode_check fails */
>> +    PSCI_RESULT_REG(regs) = (int32_t)PSCI_INVALID_PARAMETERS;
>> [...]
>> +    case PSCI_0_2_FN_MIGRATE_INFO_UP_CPU:
>> +        if ( psci_mode_check(current->domain, fid) )
>> +            PSCI_RESULT_REG(regs) =
>> +                (uint32_t) do_psci_0_2_migrate_info_up_cpu();
>> +        else
>> +            PSCI_RESULT_REG(regs) = (uint32_t)PSCI_INVALID_PARAMETERS;
>> +        break;
> [...]
>> +    case PSCI_0_2_FN_CPU_ON:
>> +    case PSCI_0_2_FN64_CPU_ON:
>> +        if ( psci_mode_check(current->domain, fid) )
>> +        {
>> +            register_t vcpuid = PSCI_ARG(regs,1);
>> +            register_t epoint = PSCI_ARG(regs,2);
>> +            register_t cid = PSCI_ARG(regs,3);
>> +            PSCI_RESULT_REG(regs) =
>> +                (int32_t) do_psci_0_2_cpu_on(vcpuid, epoint, cid);
>
> Indentation here is broken.

I will fix it in patchset v6.

> I notice that sometimes you have an else case setting
> PSCI_INVALID_PARAMETERS and other times you don't. Is that just an
> oversight or is there a reason why they should differ?

It was done because the rest of the functions were using the default preloaded value with return type int32; the functions with an else case are the ones whose return type is something other than int32 (uint32 or register_t). But, as you comment below, if the PSCI_* result codes do not require casting, this will go away.

To summarize: I will remove the casting of the result codes in patchset v6, which will also do away with the need for the else statements.

> I don't think you need to cast the PSCI_* result codes BTW.
>
> [...]
>> +int32_t do_psci_0_2_cpu_suspend(uint32_t power_state, register_t entry_point,
>> +                                register_t context_id)
>> +{
>> +    struct vcpu *v = current;
>> +    struct domain *d = v->domain;
>> +    struct cpu_user_regs *regs = &v->arch.cpu_info->guest_cpu_user_regs;
>
> I think it would make sense to pass this as an argument, but if not then
> guest_cpu_user_regs() is the accessor to use.

Sure, I will take care of it in patchset v6.

>> +
>> +    if ( is_32bit_domain(d) )
>> +    {
>> +        regs->pc32 = entry_point;
>> +        regs->r0 = context_id;
>> +    }
>> +#ifdef CONFIG_ARM_64
>> +    else
>> +    {
>> +        regs->pc = entry_point;
>> +        regs->x0 = context_id;
>> +    }
>> +#endif
>> +    vcpu_block_unless_event_pending(v);
>> +    return PSCI_SUCCESS;
>
> This only seems to be handling the case where the requested power state
> is power down and not the standby state.
>
> By my reading, when bit 16 of the power_state argument is 0 (standby)
> then entry_point and context_id are ignored and the PSCI call is
> expected to return success to the place it was called from, not jump to
> entry_point.
>
> If bit 16 is one you are supposed to return to entry_point with x0/r0
> set to context_id, but because you return PSCI_SUCCESS that will get
> written to x0/r0 by the generic handler, clobbering your write of x0/r0
> above. You should probably return the desired value (with an explanatory
> comment)

Valid point. I am just wondering why the KVM implementation ignored these values; I will implement it in the next patchset.

> The upper bits of power_state also describe an affinity level to put to
> sleep, which you also don't handle AFAICT. Perhaps since we don't
> actually have affinities > 0 what you've done is OK, what do you think?

I think it is better either to put a comment explicitly stating that we have ignored the value because affinities > 0 are not valid for Xen, or to check the affinity and return if it is greater than zero. I think I'll go with the check; I will take care of it in the next patchset.

> The lower bits are supposed to be some sort of per-platform ID which we
> are supposed to have exposed via tables. I'm not sure there are any DTB
> bindings for that, so perhaps we can ignore it.
>
> The handling of bit 16 is the only bit which really concerns me here;
> the rest seem like things we can tolerate.
>
> Ian.

Thanks,
parth
On Fri, 2014-07-25 at 12:40 +0530, Parth Dixit wrote:
> > I notice that sometimes you have an else case setting
> > PSCI_INVALID_PARAMETERS and other times you don't. Is that just an
> > oversight or is there a reason why they should differ?
>
> It was done because the rest of the functions were using the default
> preloaded value with return type int32; the functions with an else case
> are the ones whose return type is something other than int32 (uint32 or
> register_t). But as you have commented further, if the PSCI_* result
> codes do not require casting, this will go away.
> To summarize: I will remove the casting of the result codes, which will
> also do away with the need for the else statements in patchset v6.

Sounds good.

> >> +
> >> +    if ( is_32bit_domain(d) )
> >> +    {
> >> +        regs->pc32 = entry_point;
> >> +        regs->r0 = context_id;
> >> +    }
> >> +#ifdef CONFIG_ARM_64
> >> +    else
> >> +    {
> >> +        regs->pc = entry_point;
> >> +        regs->x0 = context_id;
> >> +    }
> >> +#endif
> >> +    vcpu_block_unless_event_pending(v);
> >> +    return PSCI_SUCCESS;
> >
> > This only seems to be handling the case where the requested power state
> > is power down and not the standby state.
> >
> > By my reading, when bit 16 of the power_state argument is 0 (standby)
> > then entry_point and context_id are ignored and the PSCI call is
> > expected to return success to the place it was called from, not jump to
> > entry_point.
> >
> > If bit 16 is one you are supposed to return to entry_point with x0/r0
> > set to context_id, but because you return PSCI_SUCCESS that will get
> > written to x0/r0 by the generic handler, clobbering your write of x0/r0
> > above. You should probably return the desired value (with an explanatory
> > comment)
>
> Valid point. I am just wondering why the KVM implementation ignored
> these values.

It's possible that I'm being overly picky in reading the spec. My concern is that there isn't currently much in the way of reference implementations of this thing to compare against for real-world behaviour. When such things turn up and behave differently to Xen (or KVM for that matter), perhaps because they interpret the ambiguities in the spec differently, and the spec is clarified, we may end up having to change. That then runs the risk of regressions for existing guests, etc.

> I will implement it in the next patchset.

Thanks.

> > The upper bits of power_state also describe an affinity level to put to
> > sleep, which you also don't handle AFAICT. Perhaps since we don't
> > actually have affinities > 0 what you've done is OK, what do you think?
>
> I think it is better to put a comment explicitly stating that we have
> ignored the value because affinities > 0 are not valid for Xen, or we
> can check the affinity and return if it is greater than zero.
> I think I'll go with the check; I will take care of it in the next patchset.

I'm not 100% sure what the spec allows/requires us to do here. I've asked ARM. I suspect shutting down only the local core would be permissible.

Ian.
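The affinity comparison being discussed here is implemented in the patch by the target_affinity_mask[] table in vpsci.c: AFFINITY_INFO masks off every affinity field below lowest_affinity_level before comparing an MPIDR against the target. A minimal sketch of that masking, using the patch's 32-bit MPIDR_HWID_MASK value (the helper name affinity_cmp_mask is mine):

```c
#include <assert.h>
#include <stdint.h>

#define MPIDR_LEVEL_BITS 8
#define MPIDR_HWID_MASK  0xffffffUL
#define AFFINITY_MASK(level) (~((1UL << ((level) * MPIDR_LEVEL_BITS)) - 1))

/* Mask used by AFFINITY_INFO: keep only the affinity fields at or above
 * lowest_affinity_level, as the patch's target_affinity_mask[] table does. */
static unsigned long affinity_cmp_mask(unsigned int lowest_affinity_level)
{
    return MPIDR_HWID_MASK & AFFINITY_MASK(lowest_affinity_level);
}
```

So with level 0 the full Aff2.Aff1.Aff0 value must match, while with level 1 only Aff2.Aff1 is compared.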
On Fri, 2014-07-25 at 09:54 +0100, Ian Campbell wrote:
> > > The upper bits of power_state also describe an affinity level to put to
> > > sleep, which you also don't handle AFAICT. Perhaps since we don't
> > > actually have affinities > 0 what you've done is OK, what do you think?
> >
> > I think it is better to put a comment explicitly stating that we have
> > ignored the value because affinities > 0 are not valid for Xen, or we
> > can check the affinity and return if it is greater than zero.
> > I think I'll go with the check; I will take care of it in the next patchset.
>
> I'm not 100% sure what the spec allows/requires us to do here. I've asked
> ARM. I suspect shutting down only the local core would be permissible.

I've asked ARM and this is fine. If the caller asks us to power down the "cluster", we don't have to actually go that deep, and since we are virtualised it doesn't really matter.

BTW, WRT standby vs power down (bit 16): for standby you always return to the next instruction after the SMC, whereas for power down you can return either to the next instruction or to the entry point address. Since we don't actually implement hardware sleep states which don't preserve the full CPU state, we may as well take the easy option of always returning to the next instruction.

Ian.
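For reference, the power_state fields the thread keeps coming back to are laid out like this in PSCI v0.2: StateID in bits [15:0], StateType (0 = standby, 1 = powerdown) in bit 16, and the affinity level in bits [25:24]. A minimal decode sketch follows; the macro and function names here are illustrative, not taken from the patch.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* PSCI v0.2 CPU_SUSPEND power_state layout:
 *   bits [15:0]  StateID (platform specific)
 *   bit  [16]    StateType: 0 = standby, 1 = powerdown
 *   bits [25:24] AffinityLevel the state applies to */
#define PWR_STATE_TYPE_MASK  (1u << 16)
#define PWR_STATE_AFFL_SHIFT 24
#define PWR_STATE_AFFL_MASK  (0x3u << PWR_STATE_AFFL_SHIFT)

static bool power_state_is_powerdown(uint32_t power_state)
{
    return (power_state & PWR_STATE_TYPE_MASK) != 0;
}

static unsigned int power_state_affinity_level(uint32_t power_state)
{
    return (power_state & PWR_STATE_AFFL_MASK) >> PWR_STATE_AFFL_SHIFT;
}
```

This is the decode a handler would perform before deciding whether entry_point/context_id matter (powerdown) or are ignored (standby), per the discussion above.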
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 4f0f0e2..e8bcd05 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -237,7 +237,7 @@ static int make_psci_node(libxl__gc *gc, void *fdt)
     res = fdt_begin_node(fdt, "psci");
     if (res) return res;
 
-    res = fdt_property_compat(gc, fdt, 1, "arm,psci");
+    res = fdt_property_compat(gc, fdt, 2, "arm,psci-0.2","arm,psci");
     if (res) return res;
 
     res = fdt_property_string(fdt, "method", "hvc");
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index c424793..ebd4170 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -388,6 +388,9 @@ static int make_hypervisor_node(struct domain *d,
 static int make_psci_node(void *fdt, const struct dt_device_node *parent)
 {
     int res;
+    const char compat[] =
+        "arm,psci-0.2""\0"
+        "arm,psci";
 
     DPRINT("Create PSCI node\n");
 
@@ -396,7 +399,7 @@ static int make_psci_node(void *fdt, const struct dt_device_node *parent)
     if ( res )
         return res;
 
-    res = fdt_property_string(fdt, "compatible", "arm,psci");
+    res = fdt_property(fdt, "compatible", compat, sizeof(compat));
     if ( res )
         return res;
 
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 8102540..aed9f89 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1030,24 +1030,6 @@ static arm_hypercall_t arm_hypercall_table[] = {
     HYPERCALL_ARM(vcpu_op, 3),
 };
 
-typedef int (*arm_psci_fn_t)(uint32_t, register_t);
-
-typedef struct {
-    arm_psci_fn_t fn;
-    int nr_args;
-} arm_psci_t;
-
-#define PSCI(_name, _nr_args)                       \
-    [ PSCI_ ## _name ] = {                          \
-        .fn = (arm_psci_fn_t) &do_psci_ ## _name,   \
-        .nr_args = _nr_args,                        \
-    }
-
-static arm_psci_t arm_psci_table[] = {
-    PSCI(cpu_off, 1),
-    PSCI(cpu_on, 2),
-};
-
 #ifndef NDEBUG
 static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code)
 {
@@ -1080,33 +1062,116 @@ static void do_debug_trap(struct cpu_user_regs *regs, unsigned int code)
 #endif
 
 #ifdef CONFIG_ARM_64
-#define PSCI_OP_REG(r) (r)->x0
 #define PSCI_RESULT_REG(r) (r)->x0
-#define PSCI_ARGS(r) (r)->x1, (r)->x2
+#define PSCI_ARG(r,n) (r)->x##n
+#define PSCI_ARG32(r,n) (uint32_t)( (r)->x##n & 0x00000000FFFFFFFF )
 #else
-#define PSCI_OP_REG(r) (r)->r0
 #define PSCI_RESULT_REG(r) (r)->r0
-#define PSCI_ARGS(r) (r)->r1, (r)->r2
+#define PSCI_ARG(r,n) (r)->r##n
+#define PSCI_ARG32(r,n) PSCI_ARG(r,n)
 #endif
 
-static void do_trap_psci(struct cpu_user_regs *regs)
+/* helper function for checking arm mode 32/64 bit */
+static inline int psci_mode_check(struct domain *d, register_t fid)
 {
-    arm_psci_fn_t psci_call = NULL;
+    return !( is_64bit_domain(d)^( (fid & PSCI_0_2_64BIT) >> 30 ) );
+}
 
-    if ( PSCI_OP_REG(regs) >= ARRAY_SIZE(arm_psci_table) )
-    {
-        domain_crash_synchronous();
-        return;
-    }
+static void do_trap_psci(struct cpu_user_regs *regs)
+{
+    register_t fid = PSCI_ARG(regs,0);
 
-    psci_call = arm_psci_table[PSCI_OP_REG(regs)].fn;
-    if ( psci_call == NULL )
+    /* preloading in case psci_mode_check fails */
+    PSCI_RESULT_REG(regs) = (int32_t)PSCI_INVALID_PARAMETERS;
+    switch( fid )
     {
+    case PSCI_cpu_off:
+        {
+            uint32_t pstate = PSCI_ARG32(regs,1);
+            PSCI_RESULT_REG(regs) = (int32_t) do_psci_cpu_off(pstate);
+        }
+        break;
+    case PSCI_cpu_on:
+        {
+            uint32_t vcpuid = PSCI_ARG32(regs,1);
+            register_t epoint = PSCI_ARG(regs,2);
+            PSCI_RESULT_REG(regs) = (int32_t) do_psci_cpu_on(vcpuid, epoint);
+        }
+        break;
+    case PSCI_0_2_FN_PSCI_VERSION:
+        PSCI_RESULT_REG(regs) = (uint32_t) do_psci_0_2_version();
+        break;
+    case PSCI_0_2_FN_CPU_OFF:
+        PSCI_RESULT_REG(regs) = (int32_t) do_psci_0_2_cpu_off();
+        break;
+    case PSCI_0_2_FN_MIGRATE_INFO_TYPE:
+        PSCI_RESULT_REG(regs) = (uint32_t) do_psci_0_2_migrate_info_type();
+        break;
+    case PSCI_0_2_FN_MIGRATE_INFO_UP_CPU:
+        if ( psci_mode_check(current->domain, fid) )
+            PSCI_RESULT_REG(regs) =
+                (uint32_t) do_psci_0_2_migrate_info_up_cpu();
+        else
+            PSCI_RESULT_REG(regs) = (uint32_t)PSCI_INVALID_PARAMETERS;
+        break;
+    case PSCI_0_2_FN64_MIGRATE_INFO_UP_CPU:
+        if ( psci_mode_check(current->domain, fid) )
+            PSCI_RESULT_REG(regs) = do_psci_0_2_migrate_info_up_cpu();
+        else
+            PSCI_RESULT_REG(regs) = PSCI_INVALID_PARAMETERS;
+        break;
+    case PSCI_0_2_FN_SYSTEM_OFF:
+        do_psci_0_2_system_off();
+        PSCI_RESULT_REG(regs) = PSCI_INTERNAL_FAILURE;
+        break;
+    case PSCI_0_2_FN_SYSTEM_RESET:
+        do_psci_0_2_system_reset();
+        PSCI_RESULT_REG(regs) = PSCI_INTERNAL_FAILURE;
+        break;
+    case PSCI_0_2_FN_CPU_ON:
+    case PSCI_0_2_FN64_CPU_ON:
+        if ( psci_mode_check(current->domain, fid) )
+        {
+            register_t vcpuid = PSCI_ARG(regs,1);
+            register_t epoint = PSCI_ARG(regs,2);
+            register_t cid = PSCI_ARG(regs,3);
+            PSCI_RESULT_REG(regs) =
+                (int32_t) do_psci_0_2_cpu_on(vcpuid, epoint, cid);
+        }
+        break;
+    case PSCI_0_2_FN_CPU_SUSPEND:
+    case PSCI_0_2_FN64_CPU_SUSPEND:
+        if ( psci_mode_check(current->domain, fid) )
+        {
+            uint32_t pstate = PSCI_ARG32(regs,1);
+            register_t epoint = PSCI_ARG(regs,2);
+            register_t cid = PSCI_ARG(regs,3);
+            PSCI_RESULT_REG(regs) =
+                (int32_t) do_psci_0_2_cpu_suspend(pstate, epoint, cid);
+        }
+        break;
+    case PSCI_0_2_FN_AFFINITY_INFO:
+    case PSCI_0_2_FN64_AFFINITY_INFO:
+        if ( psci_mode_check(current->domain, fid) )
+        {
+            register_t taff = PSCI_ARG(regs,1);
+            uint32_t laff = PSCI_ARG32(regs,2);
+            PSCI_RESULT_REG(regs) =
+                (int32_t) do_psci_0_2_affinity_info(taff, laff);
+        }
+        break;
+    case PSCI_0_2_FN_MIGRATE:
+    case PSCI_0_2_FN64_MIGRATE:
+        if ( psci_mode_check(current->domain, fid) )
+        {
+            uint32_t tcpu = PSCI_ARG32(regs,1);
+            PSCI_RESULT_REG(regs) = (int32_t) do_psci_0_2_migrate(tcpu);
+        }
+        break;
+    default:
         domain_crash_synchronous();
         return;
     }
-
-    PSCI_RESULT_REG(regs) = psci_call(PSCI_ARGS(regs));
 }
 
 #ifdef CONFIG_ARM_64
diff --git a/xen/arch/arm/vpsci.c b/xen/arch/arm/vpsci.c
index 1ceb8cb..52c3d9b 100644
--- a/xen/arch/arm/vpsci.c
+++ b/xen/arch/arm/vpsci.c
@@ -17,24 +17,37 @@
 #include <asm/current.h>
 #include <asm/gic.h>
 #include <asm/psci.h>
+#include <public/sched.h>
+#include <asm-arm/event.h>
 
-int do_psci_cpu_on(uint32_t vcpuid, register_t entry_point)
+static int do_common_cpu_on(register_t target_cpu, register_t entry_point,
+                            register_t context_id,int ver)
 {
     struct vcpu *v;
     struct domain *d = current->domain;
     struct vcpu_guest_context *ctxt;
     int rc;
     int is_thumb = entry_point & 1;
+    register_t vcpuid;
+
+    if( ver == XEN_PSCI_V_0_2 )
+        vcpuid = (target_cpu & MPIDR_HWID_MASK);
+    else
+        vcpuid = target_cpu;
 
     if ( (vcpuid < 0) || (vcpuid >= MAX_VIRT_CPUS) )
-        return PSCI_EINVAL;
+        return PSCI_INVALID_PARAMETERS;
 
     if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
-        return PSCI_EINVAL;
+        return PSCI_INVALID_PARAMETERS;
 
     /* THUMB set is not allowed with 64-bit domain */
     if ( is_64bit_domain(d) && is_thumb )
-        return PSCI_EINVAL;
+        return PSCI_INVALID_PARAMETERS;
+
+    if( ( ver == XEN_PSCI_V_0_2 ) &&
+        ( !test_bit(_VPF_down, &v->pause_flags) ) )
+        return PSCI_ALREADY_ON;
 
     if ( (ctxt = alloc_vcpu_guest_context()) == NULL )
         return PSCI_DENIED;
@@ -48,10 +61,18 @@ int do_psci_cpu_on(uint32_t vcpuid, register_t entry_point)
     ctxt->ttbr1 = 0;
     ctxt->ttbcr = 0; /* Defined Reset Value */
     if ( is_32bit_domain(d) )
+    {
         ctxt->user_regs.cpsr = PSR_GUEST32_INIT;
+        if( ver == XEN_PSCI_V_0_2 )
+            ctxt->user_regs.r0_usr = context_id;
+    }
 #ifdef CONFIG_ARM_64
     else
+    {
         ctxt->user_regs.cpsr = PSR_GUEST64_INIT;
+        if( ver == XEN_PSCI_V_0_2 )
+            ctxt->user_regs.x0 = context_id;
+    }
 #endif
 
     /* Start the VCPU with THUMB set if it's requested by the kernel */
@@ -75,7 +96,12 @@ int do_psci_cpu_on(uint32_t vcpuid, register_t entry_point)
     return PSCI_SUCCESS;
 }
 
-int do_psci_cpu_off(uint32_t power_state)
+int32_t do_psci_cpu_on(uint32_t vcpuid, register_t entry_point)
+{
+    return do_common_cpu_on(vcpuid,entry_point,0,XEN_PSCI_V_0_1);
+}
+
+int32_t do_psci_cpu_off(uint32_t power_state)
 {
     struct vcpu *v = current;
     if ( !test_and_set_bit(_VPF_down, &v->pause_flags) )
@@ -83,6 +109,109 @@ int do_psci_cpu_off(uint32_t power_state)
     return PSCI_SUCCESS;
 }
 
+uint32_t do_psci_0_2_version(void)
+{
+    return XEN_PSCI_V_0_2;
+}
+
+int32_t do_psci_0_2_cpu_suspend(uint32_t power_state, register_t entry_point,
+                                register_t context_id)
+{
+    struct vcpu *v = current;
+    struct domain *d = v->domain;
+    struct cpu_user_regs *regs = &v->arch.cpu_info->guest_cpu_user_regs;
+
+    if ( is_32bit_domain(d) )
+    {
+        regs->pc32 = entry_point;
+        regs->r0 = context_id;
+    }
+#ifdef CONFIG_ARM_64
+    else
+    {
+        regs->pc = entry_point;
+        regs->x0 = context_id;
+    }
+#endif
+    vcpu_block_unless_event_pending(v);
+    return PSCI_SUCCESS;
+}
+
+int32_t do_psci_0_2_cpu_off(void)
+{
+    return do_psci_cpu_off(0);
+}
+
+int32_t do_psci_0_2_cpu_on(register_t target_cpu, register_t entry_point,
+                           register_t context_id)
+{
+    return do_common_cpu_on(target_cpu,entry_point,context_id,XEN_PSCI_V_0_2);
+}
+
+static const unsigned long target_affinity_mask[] = {
+    ( MPIDR_HWID_MASK & AFFINITY_MASK( 0 ) ),
+    ( MPIDR_HWID_MASK & AFFINITY_MASK( 1 ) ),
+    ( MPIDR_HWID_MASK & AFFINITY_MASK( 2 ) )
+#ifdef CONFIG_ARM_64
+    ,( MPIDR_HWID_MASK & AFFINITY_MASK( 3 ) )
+#endif
+};
+
+int32_t do_psci_0_2_affinity_info(register_t target_affinity,
+                                  uint32_t lowest_affinity_level)
+{
+    struct domain *d = current->domain;
+    struct vcpu *v;
+    uint32_t vcpuid;
+    unsigned long tmask;
+
+    if ( lowest_affinity_level < ARRAY_SIZE(target_affinity_mask) )
+    {
+        tmask = target_affinity_mask[lowest_affinity_level];
+        target_affinity &= tmask;
+    }
+    else
+        return PSCI_INVALID_PARAMETERS;
+
+    for ( vcpuid = 0; vcpuid < d->max_vcpus; vcpuid++ )
+    {
+        v = d->vcpu[vcpuid];
+
+        if ( ( ( v->arch.vmpidr & tmask ) == target_affinity )
+             && ( !test_bit(_VPF_down, &v->pause_flags) ) )
+            return PSCI_0_2_AFFINITY_LEVEL_ON;
+    }
+
+    return PSCI_0_2_AFFINITY_LEVEL_OFF;
+}
+
+int32_t do_psci_0_2_migrate(uint32_t target_cpu)
+{
+    return PSCI_NOT_SUPPORTED;
+}
+
+uint32_t do_psci_0_2_migrate_info_type(void)
+{
+    return PSCI_0_2_TOS_MP_OR_NOT_PRESENT;
+}
+
+register_t do_psci_0_2_migrate_info_up_cpu(void)
+{
+    return PSCI_NOT_SUPPORTED;
+}
+
+void do_psci_0_2_system_off( void )
+{
+    struct domain *d = current->domain;
+    domain_shutdown(d,SHUTDOWN_poweroff);
+}
+
+void do_psci_0_2_system_reset(void)
+{
+    struct domain *d = current->domain;
+    domain_shutdown(d,SHUTDOWN_reboot);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 9267c1b..8a237c4 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -16,6 +16,9 @@
 #define MPIDR_AFF0_MASK   (_AC(0xff,U) << MPIDR_AFF0_SHIFT)
 #define MPIDR_HWID_MASK   _AC(0xffffff,U)
 #define MPIDR_INVALID     (~MPIDR_HWID_MASK)
+#define MPIDR_LEVEL_BITS  (8)
+#define AFFINITY_MASK(level) ~((_AC(0x1,U) << ((level) * MPIDR_LEVEL_BITS)) - 1)
+
 
 /* TTBCR Translation Table Base Control Register */
 #define TTBCR_EAE    _AC(0x80000000,U)
diff --git a/xen/include/asm-arm/psci.h b/xen/include/asm-arm/psci.h
index 189964b..b29d99f 100644
--- a/xen/include/asm-arm/psci.h
+++ b/xen/include/asm-arm/psci.h
@@ -1,10 +1,16 @@
 #ifndef __ASM_PSCI_H__
 #define __ASM_PSCI_H__
 
-#define PSCI_SUCCESS  0
-#define PSCI_ENOSYS  -1
-#define PSCI_EINVAL  -2
-#define PSCI_DENIED  -3
+/* PSCI return values (inclusive of all PSCI versions) */
+#define PSCI_SUCCESS              0
+#define PSCI_NOT_SUPPORTED       -1
+#define PSCI_INVALID_PARAMETERS  -2
+#define PSCI_DENIED              -3
+#define PSCI_ALREADY_ON          -4
+#define PSCI_ON_PENDING          -5
+#define PSCI_INTERNAL_FAILURE    -6
+#define PSCI_NOT_PRESENT         -7
+#define PSCI_DISABLED            -8
 
 /* availability of PSCI on the host for SMP bringup */
 extern bool_t psci_available;
@@ -13,10 +19,64 @@ int psci_init(void);
 int call_psci_cpu_on(int cpu);
 
 /* functions to handle guest PSCI requests */
-int do_psci_cpu_on(uint32_t vcpuid, register_t entry_point);
-int do_psci_cpu_off(uint32_t power_state);
-int do_psci_cpu_suspend(uint32_t power_state, register_t entry_point);
-int do_psci_migrate(uint32_t vcpuid);
+int32_t do_psci_cpu_on(uint32_t vcpuid, register_t entry_point);
+int32_t do_psci_cpu_off(uint32_t power_state);
+int32_t do_psci_cpu_suspend(uint32_t power_state, register_t entry_point);
+int32_t do_psci_migrate(uint32_t vcpuid);
+
+/* PSCI 0.2 functions to handle guest PSCI requests */
+uint32_t do_psci_0_2_version(void);
+int32_t do_psci_0_2_cpu_suspend(uint32_t power_state, register_t entry_point,
+                                register_t context_id);
+int32_t do_psci_0_2_cpu_off(void);
+int32_t do_psci_0_2_cpu_on(register_t target_cpu, register_t entry_point,
+                           register_t context_id);
+int32_t do_psci_0_2_affinity_info(register_t target_affinity,
+                                  uint32_t lowest_affinity_level);
+int32_t do_psci_0_2_migrate(uint32_t target_cpu);
+uint32_t do_psci_0_2_migrate_info_type(void);
+register_t do_psci_0_2_migrate_info_up_cpu(void);
+void do_psci_0_2_system_off(void);
+void do_psci_0_2_system_reset(void);
+
+/* PSCI version */
+#define XEN_PSCI_V_0_1 1
+#define XEN_PSCI_V_0_2 2
+
+/* PSCI v0.2 interface */
+#define PSCI_0_2_FN_BASE        0x84000000
+#define PSCI_0_2_FN(n)          (PSCI_0_2_FN_BASE + (n))
+#define PSCI_0_2_64BIT          0x40000000
+#define PSCI_0_2_FN64_BASE      \
+                        (PSCI_0_2_FN_BASE + PSCI_0_2_64BIT)
+#define PSCI_0_2_FN64(n)        (PSCI_0_2_FN64_BASE + (n))
+
+#define PSCI_0_2_FN_PSCI_VERSION        PSCI_0_2_FN(0)
+#define PSCI_0_2_FN_CPU_SUSPEND         PSCI_0_2_FN(1)
+#define PSCI_0_2_FN_CPU_OFF             PSCI_0_2_FN(2)
+#define PSCI_0_2_FN_CPU_ON              PSCI_0_2_FN(3)
+#define PSCI_0_2_FN_AFFINITY_INFO       PSCI_0_2_FN(4)
+#define PSCI_0_2_FN_MIGRATE             PSCI_0_2_FN(5)
+#define PSCI_0_2_FN_MIGRATE_INFO_TYPE   PSCI_0_2_FN(6)
+#define PSCI_0_2_FN_MIGRATE_INFO_UP_CPU PSCI_0_2_FN(7)
+#define PSCI_0_2_FN_SYSTEM_OFF          PSCI_0_2_FN(8)
+#define PSCI_0_2_FN_SYSTEM_RESET        PSCI_0_2_FN(9)
+
+#define PSCI_0_2_FN64_CPU_SUSPEND       PSCI_0_2_FN64(1)
+#define PSCI_0_2_FN64_CPU_ON            PSCI_0_2_FN64(3)
+#define PSCI_0_2_FN64_AFFINITY_INFO     PSCI_0_2_FN64(4)
+#define PSCI_0_2_FN64_MIGRATE           PSCI_0_2_FN64(5)
+#define PSCI_0_2_FN64_MIGRATE_INFO_UP_CPU PSCI_0_2_FN64(7)
+
+/* PSCI v0.2 affinity level state returned by AFFINITY_INFO */
+#define PSCI_0_2_AFFINITY_LEVEL_ON          0
+#define PSCI_0_2_AFFINITY_LEVEL_OFF         1
+#define PSCI_0_2_AFFINITY_LEVEL_ON_PENDING  2
+
+/* PSCI v0.2 multicore support in Trusted OS returned by MIGRATE_INFO_TYPE */
+#define PSCI_0_2_TOS_UP_MIGRATE_CAPABLE     0
+#define PSCI_0_2_TOS_UP_NOT_MIGRATE_CAPABLE 1
+#define PSCI_0_2_TOS_MP_OR_NOT_PRESENT      2
 
 #endif /* __ASM_PSCI_H__ */
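The function-ID arithmetic in the psci.h hunk above is worth making explicit: SMC32 calls live at 0x84000000 + n, and the SMC64 variants additionally set bit 30, giving a 0xC4000000 base. A minimal check of that composition, using the same macro values as the patch:

```c
#include <assert.h>
#include <stdint.h>

/* Same values as the patch's psci.h: SMC32 function IDs at 0x84000000 + n,
 * SMC64 variants with bit 30 (PSCI_0_2_64BIT) set on top of that. */
#define PSCI_0_2_FN_BASE 0x84000000u
#define PSCI_0_2_64BIT   0x40000000u
#define PSCI_0_2_FN(n)   (PSCI_0_2_FN_BASE + (n))
#define PSCI_0_2_FN64(n) (PSCI_0_2_FN_BASE + PSCI_0_2_64BIT + (n))
```

This bit-30 marker is exactly what psci_mode_check() inspects in do_trap_psci() to match the ID width against the guest width.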
Arm based virtual machines (dom0/guests) will request power related functionality from Xen through the PSCI interface. This patch implements version 0.2 of the PSCI standard specified by ARM for 64-bit and 32-bit ARM machines.

- removed arm_psci_fn_t
- implemented psci_cpu_on with additional error conditions
- added switch-case in do_trap_psci function
- added PSCI v0.2 macros in psci.h
- removed tables for PSCI v0.1 and v0.2
- implemented affinity_info
- moved do_common_cpu up in vpsci.c, removed declaration
- removed PSCI_ARGS macro
- added new macro for 32 bit arguments
- added new function psci_mode_check

Signed-off-by: Parth Dixit <parth.dixit@linaro.org>
---
Changelog v5
- removed PSCI_ARGS macro
- preload error value for calls that do not return
- added new macro for 32 bit arguments
- casting return calls to appropriate types
- added new function psci_mode_check
- added error checks for PSCI v0.2 function id's

Changelog v4
- added target_affinity_mask[] as a local variable
- removed AFF3 masking for 64 bit cpu
- added do_common_cpu_on for PSCI v0.1 and v0.2
- moved cpu_on decision based on entry point
- removed vpsci_ver introduced in v3
- removed psci tables
- added new macro for selecting arguments
- do_trap_psci is now switch-case based instead of table based

Changelog v3
- moved wfi helper to separate patch
- replaced new wfi function in traps.c
- removed defining power_state variable, using value directly
- added new line between declarations
- merged PSCI v0.1 and v0.2 cpu_on functions to avoid duplication
- removed spurious change
- renamed PSCI return values to reflect v0.2, moved them to top
- removed PSCI v0.1 version declaration for now, will introduce it when needed
- removed hard tabs in the code lifted from linux
- removed PSCI_0_1_MAX
- did not retain copyright header from linux as most of the functions are removed
- added two function tables for PSCI v0.1 and PSCI v0.2
- added compatibility string to libxl_arm to expose new functionality
- refactored affinity_info code
- added table for masking function
- removed definitions of unused PSCI v0.2 functions
- removed function id's of unused PSCI v0.2 functions
- renamed constant PSCI_0_2_TOS_MP (multicore aware) as per spec

 tools/libxl/libxl_arm.c         |   2 +-
 xen/arch/arm/domain_build.c     |   5 +-
 xen/arch/arm/traps.c            | 131 +++++++++++++++++++++++++++----------
 xen/arch/arm/vpsci.c            | 139 ++++++++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/processor.h |   3 +
 xen/include/asm-arm/psci.h      |  76 +++++++++++++++++++---
 6 files changed, 308 insertions(+), 48 deletions(-)