[RFC,0/6] Improve VM DVFS and task placement behavior

Message ID 20230330224348.1006691-1-davidai@google.com

Message

David Dai March 30, 2023, 10:43 p.m. UTC
Hi,

This patch series is a continuation of the talk Saravana gave at LPC 2022
titled "CPUfreq/sched and VM guest workload problems" [1][2][3]. The gist
of the talk is that workloads running in a guest VM get terrible task
placement and DVFS behavior when compared to running the same workload in
the host. Effectively, no EAS for threads inside VMs. This would make power
and performance terrible just by running the workload in a VM even if we
assume there is zero virtualization overhead.

We have been iterating over different options for communicating between
guest and host, ways of applying the information coming from the
guest/host, etc., to figure out what gives the best performance and power
improvements.

The patch series in its current state is NOT meant for landing in the
upstream kernel. We are sending this patch series to share the current
progress and data we have so far. The patch series is meant to be easy to
cherry-pick and test on various devices to see what performance and power
benefits this might give for others.

With this series, a workload running in a VM gets the same task placement
and DVFS treatment as it would when running in the host.

As expected, we see significant performance improvements and a better
performance/power ratio. If anyone else wants to try this out with their VM
workloads and report findings, that'd be very much appreciated.

The idea is to improve VM CPUfreq/sched behavior by:
- Having the guest kernel do accurate load tracking by taking host CPU
  arch/type and frequency into account (a rough guest-side sketch follows
  this list).
- Sharing vCPU run queue utilization information with the host so that the
  host can do proper frequency scaling and task placement on the host side.
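
As a rough illustration of the first point above, this is the kind of
guest-side plumbing we have in mind (our sketch, not code from the series;
the virt_get_*() helpers are hypothetical stand-ins for the paravirt
interface the later patches add):

#include <linux/cpufreq.h>
#include <linux/cpumask.h>

/* Hypothetical helpers backed by the paravirt/hypercall interface. */
extern unsigned long virt_get_cur_pcpu_freq_khz(int cpu);
extern unsigned long virt_get_max_pcpu_freq_khz(int cpu);

static void guest_update_freq_scale(int cpu)
{
	unsigned long cur = virt_get_cur_pcpu_freq_khz(cpu);
	unsigned long max = virt_get_max_pcpu_freq_khz(cpu);

	/*
	 * Scale the guest's per-CPU PELT contributions by cur/max so that
	 * load tracking reflects how fast the backing pCPU actually runs.
	 */
	arch_set_freq_scale(cpumask_of(cpu), cur, max);
}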

Results:
========

So far, the best results have come from using hypercalls (more on this
below) to communicate between host and guest and treating the vCPU run
queue util similarly to util_est on the host-side vCPU thread. So that's
what this patch series does.
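
Roughly, the host-side treatment looks like this (a sketch only; patch 1
has the real plumbing and the field/helper names may differ):

#include <linux/sched.h>
#include <linux/minmax.h>

/*
 * Fold the util reported by the guest for a vCPU into the vCPU thread's
 * utilization, much like util_est boosts a task, so schedutil and EAS on
 * the host see the work pending inside the VM.
 */
static unsigned long vcpu_thread_util(struct task_struct *p)
{
	unsigned long host_util = READ_ONCE(p->se.avg.util_avg);

	return max(host_util, READ_ONCE(p->util_guest));
}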

Let's look at the results for this series first and then at the other
options we have tried or are still exploring:

Use cases running Android inside a VM on a Chromebook:
======================================================

PCMark (emulates real-world use cases)
Higher is better
+-------------------+----------+------------+--------+
| Test Case (score) | Baseline | Util_guest | %delta |
+-------------------+----------+------------+--------+
| Weighted Total    |     6136 |       7274 |   +19% |
+-------------------+----------+------------+--------+
| Web Browsing      |     5558 |       6273 |   +13% |
+-------------------+----------+------------+--------+
| Video Editing     |     4921 |       5221 |    +6% |
+-------------------+----------+------------+--------+
| Writing           |     6864 |       8825 |   +29% |
+-------------------+----------+------------+--------+
| Photo Editing     |     7983 |      11593 |   +45% |
+-------------------+----------+------------+--------+
| Data Manipulation |     5814 |       6081 |    +5% |
+-------------------+----------+------------+--------+

PCMark Performance/mAh
Higher is better
+-----------+----------+------------+--------+
|           | Baseline | Util_guest | %delta |
+-----------+----------+------------+--------+
| Score/mAh |       79 |         88 |   +11% |
+-----------+----------+------------+--------+

Roblox
Higher is better
+-----+----------+------------+--------+
|     | Baseline | Util_guest | %delta |
+-----+----------+------------+--------+
| FPS |    18.25 |      28.66 |   +57% |
+-----+----------+------------+--------+

Roblox FPS/mAh
Higher is better
+-----+----------+------------+--------+
|     | Baseline | Util_guest | %delta |
+-----+----------+------------+--------+
| FPS |     0.15 |       0.19 |   +26% |
+-----+----------+------------+--------+

Use cases running a minimal system inside a VM on a Pixel 6:
============================================================

FIO
Higher is better
+----------------------+----------+------------+--------+
| Test Case (avg MB/s) | Baseline | Util_guest | %delta |
+----------------------+----------+------------+--------+
| Seq Write            |     9.27 |       12.6 |   +36% |
+----------------------+----------+------------+--------+
| Rand Write           |     9.34 |       11.9 |   +27% |
+----------------------+----------+------------+--------+
| Seq Read             |      106 |        124 |   +17% |
+----------------------+----------+------------+--------+
| Rand Read            |     33.6 |         35 |    +4% |
+----------------------+----------+------------+--------+

CPU-based ML Inference Benchmark
Lower is better
+-------------------------+----------+------------+--------+
| Test Case (ms)          | Baseline | Util_guest | %delta |
+-------------------------+----------+------------+--------+
| Cached Sample Inference |     2.57 |       1.75 |   -32% |
+-------------------------+----------+------------+--------+
| Small Sample Inference  |      6.8 |       5.57 |   -18% |
+-------------------------+----------+------------+--------+
| Large Sample Inference  |     31.2 |      26.58 |   -15% |
+-------------------------+----------+------------+--------+

These patches expect the host to:
- Affine vCPUs to specific clusters.
- Set vCPU capacity to match the host CPU they are running on.

To make this easy to do/try out, we have put up patches[4][5] to do this on
CrosVM. Once you pick up those patches, you can use options
"--host-cpu-topology" and "--virt-cpufreq" to achieve the above.

The patch series can be broken into:

Patch 1: Add util_guest as an additional PELT signal for host vCPU threads
Patch 2: Hypercall for the guest to get the current pCPU's frequency
	 (a guest-side sketch follows this list)
Patch 3: Send vCPU run queue util to host and apply as util_guest
Patch 4: Query pCPU freq table from guest (we'll move this to DT in the
	 future)
Patch 5: Virtual cpufreq driver that uses the hypercalls to send util to
	 the host and implements frequency invariance in the guest.
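
The guest side of the hypercalls in patches 2-4 is a thin SMCCC wrapper,
roughly like this (the function ID below is a placeholder, not the value
the series actually reserves):

#include <linux/arm-smccc.h>

#define KVM_GET_CUR_CPUFREQ_FUNC_ID	0xc6000040	/* placeholder ID */

static unsigned long guest_get_cur_pcpu_freq(void)
{
	struct arm_smccc_res res;

	arm_smccc_1_1_invoke(KVM_GET_CUR_CPUFREQ_FUNC_ID, &res);
	if (res.a0 == SMCCC_RET_NOT_SUPPORTED)
		return 0;

	/* Assumed convention: a1 holds the backing pCPU's current frequency. */
	return res.a1;
}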

Alternatives we have implemented and profiled:
==============================================

util_guest vs uclamp_min
========================

One suggestion at LPC was to use uclamp_min to apply the util info coming
from the guest. As we suspected, it doesn't perform as well because
uclamp_min is not additive, whereas the actual workload on the host CPU due
to the vCPU is additive to the existing workloads on the host. Uclamp_min
also has the undesirable side effect that threads forked from the vCPU
thread inherit whatever uclamp_min value the vCPU thread had and then stay
stuck with it.
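
As a deliberately over-simplified illustration of the floor-vs-additive
point (it glosses over how uclamp is actually aggregated per runqueue, and
the numbers are arbitrary):

#include <stdio.h>

int main(void)
{
	unsigned int other_host_work = 300;	/* util already on the host CPU */
	unsigned int vcpu_work = 400;		/* util the vCPU brings on top  */

	/* A floor (uclamp_min-style) only guarantees the larger of the two. */
	printf("floor   : %u\n", other_host_work > vcpu_work ?
				 other_host_work : vcpu_work);		/* 400 */

	/* The demand the CPU actually sees is the sum of both. */
	printf("additive: %u\n", other_host_work + vcpu_work);		/* 700 */
	return 0;
}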

Below are some additional benchmark results comparing the uclamp_min
prototype (listed as Uclamp) against util_guest, using the same test
environment as before (including hypercalls).

As before, %delta is always comparing to baseline.

PCMark
Higher is better
+-------------------+----------+------------+--------+--------+--------+
| Test Case (score) | Baseline | Util_guest | %delta | Uclamp | %delta |
+-------------------+----------+------------+--------+--------+--------+
| Weighted Total    |     6136 |       7274 |   +19% |   6848 |   +12% |
+-------------------+----------+------------+--------+--------+--------+
| Web Browsing      |     5558 |       6273 |   +13% |   6050 |    +9% |
+-------------------+----------+------------+--------+--------+--------+
| Video Editing     |     4921 |       5221 |    +6% |   5091 |    +3% |
+-------------------+----------+------------+--------+--------+--------+
| Writing           |     6864 |       8825 |   +29% |   8523 |   +24% |
+-------------------+----------+------------+--------+--------+--------+
| Photo Editing     |     7983 |      11593 |   +45% |   9865 |   +24% |
+-------------------+----------+------------+--------+--------+--------+
| Data Manipulation |     5814 |       6081 |    +5% |   5836 |     0% |
+-------------------+----------+------------+--------+--------+--------+

PCMark Performance/mAh
Higher is better
+-----------+----------+------------+--------+--------+--------+
|           | Baseline | Util_guest | %delta | Uclamp | %delta |
+-----------+----------+------------+--------+--------+--------+
| Score/mAh |       79 |         88 |   +11% |     83 |    +7% |
+-----------+----------+------------+--------+--------+--------+

Hypercalls vs MMIO:
===================
We realize that hypercalls are not the recommended choice for this and we
have no attachment to any communication method as long as it gives good
results.

We started off with hypercalls to see the best we could achieve if we
didn't have to context switch into host-side userspace.

To see the impact of switching from hypercalls to MMIO, we kept util_guest
and only switched from hypercall to MMIO. So in the results below:
- Hypercall = hypercall + util_guest
- MMIO = MMIO + util_guest

As before, %delta is always comparing to baseline.

PCMark
Higher is better
+-------------------+----------+------------+--------+-------+--------+
| Test Case (score) | Baseline |  Hypercall | %delta |  MMIO | %delta |
+-------------------+----------+------------+--------+-------+--------+
| Weighted Total    |     6136 |       7274 |   +19% |  6867 |   +12% |
+-------------------+----------+------------+--------+-------+--------+
| Web Browsing      |     5558 |       6273 |   +13% |  6035 |    +9% |
+-------------------+----------+------------+--------+-------+--------+
| Video Editing     |     4921 |       5221 |    +6% |  5167 |    +5% |
+-------------------+----------+------------+--------+-------+--------+
| Writing           |     6864 |       8825 |   +29% |  8529 |   +24% |
+-------------------+----------+------------+--------+-------+--------+
| Photo Editing     |     7983 |      11593 |   +45% | 10812 |   +35% |
+-------------------+----------+------------+--------+-------+--------+
| Data Manipulation |     5814 |       6081 |    +5% |  5327 |    -8% |
+-------------------+----------+------------+--------+-------+--------+

PCMark Performance/mAh
Higher is better
+-----------+----------+-----------+--------+------+--------+
|           | Baseline | Hypercall | %delta | MMIO | %delta |
+-----------+----------+-----------+--------+------+--------+
| Score/mAh |       79 |        88 |   +11% |   83 |    +7% |
+-----------+----------+-----------+--------+------+--------+

Roblox
Higher is better
+-----+----------+------------+--------+-------+--------+
|     | Baseline |  Hypercall | %delta |  MMIO | %delta |
+-----+----------+------------+--------+-------+--------+
| FPS |    18.25 |      28.66 |   +57% | 24.06 |   +32% |
+-----+----------+------------+--------+-------+--------+

Roblox Frames/mAh
Higher is better
+------------+----------+------------+--------+--------+--------+
|            | Baseline |  Hypercall | %delta |   MMIO | %delta |
+------------+----------+------------+--------+--------+--------+
| Frames/mAh |    91.25 |     114.64 |   +26% | 103.11 |   +13% |
+------------+----------+------------+--------+--------+--------+

Next steps:
===========
We are continuing to look into communication mechanisms other than
hypercalls that are at least as efficient and avoid switching into VMM
userspace. Any inputs in this regard are greatly appreciated.

Thanks,
David & Saravana

[1] - https://lpc.events/event/16/contributions/1195/
[2] - https://lpc.events/event/16/contributions/1195/attachments/970/1893/LPC%202022%20-%20VM%20DVFS.pdf
[3] - https://www.youtube.com/watch?v=hIg_5bg6opU
[4] - https://chromium-review.googlesource.com/c/crosvm/crosvm/+/4208668
[5] - https://chromium-review.googlesource.com/c/crosvm/crosvm/+/4288027

David Dai (6):
  sched/fair: Add util_guest for tasks
  kvm: arm64: Add support for get_cur_cpufreq service
  kvm: arm64: Add support for util_hint service
  kvm: arm64: Add support for get_freqtbl service
  dt-bindings: cpufreq: add bindings for virtual kvm cpufreq
  cpufreq: add kvm-cpufreq driver

 .../bindings/cpufreq/cpufreq-virtual-kvm.yaml |  39 +++
 Documentation/virt/kvm/api.rst                |  28 ++
 .../virt/kvm/arm/get_cur_cpufreq.rst          |  21 ++
 Documentation/virt/kvm/arm/get_freqtbl.rst    |  23 ++
 Documentation/virt/kvm/arm/index.rst          |   3 +
 Documentation/virt/kvm/arm/util_hint.rst      |  22 ++
 arch/arm64/include/uapi/asm/kvm.h             |   3 +
 arch/arm64/kvm/arm.c                          |   3 +
 arch/arm64/kvm/hypercalls.c                   |  60 +++++
 drivers/cpufreq/Kconfig                       |  13 +
 drivers/cpufreq/Makefile                      |   1 +
 drivers/cpufreq/kvm-cpufreq.c                 | 245 ++++++++++++++++++
 include/linux/arm-smccc.h                     |  21 ++
 include/linux/sched.h                         |  12 +
 include/uapi/linux/kvm.h                      |   3 +
 kernel/sched/core.c                           |  24 +-
 kernel/sched/fair.c                           |  15 +-
 tools/arch/arm64/include/uapi/asm/kvm.h       |   3 +
 18 files changed, 536 insertions(+), 3 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/cpufreq/cpufreq-virtual-kvm.yaml
 create mode 100644 Documentation/virt/kvm/arm/get_cur_cpufreq.rst
 create mode 100644 Documentation/virt/kvm/arm/get_freqtbl.rst
 create mode 100644 Documentation/virt/kvm/arm/util_hint.rst
 create mode 100644 drivers/cpufreq/kvm-cpufreq.c

Comments

Saravana Kannan March 30, 2023, 11:36 p.m. UTC | #1
On Thu, Mar 30, 2023 at 4:20 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
>
> [...]
>
> > David Dai (6):
> >   sched/fair: Add util_guest for tasks
> >   kvm: arm64: Add support for get_cur_cpufreq service
> >   kvm: arm64: Add support for util_hint service
> >   kvm: arm64: Add support for get_freqtbl service
> >   dt-bindings: cpufreq: add bindings for virtual kvm cpufreq
> >   cpufreq: add kvm-cpufreq driver
>
> I only received patches 2-4 in my inbox (same goes for the mailing lists
> AFAICT). Mind sending the rest? :)

Oliver,

Sorry about that. Actually even I'm not cc'ed in the cover letter :)

Is it okay if we fix this when we send the next version? Mainly to
avoid some people responding to this while others respond to a new
series (where the patches are the same).

We used a script for --to-cmd and --cc-cmd, but it looks like it needs
some more fixes.

Here is the full series to anyone who's wondering where the rest of
the patches are:
https://lore.kernel.org/lkml/20230330224348.1006691-1-davidai@google.com/T/#t

Thanks,
Saravana
Matthew Wilcox March 31, 2023, 12:49 a.m. UTC | #2
On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> Hi,
> 
> This patch series is a continuation of the talk Saravana gave at LPC 2022
> titled "CPUfreq/sched and VM guest workload problems" [1][2][3]. The gist
> of the talk is that workloads running in a guest VM get terrible task
> placement and DVFS behavior when compared to running the same workload in

DVFS?  Some new filesystem, perhaps?

> the host. Effectively, no EAS for threads inside VMs. This would make power

EAS?

Two unfamiliar and undefined acronyms in your opening paragraph.
You're not making me want to read the rest of your opus.
Oliver Upton April 4, 2023, 7:43 p.m. UTC | #3
Folks,

On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:

<snip>

> PCMark
> Higher is better
> +-------------------+----------+------------+--------+-------+--------+
> | Test Case (score) | Baseline |  Hypercall | %delta |  MMIO | %delta |
> +-------------------+----------+------------+--------+-------+--------+
> | Weighted Total    |     6136 |       7274 |   +19% |  6867 |   +12% |
> +-------------------+----------+------------+--------+-------+--------+
> | Web Browsing      |     5558 |       6273 |   +13% |  6035 |    +9% |
> +-------------------+----------+------------+--------+-------+--------+
> | Video Editing     |     4921 |       5221 |    +6% |  5167 |    +5% |
> +-------------------+----------+------------+--------+-------+--------+
> | Writing           |     6864 |       8825 |   +29% |  8529 |   +24% |
> +-------------------+----------+------------+--------+-------+--------+
> | Photo Editing     |     7983 |      11593 |   +45% | 10812 |   +35% |
> +-------------------+----------+------------+--------+-------+--------+
> | Data Manipulation |     5814 |       6081 |    +5% |  5327 |    -8% |
> +-------------------+----------+------------+--------+-------+--------+
> 
> PCMark Performance/mAh
> Higher is better
> +-----------+----------+-----------+--------+------+--------+
> |           | Baseline | Hypercall | %delta | MMIO | %delta |
> +-----------+----------+-----------+--------+------+--------+
> | Score/mAh |       79 |        88 |   +11% |   83 |    +7% |
> +-----------+----------+-----------+--------+------+--------+
> 
> Roblox
> Higher is better
> +-----+----------+------------+--------+-------+--------+
> |     | Baseline |  Hypercall | %delta |  MMIO | %delta |
> +-----+----------+------------+--------+-------+--------+
> | FPS |    18.25 |      28.66 |   +57% | 24.06 |   +32% |
> +-----+----------+------------+--------+-------+--------+
> 
> Roblox Frames/mAh
> Higher is better
> +------------+----------+------------+--------+--------+--------+
> |            | Baseline |  Hypercall | %delta |   MMIO | %delta |
> +------------+----------+------------+--------+--------+--------+
> | Frames/mAh |    91.25 |     114.64 |   +26% | 103.11 |   +13% |
> +------------+----------+------------+--------+--------+--------+

</snip>

> Next steps:
> ===========
> We are continuing to look into communication mechanisms other than
> hypercalls that are just as/more efficient and avoid switching into the VMM
> userspace. Any inputs in this regard are greatly appreciated.

We're highly unlikely to entertain such an interface in KVM.

The entire feature is dependent on pinning vCPUs to physical cores, for which
userspace is in the driver's seat. That is a well established and documented
policy which can be seen in the way we handle heterogeneous systems and
vPMU.

Additionally, this bloats the KVM PV ABI with highly VMM-dependent interfaces
that I would not expect to benefit the typical user of KVM.

Based on the data above, it would appear that the userspace implementation is
in the same neighborhood as a KVM-based implementation, which only further
weakens the case for moving this into the kernel.

I certainly can appreciate the motivation for the series, but this feature
should be in userspace as some form of a virtual device.
Saravana Kannan April 5, 2023, 9 p.m. UTC | #4
On Tue, Apr 4, 2023 at 1:49 PM Marc Zyngier <maz@kernel.org> wrote:
>
> On Tue, 04 Apr 2023 20:43:40 +0100,
> Oliver Upton <oliver.upton@linux.dev> wrote:
> >
> > Folks,
> >
> > On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> >
> > <snip>
> >
> > > PCMark
> > > Higher is better
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Test Case (score) | Baseline |  Hypercall | %delta |  MMIO | %delta |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Weighted Total    |     6136 |       7274 |   +19% |  6867 |   +12% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Web Browsing      |     5558 |       6273 |   +13% |  6035 |    +9% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Video Editing     |     4921 |       5221 |    +6% |  5167 |    +5% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Writing           |     6864 |       8825 |   +29% |  8529 |   +24% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Photo Editing     |     7983 |      11593 |   +45% | 10812 |   +35% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Data Manipulation |     5814 |       6081 |    +5% |  5327 |    -8% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > >
> > > PCMark Performance/mAh
> > > Higher is better
> > > +-----------+----------+-----------+--------+------+--------+
> > > |           | Baseline | Hypercall | %delta | MMIO | %delta |
> > > +-----------+----------+-----------+--------+------+--------+
> > > | Score/mAh |       79 |        88 |   +11% |   83 |    +7% |
> > > +-----------+----------+-----------+--------+------+--------+
> > >
> > > Roblox
> > > Higher is better
> > > +-----+----------+------------+--------+-------+--------+
> > > |     | Baseline |  Hypercall | %delta |  MMIO | %delta |
> > > +-----+----------+------------+--------+-------+--------+
> > > | FPS |    18.25 |      28.66 |   +57% | 24.06 |   +32% |
> > > +-----+----------+------------+--------+-------+--------+
> > >
> > > Roblox Frames/mAh
> > > Higher is better
> > > +------------+----------+------------+--------+--------+--------+
> > > |            | Baseline |  Hypercall | %delta |   MMIO | %delta |
> > > +------------+----------+------------+--------+--------+--------+
> > > | Frames/mAh |    91.25 |     114.64 |   +26% | 103.11 |   +13% |
> > > +------------+----------+------------+--------+--------+--------+
> >
> > </snip>
> >
> > > Next steps:
> > > ===========
> > > We are continuing to look into communication mechanisms other than
> > > hypercalls that are just as/more efficient and avoid switching into the VMM
> > > userspace. Any inputs in this regard are greatly appreciated.

Hi Oliver and Marc,

Replying to both of you in this one email.

> >
> > We're highly unlikely to entertain such an interface in KVM.
> >
> > The entire feature is dependent on pinning vCPUs to physical cores, for which
> > userspace is in the driver's seat. That is a well established and documented
> > policy which can be seen in the way we handle heterogeneous systems and
> > vPMU.
> >
> > Additionally, this bloats the KVM PV ABI with highly VMM-dependent interfaces
> > that I would not expect to benefit the typical user of KVM.
> >
> > Based on the data above, it would appear that the userspace implementation is
> > in the same neighborhood as a KVM-based implementation, which only further
> > weakens the case for moving this into the kernel.

Oliver,

Sorry if the tables/data aren't presented in an intuitive way, but
MMIO vs hypercall is definitely not in the same neighborhood. The
hypercall method often gives close to 2x the improvement that the MMIO
method gives. For example:

- Roblox FPS: MMIO improves it by 32% vs hypercall improves it by 57%.
- Frames/mAh: MMIO improves it by 13% vs hypercall improves it by 26%.
- PC Mark Data manipulation: MMIO makes it worse by 8% vs hypercall
improves it by 5%

Hypercall does better in the other cases too, just by smaller margins.
For example,
- PC Mark Photo editing: Going from MMIO to hypercall gives a 10% improvement.

These are all pretty non-trivial, at least in the mobile world. Heck,
whole teams would spend months for 2% improvement in battery :)

> >
> > I certainly can appreciate the motivation for the series, but this feature
> > should be in userspace as some form of a virtual device.
>
> +1 on all of the above.

Marc and Oliver,

We are not tied to hypercalls. We want to do the right thing here, but
MMIO going all the way to userspace definitely doesn't cut it as is.
This is where we need some guidance. See more below.

> The one thing I'd like to understand that the comment seems to imply
> that there is a significant difference in overhead between a hypercall
> and an MMIO. In my experience, both are pretty similar in cost for a
> handling location (both in userspace or both in the kernel).

I think the main difference really is that in our hypercall vs MMIO
comparison, the hypercall is handled in the kernel whereas the MMIO exit
goes all the way to userspace. I agree with you that the difference
probably won't be significant if both of them are handled at the same
"depth" in the privilege levels.

> MMIO
> handling is a tiny bit more expensive due to a guaranteed TLB miss
> followed by a walk of the in-kernel device ranges, but that's all. It
> should hardly register.
>
> And if you really want some super-low latency, low overhead
> signalling, maybe an exception is the wrong tool for the job. Shared
> memory communication could be more appropriate.

Yeah, that's one of our next steps. Ideally, we want to use shared
memory for the host-to-guest information flow: a 32-bit value representing
the current frequency, which the host updates whenever the host CPU
frequency changes and the guest reads whenever it needs it.

For the guest-to-host information flow, we'll need a kick from guest to
host because we need to take action on the host side when threads migrate
between vCPUs and cause a significant change in vCPU util. Again, it can
be just shared memory plus some kick. This is what we are currently trying
to figure out how to do.

If there are APIs to do this, can you point us to those please? We'd
also want the shared memory to be accessible by the VMM (so, shared
between guest kernel, host kernel and VMM).
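
For concreteness, the kind of shared region we're picturing (names and
layout are ours, purely a strawman; the doorbell/kick mechanism is the
open question):

#include <stdint.h>

/* One instance per vCPU, mapped into the guest, host kernel and VMM. */
struct vcpu_perf_shmem {
	/* host -> guest: current frequency of the backing pCPU, in kHz */
	uint32_t cur_freq_khz;
	/* guest -> host: vCPU run queue util on a 0..1024 scale */
	uint32_t vcpu_util;
};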

Are the above next steps sane? Or is that a no-go? The main thing we
want to cut out is the need for having to switch to userspace for
every single interaction because, as is, it leaves a lot on the table.

Also, thanks for all the feedback. Glad to receive it.

-Saravana
Saravana Kannan April 5, 2023, 9:08 p.m. UTC | #5
On Wed, Apr 5, 2023 at 1:06 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> > Hi,
> >
> > This patch series is a continuation of the talk Saravana gave at LPC 2022
> > titled "CPUfreq/sched and VM guest workload problems" [1][2][3]. The gist
> > of the talk is that workloads running in a guest VM get terrible task
> > placement and DVFS behavior when compared to running the same workload in
> > the host. Effectively, no EAS for threads inside VMs. This would make power
> > and performance terrible just by running the workload in a VM even if we
> > assume there is zero virtualization overhead.
> >
> > We have been iterating over different options for communicating between
> > guest and host, ways of applying the information coming from the
> > guest/host, etc to figure out the best performance and power improvements
> > we could get.
> >
> > The patch series in its current state is NOT meant for landing in the
> > upstream kernel. We are sending this patch series to share the current
> > progress and data we have so far. The patch series is meant to be easy to
> > cherry-pick and test on various devices to see what performance and power
> > benefits this might give for others.
> >
> > With this series, a workload running in a VM gets the same task placement
> > and DVFS treatment as it would when running in the host.
> >
> > As expected, we see significant performance improvement and better
> > performance/power ratio. If anyone else wants to try this out for your VM
> > workloads and report findings, that'd be very much appreciated.
> >
> > The idea is to improve VM CPUfreq/sched behavior by:
> > - Having guest kernel to do accurate load tracking by taking host CPU
> >   arch/type and frequency into account.
> > - Sharing vCPU run queue utilization information with the host so that the
> >   host can do proper frequency scaling and task placement on the host side.
>
> So, not having actually been send many of the patches I've no idea what
> you've done... Please, eradicate this ridiculous idea of sending random
> people a random subset of a patch series. Either send all of it or none,
> this is a bloody nuisance.

Sorry, that was our intention, but we had a scripting error. It's been fixed.

I have a script to use with git send-email's --to-cmd and --cc-cmd
option. It uses get_maintainers.pl to figure out who to email, but it
gets trickier for a patch series that spans maintainer trees.

v2 and later will have everyone get all the patches.

> Having said that; my biggest worry is that you're making scheduler
> internals into an ABI. I would hate for this paravirt interface to tie
> us down.

The only 2 pieces of information shared between host/guest are:

1. Host CPU frequency -- this isn't really scheduler internals and
will map nicely to a virtual cpufreq driver.

2. A vCPU util value between 0 and 1024, where 1024 corresponds to the
highest performance point across all CPUs (taking freq, arch, etc into
consideration). Yes, this currently matches how the run queue util is
tracked, but we can document the interface as "percentage of max
performance capability", represented as 0-1024 instead of 0-100. That way,
even if the scheduler changes how it tracks util in the future, we can
still keep this interface between guest/host and map it appropriately on
the host end.
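
Roughly, the documented contract would boil down to this (illustrative,
not from the series):

#include <stdint.h>

#define PERF_SCALE	1024u	/* happens to match SCHED_CAPACITY_SCALE today */

/*
 * Guest: fold whatever internal utilization estimate it keeps (est out of
 * est_max, est_max > 0) into the shared 0..1024 scale before reporting it,
 * so the host never needs to know how the guest tracks util internally.
 */
static inline uint32_t util_hint_encode(uint64_t est, uint64_t est_max)
{
	uint64_t v = (est * PERF_SCALE) / est_max;

	return v > PERF_SCALE ? PERF_SCALE : (uint32_t)v;
}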

In either case, this interface would even let a Windows guest, which
might track vCPU utilization differently, work with a Linux host.

Does that sound reasonable to you?

Another option is to convert (2) into a "CPU frequency" request (but
without latching it to values in the CPUfreq table) but it'll add some
unnecessary math (with division) on the guest and host end. But I'd
rather keep it as 0-1024 unless you really want this 2nd option.

-Saravana
David Dai April 6, 2023, 9:39 p.m. UTC | #6
On Thu, Apr 6, 2023 at 5:52 AM Quentin Perret <qperret@google.com> wrote:
>
> On Wednesday 05 Apr 2023 at 14:07:18 (-0700), Saravana Kannan wrote:
> > On Wed, Apr 5, 2023 at 12:48 AM 'Quentin Perret' via kernel-team
> > > And I concur with all the above as well. Putting this in the kernel is
> > > not an obvious fit at all as that requires a number of assumptions about
> > > the VMM.
> > >
> > > As Oliver pointed out, the guest topology, and how it maps to the host
> > > topology (vcpu pinning etc) is very much a VMM policy decision and will
> > > be particularly important to handle guest frequency requests correctly.
> > >
> > > In addition to that, the VMM's software architecture may have an impact.
> > > Crosvm for example does device emulation in separate processes for
> > > security reasons, so it is likely that adjusting the scheduling
> > > parameters ('util_guest', uclamp, or else) only for the vCPU thread that
> > > issues frequency requests will be sub-optimal for performance, we may
> > > want to adjust those parameters for all the tasks that are on the
> > > critical path.
> > >
> > > And at an even higher level, assuming in the kernel a certain mapping of
> > > vCPU threads to host threads feels kinda wrong, this too is a host
> > > userspace policy decision I believe. Not that anybody in their right
> > > mind would want to do this, but I _think_ it would technically be
> > > feasible to serialize the execution of multiple vCPUs on the same host
> > > thread, at which point the util_guest thingy becomes entirely bogus. (I
> > > obviously don't want to conflate this use-case, it's just an example
> > > that shows the proposed abstraction in the series is not a perfect fit
> > > for the KVM userspace delegation model.)
> >
> > See my reply to Oliver and Marc. To me it looks like we are converging
> > towards having shared memory between guest, host kernel and VMM and
> > that should address all our concerns.
>
> Hmm, that is not at all my understanding of what has been the most
> important part of the feedback so far: this whole thing belongs to
> userspace.
>
> > The guest will see a MMIO device, writing to it will trigger the host
> > kernel to do the basic "set util_guest/uclamp for the vCPU thread that
> > corresponds to the vCPU" and then the VMM can do more on top as/if
> > needed (because it has access to the shared memory too). Does that
> > make sense?
>
> Not really no. I've given examples of why this doesn't make sense for
> the kernel to do this, which still seems to be the case with what you're
> suggesting here.
>
> > Even in the extreme example, the stuff the kernel would do would still
> > be helpful, but not sufficient. You can aggregate the
> > util_guest/uclamp and do whatever from the VMM.
> > Technically in the extreme example, you don't need any of this. The
> > normal util tracking of the vCPU thread on the host side would be
> > sufficient.
> >
> > Actually any time we have only 1 vCPU host thread per VM, we shouldn't
> > be using anything in this patch series and not instantiate the guest
> > device at all.
>
> > > So +1 from me to move this as a virtual device of some kind. And if the
> > > extra cost of exiting all the way back to userspace is prohibitive (is
> > > it btw?),
> >
> > I think the "13% increase in battery consumption for games" makes it
> > pretty clear that going to userspace is prohibitive. And that's just
> > one example.
>

Hi Quentin,

Appreciate the feedback,

> I beg to differ. We need to understand where these 13% come from in more
> details. Is it really the actual cost of the userspace exit? Or is it
> just that from userspace the only knob you can play with is uclamp and
> that didn't reach the expected level of performance?

To clarify, the MMIO numbers shown in the cover letter were collected by
updating the vCPU task's util_guest, as opposed to uclamp_min. In that
configuration, userspace (the VMM) handles the MMIO exit from the guest
and makes an ioctl into the host kernel to update util_guest for the vCPU
task.
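
Roughly, the VMM-side path looks like this (KVM_RUN/KVM_EXIT_MMIO are
standard KVM UAPI; the KVM_SET_VCPU_UTIL_GUEST ioctl name and number are
placeholders for whatever the host patch actually defines):

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

#define KVM_SET_VCPU_UTIL_GUEST	_IOW(KVMIO, 0xf0, __u32)	/* placeholder */

static void handle_vcpu_exit(int vcpu_fd, struct kvm_run *run)
{
	__u32 util;

	/* Device address range check omitted for brevity. */
	if (run->exit_reason != KVM_EXIT_MMIO || !run->mmio.is_write ||
	    run->mmio.len != sizeof(util))
		return;

	memcpy(&util, run->mmio.data, sizeof(util));
	/* Forward the guest's hint so the host can update the vCPU task. */
	ioctl(vcpu_fd, KVM_SET_VCPU_UTIL_GUEST, &util);
}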

>
> If that is the userspace exit, then we can work to optimize that -- it's
> a fairly common problem in the virt world, nothing special here.
>

Ok, we're open to suggestions on how to better optimize here.

> And if the issue is the lack of expressiveness in uclamp, then that too
> is something we should work on, but clearly giving vCPU threads more
> 'power' than normal host threads is a bit of a red flag IMO. vCPU
> threads must be constrained in the same way that userspace threads are,
> because they _are_ userspace threads.
>
> > > then we can try to work on that. Maybe something a la vhost
> > > can be done to optimize, I'll have a think.
> > >
> > > > The one thing I'd like to understand that the comment seems to imply
> > > > that there is a significant difference in overhead between a hypercall
> > > > and an MMIO. In my experience, both are pretty similar in cost for a
> > > > handling location (both in userspace or both in the kernel). MMIO
> > > > handling is a tiny bit more expensive due to a guaranteed TLB miss
> > > > followed by a walk of the in-kernel device ranges, but that's all. It
> > > > should hardly register.
> > > >
> > > > And if you really want some super-low latency, low overhead
> > > > signalling, maybe an exception is the wrong tool for the job. Shared
> > > > memory communication could be more appropriate.
> > >
> > > I presume some kind of signalling mechanism will be necessary to
> > > synchronously update host scheduling parameters in response to guest
> > > frequency requests, but if the volume of data requires it then a shared
> > > buffer + doorbell type of approach should do.
> >
> > Part of the communication doesn't need synchronous handling by the
> > host. So, what I said above.
>
> I've also replied to another message about the scale invariance issue,
> and I'm not convinced the frequency based interface proposed here really
> makes sense. An AMU-like interface is very likely to be superior.
>

Some sort of AMU-based interface was discussed offline with Saravana,
but I'm not sure how to best implement that. If you have any pointers
to get started, that would be helpful.

> > > Thinking about it, using SCMI over virtio would implement exactly that.
> > > Linux-as-a-guest already supports it IIRC, so possibly the problem
> > > being addressed in this series could be 'simply' solved using an SCMI
> > > backend in the VMM...
> >
> > This will be worse than all the options we've tried so far because it
> > has the userspace overhead AND uclamp overhead.
>
> But it doesn't violate the whole KVM userspace delegation model, so we
> should start from there and then optimize further if need be.

Do you have any references we could use to get started with SCMI
(e.g. SCMI backend support in CrosVM)?

For RFC v3, I'll post a CPUfreq driver implementation that only uses
MMIO and doesn't require any host kernel modifications (i.e. only uses
uclamp as the knob to tune the host), along with performance numbers, and
then work on optimizing from there.

Thanks,
David

>
> Thanks,
> Quentin