[edk2] OvmfPkg/QemuVideoDxe: map framebuffer as write-combining/non-executable

Message ID 20170818130243.4645-1-ard.biesheuvel@linaro.org
State New
Headers show

Commit Message

Ard Biesheuvel Aug. 18, 2017, 1:02 p.m.
When QemuVideoDxe takes control of the framebuffer, it is already
mapped EFI_MEMORY_UC by core code, and QemuVideoDxe simply records
the base and size from the PCI BAR.

On x86 systems, this is sufficient, but on ARM systems, the semantics
of EFI_MEMORY_UC regions are quite different from EFI_MEMORY_WC regions,
and treating a region like memory (i.e., dereferencing pointers into it
or using ordinary CopyMem()/SetMem() functions on it) requires that it
be mapped with memory semantics, i.e., EFI_MEMORY_WC, EFI_MEMORY_WT or
EFI_MEMORY_WB.

Since caching is not appropriate for regions where we rely on side
effects, remap the frame buffer EFI_MEMORY_WT. Given that the ARM
architecture requires device mappings to be non-executable (to avoid
inadvertent speculative instruction fetches from device registers),
retain the non-executable nature by adding the EFI_MEMORY_XP attribute
as well.

Note that the crashes that this patch aims to prevent can currently only
occur under KVM, in which case the VGA driver does not operate correctly
in the first place. However, this is an implementation detail of QEMU
while running under KVM, and given that the ARM architecture simply does
not permit unaligned accesses to device memory, it is best to conform
to this in the code.

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

---
 OvmfPkg/QemuVideoDxe/Driver.c         |  5 +++++
 OvmfPkg/QemuVideoDxe/Gop.c            | 18 ++++++++++++++++--
 OvmfPkg/QemuVideoDxe/Qemu.h           |  2 ++
 OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf |  1 +
 4 files changed, 24 insertions(+), 2 deletions(-)

-- 
2.11.0

_______________________________________________
edk2-devel mailing list
edk2-devel@lists.01.org
https://lists.01.org/mailman/listinfo/edk2-devel

Comments

Ard Biesheuvel Aug. 18, 2017, 1:04 p.m. | #1
On 18 August 2017 at 14:02, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> When QemuVideoDxe takes control of the framebuffer, it is already
> mapped EFI_MEMORY_UC by core code, and QemuVideoDxe simply records
> the base and size from the PCI BAR.
>
> On x86 systems, this is sufficient, but on ARM systems, the semantics
> of EFI_MEMORY_UC regions are quite different from EFI_MEMORY_WC regions,
> and treating a region like memory (i.e., dereferencing pointers into it
> or using ordinary CopyMem()/SetMem() functions on it) requires that it
> be mapped with memory semantics, i.e., EFI_MEMORY_WC, EFI_MEMORY_WT or
> EFI_MEMORY_WB.
>
> Since caching is not appropriate for regions where we rely on side
> effects, remap the frame buffer EFI_MEMORY_WT.

EFI_MEMORY_WC not WT

> Given that the ARM
> architecture requires device mappings to be non-executable (to avoid
> inadvertent speculative instruction fetches from device registers),
> retain the non-executable nature by adding the EFI_MEMORY_XP attribute
> as well.
>
> Note that the crashes that this patch aims to prevent can currently only
> occur under KVM, in which case the VGA driver does not operate correctly
> in the first place. However, this is an implementation detail of QEMU
> while running under KVM, and given that the ARM architecture simply does
> not permit unaligned accesses to device memory, it is best to conform
> to this in the code.
>
> Contributed-under: TianoCore Contribution Agreement 1.0
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  OvmfPkg/QemuVideoDxe/Driver.c         |  5 +++++
>  OvmfPkg/QemuVideoDxe/Gop.c            | 18 ++++++++++++++++--
>  OvmfPkg/QemuVideoDxe/Qemu.h           |  2 ++
>  OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf |  1 +
>  4 files changed, 24 insertions(+), 2 deletions(-)
>
> diff --git a/OvmfPkg/QemuVideoDxe/Driver.c b/OvmfPkg/QemuVideoDxe/Driver.c
> index 0dce80e59ba8..d81be49d91f3 100644
> --- a/OvmfPkg/QemuVideoDxe/Driver.c
> +++ b/OvmfPkg/QemuVideoDxe/Driver.c
> @@ -69,6 +69,8 @@ QEMU_VIDEO_CARD gQemuVideoCardList[] = {
>      }
>  };
>
> +EFI_CPU_ARCH_PROTOCOL     *gCpu;
> +
>  static QEMU_VIDEO_CARD*
>  QemuVideoDetect(
>    IN UINT16 VendorId,
> @@ -1103,5 +1105,8 @@ InitializeQemuVideo (
>                    );
>    ASSERT_EFI_ERROR (Status);
>
> +  Status = gBS->LocateProtocol (&gEfiCpuArchProtocolGuid, NULL, (VOID **)&gCpu);
> +  ASSERT_EFI_ERROR(Status);
> +
>    return Status;
>  }
> diff --git a/OvmfPkg/QemuVideoDxe/Gop.c b/OvmfPkg/QemuVideoDxe/Gop.c
> index 512fd27acbda..a820524db293 100644
> --- a/OvmfPkg/QemuVideoDxe/Gop.c
> +++ b/OvmfPkg/QemuVideoDxe/Gop.c
> @@ -53,7 +53,8 @@ QemuVideoCompleteModeData (
>  {
>    EFI_GRAPHICS_OUTPUT_MODE_INFORMATION  *Info;
>    EFI_ACPI_ADDRESS_SPACE_DESCRIPTOR     *FrameBufDesc;
> -  QEMU_VIDEO_MODE_DATA           *ModeData;
> +  QEMU_VIDEO_MODE_DATA                  *ModeData;
> +  EFI_STATUS                            Status;
>
>    ModeData = &Private->ModeData[Mode->Mode];
>    Info = Mode->Info;
> @@ -72,8 +73,21 @@ QemuVideoCompleteModeData (
>    DEBUG ((EFI_D_INFO, "FrameBufferBase: 0x%Lx, FrameBufferSize: 0x%Lx\n",
>      Mode->FrameBufferBase, (UINT64)Mode->FrameBufferSize));
>
> +  //
> +  // Remap the framebuffer region as write combining. On x86 systems, this is
> +  // merely a performance optimization, but on ARM systems, it prevents
> +  // crashes that may result from unaligned accesses, given that we treat the
> +  // frame buffer as ordinary memory by using CopyMem()/SetMem() on it. While
> +  // we're at it, set the non-exec attribute so the framebuffer is not
> +  // exploitable by malware.
> +  //
> +  Status = gCpu->SetMemoryAttributes (gCpu, Mode->FrameBufferBase,
> +                   ALIGN_VALUE (Mode->FrameBufferSize, EFI_PAGE_SIZE),
> +                   EFI_MEMORY_WC | EFI_MEMORY_XP);
> +  ASSERT_EFI_ERROR (Status);
> +
>    FreePool (FrameBufDesc);
> -  return EFI_SUCCESS;
> +  return Status;
>  }
>
>  STATIC
> diff --git a/OvmfPkg/QemuVideoDxe/Qemu.h b/OvmfPkg/QemuVideoDxe/Qemu.h
> index 7fbb25b3efd3..2966c77c78b3 100644
> --- a/OvmfPkg/QemuVideoDxe/Qemu.h
> +++ b/OvmfPkg/QemuVideoDxe/Qemu.h
> @@ -21,6 +21,7 @@
>
>
>  #include <Uefi.h>
> +#include <Protocol/Cpu.h>
>  #include <Protocol/GraphicsOutput.h>
>  #include <Protocol/PciIo.h>
>  #include <Protocol/DriverSupportedEfiVersion.h>
> @@ -164,6 +165,7 @@ extern EFI_DRIVER_BINDING_PROTOCOL                gQemuVideoDriverBinding;
>  extern EFI_COMPONENT_NAME_PROTOCOL                gQemuVideoComponentName;
>  extern EFI_COMPONENT_NAME2_PROTOCOL               gQemuVideoComponentName2;
>  extern EFI_DRIVER_SUPPORTED_EFI_VERSION_PROTOCOL  gQemuVideoDriverSupportedEfiVersion;
> +extern EFI_CPU_ARCH_PROTOCOL                      *gCpu;
>
>  //
>  // Io Registers defined by VGA
> diff --git a/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf b/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf
> index 346a5aed94fa..bbe11257c002 100644
> --- a/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf
> +++ b/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf
> @@ -68,6 +68,7 @@ [LibraryClasses]
>    UefiLib
>
>  [Protocols]
> +  gEfiCpuArchProtocolGuid                       # PROTOCOL ALWAYS_CONSUMED
>    gEfiDriverSupportedEfiVersionProtocolGuid     # PROTOCOL ALWAYS_PRODUCED
>    gEfiGraphicsOutputProtocolGuid                # PROTOCOL BY_START
>    gEfiDevicePathProtocolGuid                    # PROTOCOL BY_START
> --
> 2.11.0
>

Jordan Justen Aug. 18, 2017, 5:20 p.m. | #2
On 2017-08-18 06:04:01, Ard Biesheuvel wrote:
> On 18 August 2017 at 14:02, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> > When QemuVideoDxe takes control of the framebuffer, it is already
> > mapped EFI_MEMORY_UC by core code, and QemuVideoDxe simply records
> > the base and size from the PCI BAR.
> >
> > On x86 systems, this is sufficient, but on ARM systems, the semantics
> > of EFI_MEMORY_UC regions are quite different from EFI_MEMORY_WC regions,
> > and treating a region like memory (i.e., dereferencing pointers into it
> > or using ordinary CopyMem()/SetMem() functions on it) requires that it
> > be mapped with memory semantics, i.e., EFI_MEMORY_WC, EFI_MEMORY_WT or
> > EFI_MEMORY_WB.
> >
> > Since caching is not appropriate for regions where we rely on side
> > effects, remap the frame buffer EFI_MEMORY_WT.
>
> EFI_MEMORY_WC not WT

If a single pixel is written, then WC may not write it through
immediately. Would WT be more appropriate?

Did you get a chance to test x86 systems with the change?

> > +  //
> > +  // Remap the framebuffer region as write combining. On x86 systems, this is
> > +  // merely a performance optimization, but on ARM systems, it prevents
> > +  // crashes that may result from unaligned accesses, given that we treat the
> > +  // frame buffer as ordinary memory by using CopyMem()/SetMem() on it. While
> > +  // we're at it, set the non-exec attribute so the framebuffer is not
> > +  // exploitable by malware.

I'm pretty sure Laszlo would disagree, but I think a more terse
comment would do fine here. (We always have git blame to get the full
story.)

//
// Set the framebuffer region as write combining (through?) and
// non-executable. For ARM the memory range can't be left in the
// uncachable state.
//

> >  [Protocols]
> > +  gEfiCpuArchProtocolGuid                       # PROTOCOL ALWAYS_CONSUMED

I don't think a 'Driver Model' driver needs to add arch protocols into
the depex.

-Jordan

Ard Biesheuvel Aug. 18, 2017, 5:25 p.m. | #3
On 18 August 2017 at 18:20, Jordan Justen <jordan.l.justen@intel.com> wrote:
> On 2017-08-18 06:04:01, Ard Biesheuvel wrote:
>> On 18 August 2017 at 14:02, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>> > Since caching is not appropriate for regions where we rely on side
>> > effects, remap the frame buffer EFI_MEMORY_WT.
>>
>> EFI_MEMORY_WC not WT
>
> If a single pixel is written, then WC may not write it through
> immediately. Would WT be more appropriate?

For ARM, that applies equally to WT AFAIK.

> Did you get a chance to test x86 systems with the change?

No, I haven't yet.

>> > +  //
>> > +  // Remap the framebuffer region as write combining. On x86 systems, this is
>> > +  // merely a performance optimization, but on ARM systems, it prevents
>> > +  // crashes that may result from unaligned accesses, given that we treat the
>> > +  // frame buffer as ordinary memory by using CopyMem()/SetMem() on it. While
>> > +  // we're at it, set the non-exec attribute so the framebuffer is not
>> > +  // exploitable by malware.
>
> I'm pretty sure Laszlo would disagree, but I think a more terse
> comment would do fine here. (We always have git blame to get the full
> story.)
>
> //
> // Set the framebuffer region as write combining (through?) and
> // non-executable. For ARM the memory range can't be left in the
> // uncachable state.
> //

Fine by me.

>> >  [Protocols]
>> > +  gEfiCpuArchProtocolGuid                       # PROTOCOL ALWAYS_CONSUMED
>
> I don't think a 'Driver Model' driver needs to add arch protocols into
> the depex.

To be pedantic: this is not the depex. You can't rely on the protocol
header to declare gEfiCpuArchProtocolGuid, and declaring it here makes
the build tools add its declaration to AutoGen.h (I think this has to
do with the exact .dsc version. Perhaps Laszlo has a better
recollection of the details.)
Jordan Justen Aug. 18, 2017, 5:49 p.m. | #4
On 2017-08-18 10:25:14, Ard Biesheuvel wrote:
> On 18 August 2017 at 18:20, Jordan Justen <jordan.l.justen@intel.com> wrote:
> > If a single pixel is written, then WC may not write it through
> > immediately. Would WT be more appropriate?
>
> For ARM, that applies equally to WT AFAIK.

Write-through will not actually write-*through*?

I'm not sure how well QEMU/KVM models this for x86 or ARM.

> >> >  [Protocols]
> >> > +  gEfiCpuArchProtocolGuid                       # PROTOCOL ALWAYS_CONSUMED
> >
> > I don't think a 'Driver Model' driver needs to add arch protocols into
> > the depex.
>
> To be pedantic: this is not the depex. You can't rely on the protocol
> header to declare gEfiCpuArchProtocolGuid, and declaring it here makes
> the build tools add its declaration to AutoGen.h (I think this has to
> do with the exact .dsc version. Perhaps Laszlo has a better
> recollection of the details.)

Whoops. You are correct. We want this change...

-Jordan
Leif Lindholm Aug. 18, 2017, 6:08 p.m. | #5
On Fri, Aug 18, 2017 at 10:49:35AM -0700, Jordan Justen wrote:
> On 2017-08-18 10:25:14, Ard Biesheuvel wrote:
> > > If a single pixel is written, then WC may not write it through
> > > immediately. Would WT be more appropriate?
> >
> > For ARM, that applies equally to WT AFAIK.
>
> Write-through will not actually write-*through*?

Not immediately, no.

Conversely, does IA32/X64 WC not guarantee writing out within a finite period?

/
    Leif

Ard Biesheuvel Aug. 18, 2017, 6:16 p.m. | #6
On 18 August 2017 at 18:49, Jordan Justen <jordan.l.justen@intel.com> wrote:
> On 2017-08-18 10:25:14, Ard Biesheuvel wrote:
>> > If a single pixel is written, then WC may not write it through
>> > immediately. Would WT be more appropriate?
>>
>> For ARM, that applies equally to WT AFAIK.
>
> Write-through will not actually write-*through*?

Both WC and WT go through a write buffer which may reorder writes and
does not offer any timeliness guarantees AFAIK.

> I'm not sure how well QEMU/KVM models this for x86 or ARM.

On ARM, we have coherency issues with framebuffers that are backed by
host memory that is mapped cacheable, because writes via the
uncached/write-through mapping of the guest do not invalidate
cachelines that cover it. Hence the 'the VGA driver does not operate
correctly in the first place' in the commit log. But crashing on an
alignment fault is unfriendly, to say the least, and I would prefer to
adhere to the architectural requirements in any case.

Leif Lindholm Aug. 18, 2017, 6:26 p.m. | #7
On Fri, Aug 18, 2017 at 02:02:43PM +0100, Ard Biesheuvel wrote:
> When QemuVideoDxe takes control of the framebuffer, it is already
> mapped EFI_MEMORY_UC by core code, and QemuVideoDxe simply records
> the base and size from the PCI BAR.
>
> On x86 systems, this is sufficient, but on ARM systems, the semantics
> of EFI_MEMORY_UC regions are quite different from EFI_MEMORY_WC regions,
> and treating a region like memory (i.e., dereferencing pointers into it
> or using ordinary CopyMem()/SetMem() functions on it) requires that it
> be mapped with memory semantics, i.e., EFI_MEMORY_WC, EFI_MEMORY_WT or
> EFI_MEMORY_WB.
>
> Since caching is not appropriate for regions where we rely on side
> effects, remap the frame buffer EFI_MEMORY_WT. Given that the ARM
> architecture requires device mappings to be non-executable (to avoid
> inadvertent speculative instruction fetches from device registers),
> retain the non-executable nature by adding the EFI_MEMORY_XP attribute
> as well.
>
> Note that the crashes that this patch aims to prevent can currently only
> occur under KVM, in which case the VGA driver does not operate correctly
> in the first place. However, this is an implementation detail of QEMU
> while running under KVM, and given that the ARM architecture simply does
> not permit unaligned accesses to device memory, it is best to conform
> to this in the code.
>
> Contributed-under: TianoCore Contribution Agreement 1.0
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

From an ARM perspective, this change is clearly correct.
If it is acceptable on IA32/X64 also (with the WT->WC fix in message):
Reviewed-by: Leif Lindholm <leif.lindholm@linaro.org>


If not, we may need some architecture-specific intermediary.

/
    Leif


> index 346a5aed94fa..bbe11257c002 100644

> --- a/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf

> +++ b/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf

> @@ -68,6 +68,7 @@ [LibraryClasses]

>    UefiLib

>  

>  [Protocols]

> +  gEfiCpuArchProtocolGuid                       # PROTOCOL ALWAYS_CONSUMED

>    gEfiDriverSupportedEfiVersionProtocolGuid     # PROTOCOL ALWAYS_PRODUCED

>    gEfiGraphicsOutputProtocolGuid                # PROTOCOL BY_START

>    gEfiDevicePathProtocolGuid                    # PROTOCOL BY_START

> -- 

> 2.11.0

> 

_______________________________________________
edk2-devel mailing list
edk2-devel@lists.01.org
https://lists.01.org/mailman/listinfo/edk2-devel
Jordan Justen Aug. 18, 2017, 7:36 p.m. | #8
On 2017-08-18 11:08:23, Leif Lindholm wrote:
> On Fri, Aug 18, 2017 at 10:49:35AM -0700, Jordan Justen wrote:

> > On 2017-08-18 10:25:14, Ard Biesheuvel wrote:

> > > On 18 August 2017 at 18:20, Jordan Justen <jordan.l.justen@intel.com> wrote:

> > > > On 2017-08-18 06:04:01, Ard Biesheuvel wrote:

> > > >> On 18 August 2017 at 14:02, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:

> > > >> > When QemuVideoDxe takes control of the framebuffer, it is already

> > > >> > mapped EFI_MEMORY_UC by core code, and QemuVideoDxe simply records

> > > >> > the base and size from the PCI BAR.

> > > >> >

> > > >> > On x86 systems, this is sufficient, but on ARM systems, the semantics

> > > >> > of EFI_MEMORY_UC regions are quite different from EFI_MEMORY_WC regions,

> > > >> > and treating a region like memory (i.e., dereferencing pointers into it

> > > >> > or using ordinary CopyMem()/SetMem() functions on it) requires that it

> > > >> > be mapped with memory semantics, i.e., EFI_MEMORY_WC, EFI_MEMORY_WT or

> > > >> > EFI_MEMORY_WB.

> > > >> >

> > > >> > Since caching is not appropriate for regions where we rely on side

> > > >> > effects, remap the frame buffer EFI_MEMORY_WT.

> > > >>

> > > >> EFI_MEMORY_WC not WT

> > > >

> > > > If a single pixel is written, then WC may not write it through

> > > > immediately. Would WT be more appropriate?

> > > 

> > > For ARM, that applies equally to WT AFAIK.

> > 

> > Write-through will not actually write-*through*?

> 

> Not immediately, no.


Hmm, looking at the SDM, it looks like WT is similar to WC for writes,
but it may service reads from cache, whereas WC will not.

Since I assume reading the FB is somewhat rare, it sounds like WC is
probably better.

> 

> Conversely, does IA32/X64 WC not guarantee writing out within a

> finite period?


I'm not sure, but the SDM does say:

"If the WC buffer is partially filled, the writes may be delayed until
 the next occurrence of a serializing event; such as, an SFENCE or
 MFENCE instruction, CPUID execution, a read or write to uncached
 memory, an interrupt occurrence, or a LOCK instruction execution."

And, even more arguing in favor of your change:

"This type of cache-control is appropriate for video frame buffers,
 where the order of writes is unimportant as long as the writes update
 memory so they can be seen on the graphics display."

So, it sounds like there is no strict guarantee, but practically
speaking it will be flushed often enough that a user will never notice
the difference.

Based on this, if you tweak the comment, and test on X64, then:

Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>

Laszlo Ersek Aug. 18, 2017, 9:42 p.m. | #9
comments at the bottom

On 08/18/17 15:02, Ard Biesheuvel wrote:
> When QemuVideoDxe takes control of the framebuffer, it is already

> mapped EFI_MEMORY_UC by core code, and QemuVideoDxe simply records

> the base and size from the PCI BAR.

> 

> On x86 systems, this is sufficient, but on ARM systems, the semantics

> of EFI_MEMORY_UC regions are quite different from EFI_MEMORY_WC regions,

> and treating a region like memory (i.e., dereferencing pointers into it

> or using ordinary CopyMem()/SetMem() functions on it) requires that it

> be mapped with memory semantics, i.e., EFI_MEMORY_WC, EFI_MEMORY_WT or

> EFI_MEMORY_WB.

> 

> Since caching is not appropriate for regions where we rely on side

> effects, remap the frame buffer EFI_MEMORY_WT. Given that the ARM

> architecture requires device mappings to be non-executable (to avoid

> inadvertent speculative instruction fetches from device registers),

> retain the non-executable nature by adding the EFI_MEMORY_XP attribute

> as well.

> 

> Note that the crashes that this patch aims to prevent can currently only

> occur under KVM, in which case the VGA driver does not operate correctly

> in the first place. However, this is an implementation detail of QEMU

> while running under KVM, and given that the ARM architecture simply does

> not permit unaligned accesses to device memory, it is best to conform

> to this in the code.

> 

> Contributed-under: TianoCore Contribution Agreement 1.0

> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

> ---

>  OvmfPkg/QemuVideoDxe/Driver.c         |  5 +++++

>  OvmfPkg/QemuVideoDxe/Gop.c            | 18 ++++++++++++++++--

>  OvmfPkg/QemuVideoDxe/Qemu.h           |  2 ++

>  OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf |  1 +

>  4 files changed, 24 insertions(+), 2 deletions(-)

> 

> diff --git a/OvmfPkg/QemuVideoDxe/Driver.c b/OvmfPkg/QemuVideoDxe/Driver.c

> index 0dce80e59ba8..d81be49d91f3 100644

> --- a/OvmfPkg/QemuVideoDxe/Driver.c

> +++ b/OvmfPkg/QemuVideoDxe/Driver.c

> @@ -69,6 +69,8 @@ QEMU_VIDEO_CARD gQemuVideoCardList[] = {

>      }

>  };

>  

> +EFI_CPU_ARCH_PROTOCOL     *gCpu;

> +

>  static QEMU_VIDEO_CARD*

>  QemuVideoDetect(

>    IN UINT16 VendorId,

> @@ -1103,5 +1105,8 @@ InitializeQemuVideo (

>                    );

>    ASSERT_EFI_ERROR (Status);

>  

> +  Status = gBS->LocateProtocol (&gEfiCpuArchProtocolGuid, NULL, (VOID **)&gCpu);

> +  ASSERT_EFI_ERROR(Status);

> +

>    return Status;

>  }

> diff --git a/OvmfPkg/QemuVideoDxe/Gop.c b/OvmfPkg/QemuVideoDxe/Gop.c

> index 512fd27acbda..a820524db293 100644

> --- a/OvmfPkg/QemuVideoDxe/Gop.c

> +++ b/OvmfPkg/QemuVideoDxe/Gop.c

> @@ -53,7 +53,8 @@ QemuVideoCompleteModeData (

>  {

>    EFI_GRAPHICS_OUTPUT_MODE_INFORMATION  *Info;

>    EFI_ACPI_ADDRESS_SPACE_DESCRIPTOR     *FrameBufDesc;

> -  QEMU_VIDEO_MODE_DATA           *ModeData;

> +  QEMU_VIDEO_MODE_DATA                  *ModeData;

> +  EFI_STATUS                            Status;

>  

>    ModeData = &Private->ModeData[Mode->Mode];

>    Info = Mode->Info;

> @@ -72,8 +73,21 @@ QemuVideoCompleteModeData (

>    DEBUG ((EFI_D_INFO, "FrameBufferBase: 0x%Lx, FrameBufferSize: 0x%Lx\n",

>      Mode->FrameBufferBase, (UINT64)Mode->FrameBufferSize));

>  

> +  //

> +  // Remap the framebuffer region as write combining. On x86 systems, this is

> +  // merely a performance optimization, but on ARM systems, it prevents

> +  // crashes that may result from unaligned accesses, given that we treat the

> +  // frame buffer as ordinary memory by using CopyMem()/SetMem() on it. While

> +  // we're at it, set the non-exec attribute so the framebuffer is not

> +  // exploitable by malware.

> +  //

> +  Status = gCpu->SetMemoryAttributes (gCpu, Mode->FrameBufferBase,

> +                   ALIGN_VALUE (Mode->FrameBufferSize, EFI_PAGE_SIZE),

> +                   EFI_MEMORY_WC | EFI_MEMORY_XP);

> +  ASSERT_EFI_ERROR (Status);

> +

>    FreePool (FrameBufDesc);

> -  return EFI_SUCCESS;

> +  return Status;

>  }

>  

>  STATIC

> diff --git a/OvmfPkg/QemuVideoDxe/Qemu.h b/OvmfPkg/QemuVideoDxe/Qemu.h

> index 7fbb25b3efd3..2966c77c78b3 100644

> --- a/OvmfPkg/QemuVideoDxe/Qemu.h

> +++ b/OvmfPkg/QemuVideoDxe/Qemu.h

> @@ -21,6 +21,7 @@

>  

>  

>  #include <Uefi.h>

> +#include <Protocol/Cpu.h>

>  #include <Protocol/GraphicsOutput.h>

>  #include <Protocol/PciIo.h>

>  #include <Protocol/DriverSupportedEfiVersion.h>

> @@ -164,6 +165,7 @@ extern EFI_DRIVER_BINDING_PROTOCOL                gQemuVideoDriverBinding;

>  extern EFI_COMPONENT_NAME_PROTOCOL                gQemuVideoComponentName;

>  extern EFI_COMPONENT_NAME2_PROTOCOL               gQemuVideoComponentName2;

>  extern EFI_DRIVER_SUPPORTED_EFI_VERSION_PROTOCOL  gQemuVideoDriverSupportedEfiVersion;

> +extern EFI_CPU_ARCH_PROTOCOL                      *gCpu;

>  

>  //

>  // Io Registers defined by VGA

> diff --git a/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf b/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf

> index 346a5aed94fa..bbe11257c002 100644

> --- a/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf

> +++ b/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf

> @@ -68,6 +68,7 @@ [LibraryClasses]

>    UefiLib

>  

>  [Protocols]

> +  gEfiCpuArchProtocolGuid                       # PROTOCOL ALWAYS_CONSUMED

>    gEfiDriverSupportedEfiVersionProtocolGuid     # PROTOCOL ALWAYS_PRODUCED

>    gEfiGraphicsOutputProtocolGuid                # PROTOCOL BY_START

>    gEfiDevicePathProtocolGuid                    # PROTOCOL BY_START

> 


(1) When we added VirtioGpuDxe to the ArmVirtPkg platforms, the only
reason I didn't propose removing QemuVideoDxe from the same platforms
was that QemuVideoDxe was usable on QEMU/TCG, and I figured it wouldn't
hurt to keep it.

Other than that, I see zero point in using this driver on ARM. (And,
apparently, it does hurt to keep it.)

Can we please consider simply removing this driver from the ArmVirtPkg
platforms? (And then some now-conditional compilation could be
simplified in the driver too!)


(2) If there is an agreement that the suggested changes are required (or
"more correct") for x86 as well, then I don't think we should directly
access the CPU arch protocol.

While it is true that any UEFI driver has a built-in depex on all the
*UEFI* architectural protocols, the CPU arch protocol is a PI/DXE arch
protocol, not a UEFI one.

(2a) Instead, the PciIo protocol has member functions like
GetBarAttributes() and SetBarAttributes(). The former can retrieve what
attributes a BAR supports -- and this member func is already called just
above the addition in QemuVideoCompleteModeData() --, whereas the latter
member can apply those attributes. The attributes I have in mind are
EFI_PCI_IO_ATTRIBUTE_MEMORY_WRITE_COMBINE and/or
EFI_PCI_IO_ATTRIBUTE_MEMORY_CACHED. And, the "executable or not"
question should be handled by the PciIo implementation internally (all
BARs should be mapped noexec automatically), so we shouldn't have to
care about that.

... On the other hand, I see comments in PciIoSetBarAttributes()
[MdeModulePkg/Bus/Pci/PciBusDxe/PciIo.c] that make me doubt this
approach would be viable in practice. I guess it should be tested.

(2b) Even if we can't use PciIo for this, I feel that we should still
use the highest-level service that we can, and then I would recommend
gDS->SetMemorySpaceAttributes(). Note that the CPU arch protocol is
introduced like this in the PI spec:

    Abstracts the processor services that are required to implement some
    of the DXE services. This protocol must be produced by a boot
    service or runtime DXE driver and may only be consumed by the DXE
    Foundation and DXE drivers that produce architectural protocols.

IOW, if we decide to explicitly depend on PI/DXE in this UEFI driver,
then at least go through the most abstract DXE service that applies.


(3) More closely regarding the actual patch:

The memory attributes are massaged in QemuVideoCompleteModeData(), but
not in QemuVideoVmwareSvgaCompleteModeData(). (Both are called from
QemuVideoGraphicsOutputSetMode(), dependent on the QEMU video card
model.) I believe the omission is unintended.

(If you agree that this is becoming messy and hard to test, especially
on aarch64/KVM, then please see my point (1).)


(4) I always consider the #inclusion of Protocol and Guid headers
necessary for compilation, and the addition of the same to [Protocols]
and [Guids] in the INF file necessary for linking.

Modifying the INF file might (more recently) pull the necessary GUID
declarations and initializer macros into AutoGen.h as well. But, the
same definitely doesn't work with libraries (i.e., if you add a lib
class to [LibraryClasses], you won't get the lib class header #included
automatically!). So, for consistency, I always do both the #include and
the INF modification.


(5) I don't always insist on very long comment blocks :)


(6) Please update your edk2 git signoff template to "TianoCore
Contribution Agreement 1.1" (from "1.0").

Thanks,
Laszlo
Ard Biesheuvel Aug. 18, 2017, 9:51 p.m. | #10
On 18 August 2017 at 22:42, Laszlo Ersek <lersek@redhat.com> wrote:
> comments at the bottom

>

> On 08/18/17 15:02, Ard Biesheuvel wrote:

>> When QemuVideoDxe takes control of the framebuffer, it is already

>> mapped EFI_MEMORY_UC by core code, and QemuVideoDxe simply records

>> the base and size from the PCI BAR.

>>

>> On x86 systems, this is sufficient, but on ARM systems, the semantics

>> of EFI_MEMORY_UC regions are quite different from EFI_MEMORY_WC regions,

>> and treating a region like memory (i.e., dereferencing pointers into it

>> or using ordinary CopyMem()/SetMem() functions on it) requires that it

>> be mapped with memory semantics, i.e., EFI_MEMORY_WC, EFI_MEMORY_WT or

>> EFI_MEMORY_WB.

>>

>> Since caching is not appropriate for regions where we rely on side

>> effects, remap the frame buffer EFI_MEMORY_WT. Given that the ARM

>> architecture requires device mappings to be non-executable (to avoid

>> inadvertent speculative instruction fetches from device registers),

>> retain the non-executable nature by adding the EFI_MEMORY_XP attribute

>> as well.

>>

>> Note that the crashes that this patch aims to prevent can currently only

>> occur under KVM, in which case the VGA driver does not operate correctly

>> in the first place. However, this is an implementation detail of QEMU

>> while running under KVM, and given that the ARM architecture simply does

>> not permit unaligned accesses to device memory, it is best to conform

>> to this in the code.

>>

>> Contributed-under: TianoCore Contribution Agreement 1.0

>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

>> ---

>>  OvmfPkg/QemuVideoDxe/Driver.c         |  5 +++++

>>  OvmfPkg/QemuVideoDxe/Gop.c            | 18 ++++++++++++++++--

>>  OvmfPkg/QemuVideoDxe/Qemu.h           |  2 ++

>>  OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf |  1 +

>>  4 files changed, 24 insertions(+), 2 deletions(-)

>>

>> diff --git a/OvmfPkg/QemuVideoDxe/Driver.c b/OvmfPkg/QemuVideoDxe/Driver.c

>> index 0dce80e59ba8..d81be49d91f3 100644

>> --- a/OvmfPkg/QemuVideoDxe/Driver.c

>> +++ b/OvmfPkg/QemuVideoDxe/Driver.c

>> @@ -69,6 +69,8 @@ QEMU_VIDEO_CARD gQemuVideoCardList[] = {

>>      }

>>  };

>>

>> +EFI_CPU_ARCH_PROTOCOL     *gCpu;

>> +

>>  static QEMU_VIDEO_CARD*

>>  QemuVideoDetect(

>>    IN UINT16 VendorId,

>> @@ -1103,5 +1105,8 @@ InitializeQemuVideo (

>>                    );

>>    ASSERT_EFI_ERROR (Status);

>>

>> +  Status = gBS->LocateProtocol (&gEfiCpuArchProtocolGuid, NULL, (VOID **)&gCpu);

>> +  ASSERT_EFI_ERROR(Status);

>> +

>>    return Status;

>>  }

>> diff --git a/OvmfPkg/QemuVideoDxe/Gop.c b/OvmfPkg/QemuVideoDxe/Gop.c

>> index 512fd27acbda..a820524db293 100644

>> --- a/OvmfPkg/QemuVideoDxe/Gop.c

>> +++ b/OvmfPkg/QemuVideoDxe/Gop.c

>> @@ -53,7 +53,8 @@ QemuVideoCompleteModeData (

>>  {

>>    EFI_GRAPHICS_OUTPUT_MODE_INFORMATION  *Info;

>>    EFI_ACPI_ADDRESS_SPACE_DESCRIPTOR     *FrameBufDesc;

>> -  QEMU_VIDEO_MODE_DATA           *ModeData;

>> +  QEMU_VIDEO_MODE_DATA                  *ModeData;

>> +  EFI_STATUS                            Status;

>>

>>    ModeData = &Private->ModeData[Mode->Mode];

>>    Info = Mode->Info;

>> @@ -72,8 +73,21 @@ QemuVideoCompleteModeData (

>>    DEBUG ((EFI_D_INFO, "FrameBufferBase: 0x%Lx, FrameBufferSize: 0x%Lx\n",

>>      Mode->FrameBufferBase, (UINT64)Mode->FrameBufferSize));

>>

>> +  //

>> +  // Remap the framebuffer region as write combining. On x86 systems, this is

>> +  // merely a performance optimization, but on ARM systems, it prevents

>> +  // crashes that may result from unaligned accesses, given that we treat the

>> +  // frame buffer as ordinary memory by using CopyMem()/SetMem() on it. While

>> +  // we're at it, set the non-exec attribute so the framebuffer is not

>> +  // exploitable by malware.

>> +  //

>> +  Status = gCpu->SetMemoryAttributes (gCpu, Mode->FrameBufferBase,

>> +                   ALIGN_VALUE (Mode->FrameBufferSize, EFI_PAGE_SIZE),

>> +                   EFI_MEMORY_WC | EFI_MEMORY_XP);

>> +  ASSERT_EFI_ERROR (Status);

>> +

>>    FreePool (FrameBufDesc);

>> -  return EFI_SUCCESS;

>> +  return Status;

>>  }

>>

>>  STATIC

>> diff --git a/OvmfPkg/QemuVideoDxe/Qemu.h b/OvmfPkg/QemuVideoDxe/Qemu.h

>> index 7fbb25b3efd3..2966c77c78b3 100644

>> --- a/OvmfPkg/QemuVideoDxe/Qemu.h

>> +++ b/OvmfPkg/QemuVideoDxe/Qemu.h

>> @@ -21,6 +21,7 @@

>>

>>

>>  #include <Uefi.h>

>> +#include <Protocol/Cpu.h>

>>  #include <Protocol/GraphicsOutput.h>

>>  #include <Protocol/PciIo.h>

>>  #include <Protocol/DriverSupportedEfiVersion.h>

>> @@ -164,6 +165,7 @@ extern EFI_DRIVER_BINDING_PROTOCOL                gQemuVideoDriverBinding;

>>  extern EFI_COMPONENT_NAME_PROTOCOL                gQemuVideoComponentName;

>>  extern EFI_COMPONENT_NAME2_PROTOCOL               gQemuVideoComponentName2;

>>  extern EFI_DRIVER_SUPPORTED_EFI_VERSION_PROTOCOL  gQemuVideoDriverSupportedEfiVersion;

>> +extern EFI_CPU_ARCH_PROTOCOL                      *gCpu;

>>

>>  //

>>  // Io Registers defined by VGA

>> diff --git a/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf b/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf

>> index 346a5aed94fa..bbe11257c002 100644

>> --- a/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf

>> +++ b/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf

>> @@ -68,6 +68,7 @@ [LibraryClasses]

>>    UefiLib

>>

>>  [Protocols]

>> +  gEfiCpuArchProtocolGuid                       # PROTOCOL ALWAYS_CONSUMED

>>    gEfiDriverSupportedEfiVersionProtocolGuid     # PROTOCOL ALWAYS_PRODUCED

>>    gEfiGraphicsOutputProtocolGuid                # PROTOCOL BY_START

>>    gEfiDevicePathProtocolGuid                    # PROTOCOL BY_START

>>

>

> (1) When we added VirtioGpuDxe to the ArmVirtPkg platforms, the only

> reason I didn't propose removing QemuVideoDxe from the same platforms

> was that QemuVideoDxe was usable on QEMU/TCG, and I figured it wouldn't

> hurt to keep it.

>

> Other than that, I see zero point in using this driver on ARM. (And,

> apparently, it does hurt to keep it.)

>

> Can we please consider simply removing this driver from the ArmVirtPkg

> platforms? (And then some now-conditional compilation could be

> simplified in the driver too!)

>


It is actually quite useful in TCG mode, and the fact that QEMU
currently allows unaligned accesses to device memory is not something
we should be relying upon.

>

> (2) If there is an agreement that the suggested changes are required (or

> "more correct") for x86 as well, then I don't think we should directly

> access the CPU arch protocol.

>

> While it is true that any UEFI driver has a built-in depex on all the

> *UEFI* architectural protocols, the CPU arch protocol is a PI/DXE arch

> protocol, not a UEFI one.

>

> (2a) Instead, the PciIo protocol has member functions like

> GetBarAttributes() and SetBarAttributes(). The former can retrieve what

> attributes a BAR supports -- and this member func is already called just

> above the addition in QemuVideoCompleteModeData() --, whereas the latter

> member can apply those attributes. The attributes I have in mind are

> EFI_PCI_IO_ATTRIBUTE_MEMORY_WRITE_COMBINE and/or

> EFI_PCI_IO_ATTRIBUTE_MEMORY_CACHED. And, the "executable or not"

> question should be handled by the PciIo implementation internally (all

> BARs should be mapped noexec automatically), so we shouldn't have to

> care about that.

>

> ... On the other hand, I see comments in PciIoSetBarAttributes()

> [MdeModulePkg/Bus/Pci/PciBusDxe/PciIo.c] that make me doubt this

> approach would be viable in practice. I guess it should be tested.

>


Thanks for digging that up. I think we should fix the PCI bus driver
if it does not currently support this.

> (2b) Even if we can't use PciIo for this, I feel that we should still

> use the highest-level service that we can, and then I would recommend

> gDS->SetMemorySpaceAttributes(). Note that the CPU arch protocol is

> introduced like this in the PI spec:

>

>     Abstracts the processor services that are required to implement some

>     of the DXE services. This protocol must be produced by a boot

>     service or runtime DXE driver and may only be consumed by the DXE

>     Foundation and DXE drivers that produce architectural protocols.

>

> IOW, if we decide to explicitly depend on PI/DXE in this UEFI driver,

> then at least go through the most abstract DXE service that applies.

>

>

> (3) More closely regarding the actual patch:

>

> The memory attributes are massaged in QemuVideoCompleteModeData(), but

> not in QemuVideoVmwareSvgaCompleteModeData(). (Both are called from

> QemuVideoGraphicsOutputSetMode(), dependent on the QEMU video card

> model.) I believe the omission is unintended.

>

> (If you agree that this is becoming messy and hard to test, especially

> on aarch64/KVM, then please see my point (1).)

>

>

> (4) I always consider the #inclusion of Protocol and Guid headers

> necessary for compilation, and the addition of the same to [Protocols]

> and [Guids] in the INF file necessary for linking.

>

> Modifying the INF file might (more recently) pull the necessary GUID

> declarations and initializer macros into AutoGen.h as well. But, the

> same definitely doesn't work with libraries (i.e., if you add a lib

> class to [LibraryClasses], you won't get the lib class header #included

> automatically!). So, for consistency, I always do both the #include and

> the INF modification.

>


OK

>

> (5) I don't always insist on very long comment blocks :)

>

>

> (6) Please update your edk2 git signoff template to "TianoCore

> Contribution Agreement 1.1" (from "1.0").

>


Sure, thanks for reminding me
Ard Biesheuvel Aug. 22, 2017, 2:31 p.m. | #11
[...]
>>

>> (1) When we added VirtioGpuDxe to the ArmVirtPkg platforms, the only

>> reason I didn't propose removing QemuVideoDxe from the same platforms

>> was that QemuVideoDxe was usable on QEMU/TCG, and I figured it wouldn't

>> hurt to keep it.

>>

>> Other than that, I see zero point in using this driver on ARM. (And,

>> apparently, it does hurt to keep it.)

>>

>> Can we please consider simply removing this driver from the ArmVirtPkg

>> platforms? (And then some now-conditional compilation could be

>> simplified in the driver too!)

>>

>

> It is actually quite useful in TCG mode, and the fact that QEMU

> currently allows unaligned accesses to device memory is not something

> we should be relying upon.

>


Actually, I managed to confuse myself here. The only thing lacking
when running with virtio-gpu rather than VGA is efifb support, due to
the fact that the framebuffer is no longer directly addressable. efifb
is a useful hack on bare metal systems that lack a real framebuffer
driver, but it is hardly something to care deeply about on VMs.

So I am going to change my mind, and agree with Laszlo: let's remove
QemuVideoDxe from ArmVirtQemu; the longer we wait, the more difficult
it becomes, and only TCG users that rely on a GOP protocol being
exposed with direct framebuffer access are going to be affected in the
first place (if any such use cases exist)

Laszlo: any ideas or suggestions you may want to share before I start
working on this?

-- 
Ard.
Laszlo Ersek Aug. 22, 2017, 3:44 p.m. | #12
On 08/22/17 16:31, Ard Biesheuvel wrote:
> [...]

>>>

>>> (1) When we added VirtioGpuDxe to the ArmVirtPkg platforms, the only

>>> reason I didn't propose removing QemuVideoDxe from the same platforms

>>> was that QemuVideoDxe was usable on QEMU/TCG, and I figured it wouldn't

>>> hurt to keep it.

>>>

>>> Other than that, I see zero point in using this driver on ARM. (And,

>>> apparently, it does hurt to keep it.)

>>>

>>> Can we please consider simply removing this driver from the ArmVirtPkg

>>> platforms? (And then some now-conditional compilation could be

>>> simplified in the driver too!)

>>>

>>

>> It is actually quite useful in TCG mode, and the fact that QEMU

>> currently allows unaligned accesses to device memory is not something

>> we should be relying upon.

>>

> 

> Actually, I managed to confuse myself here. The only thing lacking

> when running with virtio-gpu rather than VGA is efifb support, due to

> the fact that the framebuffer is no longer directly addressable. efifb

> is a useful hack on bare metal systems that lack a real framebuffer

> driver, but it is hardly something to care deeply about on VMs.


(Some side thoughts:

The "virtiodrmfb" driver in the Linux guest works fine with
virtio-gpu-pci. It does not activate itself as soon into the boot
process as efifb, but it does provide a fully functional character
console. And X11 works fine too, with the kernel modesetting driver.

Even if efifb had a chance to work (inheriting a framebuffer), it would
be supplanted during boot really quickly by virtiodrmfb. So the only
consequence of "efifb" not working is that the character console comes
to life a bit later, and some early messages are not shown on the VT.)

> So I am going to change my mind, and agree with Laszlo: let's remove

> QemuVideoDxe from ArmVirtQemu; the longer we wait, the more difficult

> it becomes, and only TCG users that rely on a GOP protocol being

> exposed with direct framebuffer access are going to be affected in the

> first place (if any such use cases exist)


I expect this set of users to be empty already.

> Laszlo: any ideas or suggestions you may want to share before I start

> working on this?


Just the above musings. Please go ahead with the QemuVideoDxe removal!

Thank you!
Laszlo

Patch

diff --git a/OvmfPkg/QemuVideoDxe/Driver.c b/OvmfPkg/QemuVideoDxe/Driver.c
index 0dce80e59ba8..d81be49d91f3 100644
--- a/OvmfPkg/QemuVideoDxe/Driver.c
+++ b/OvmfPkg/QemuVideoDxe/Driver.c
@@ -69,6 +69,8 @@  QEMU_VIDEO_CARD gQemuVideoCardList[] = {
     }
 };
 
+EFI_CPU_ARCH_PROTOCOL     *gCpu;
+
 static QEMU_VIDEO_CARD*
 QemuVideoDetect(
   IN UINT16 VendorId,
@@ -1103,5 +1105,8 @@  InitializeQemuVideo (
                   );
   ASSERT_EFI_ERROR (Status);
 
+  Status = gBS->LocateProtocol (&gEfiCpuArchProtocolGuid, NULL, (VOID **)&gCpu);
+  ASSERT_EFI_ERROR(Status);
+
   return Status;
 }
diff --git a/OvmfPkg/QemuVideoDxe/Gop.c b/OvmfPkg/QemuVideoDxe/Gop.c
index 512fd27acbda..a820524db293 100644
--- a/OvmfPkg/QemuVideoDxe/Gop.c
+++ b/OvmfPkg/QemuVideoDxe/Gop.c
@@ -53,7 +53,8 @@  QemuVideoCompleteModeData (
 {
   EFI_GRAPHICS_OUTPUT_MODE_INFORMATION  *Info;
   EFI_ACPI_ADDRESS_SPACE_DESCRIPTOR     *FrameBufDesc;
-  QEMU_VIDEO_MODE_DATA           *ModeData;
+  QEMU_VIDEO_MODE_DATA                  *ModeData;
+  EFI_STATUS                            Status;
 
   ModeData = &Private->ModeData[Mode->Mode];
   Info = Mode->Info;
@@ -72,8 +73,21 @@  QemuVideoCompleteModeData (
   DEBUG ((EFI_D_INFO, "FrameBufferBase: 0x%Lx, FrameBufferSize: 0x%Lx\n",
     Mode->FrameBufferBase, (UINT64)Mode->FrameBufferSize));
 
+  //
+  // Remap the framebuffer region as write combining. On x86 systems, this is
+  // merely a performance optimization, but on ARM systems, it prevents
+  // crashes that may result from unaligned accesses, given that we treat the
+  // frame buffer as ordinary memory by using CopyMem()/SetMem() on it. While
+  // we're at it, set the non-exec attribute so the framebuffer is not
+  // exploitable by malware.
+  //
+  Status = gCpu->SetMemoryAttributes (gCpu, Mode->FrameBufferBase,
+                   ALIGN_VALUE (Mode->FrameBufferSize, EFI_PAGE_SIZE),
+                   EFI_MEMORY_WC | EFI_MEMORY_XP);
+  ASSERT_EFI_ERROR (Status);
+
   FreePool (FrameBufDesc);
-  return EFI_SUCCESS;
+  return Status;
 }
 
 STATIC
diff --git a/OvmfPkg/QemuVideoDxe/Qemu.h b/OvmfPkg/QemuVideoDxe/Qemu.h
index 7fbb25b3efd3..2966c77c78b3 100644
--- a/OvmfPkg/QemuVideoDxe/Qemu.h
+++ b/OvmfPkg/QemuVideoDxe/Qemu.h
@@ -21,6 +21,7 @@ 
 
 
 #include <Uefi.h>
+#include <Protocol/Cpu.h>
 #include <Protocol/GraphicsOutput.h>
 #include <Protocol/PciIo.h>
 #include <Protocol/DriverSupportedEfiVersion.h>
@@ -164,6 +165,7 @@  extern EFI_DRIVER_BINDING_PROTOCOL                gQemuVideoDriverBinding;
 extern EFI_COMPONENT_NAME_PROTOCOL                gQemuVideoComponentName;
 extern EFI_COMPONENT_NAME2_PROTOCOL               gQemuVideoComponentName2;
 extern EFI_DRIVER_SUPPORTED_EFI_VERSION_PROTOCOL  gQemuVideoDriverSupportedEfiVersion;
+extern EFI_CPU_ARCH_PROTOCOL                      *gCpu;
 
 //
 // Io Registers defined by VGA
diff --git a/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf b/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf
index 346a5aed94fa..bbe11257c002 100644
--- a/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf
+++ b/OvmfPkg/QemuVideoDxe/QemuVideoDxe.inf
@@ -68,6 +68,7 @@  [LibraryClasses]
   UefiLib
 
 [Protocols]
+  gEfiCpuArchProtocolGuid                       # PROTOCOL ALWAYS_CONSUMED
   gEfiDriverSupportedEfiVersionProtocolGuid     # PROTOCOL ALWAYS_PRODUCED
   gEfiGraphicsOutputProtocolGuid                # PROTOCOL BY_START
   gEfiDevicePathProtocolGuid                    # PROTOCOL BY_START