[Xen-devel] xen/arm: Alternative start of day cache coherency

Message ID 1390408531.32519.78.camel@kazak.uk.xensource.com
State New

Commit Message

Ian Campbell Jan. 22, 2014, 4:35 p.m.
Julien,

I wonder if the following is any better than the current stuff in
staging for the issue you are seeing with BSD at start of day? Can you
try it please.

It has survived >1000 bootloops on Midway and >50 on Mustang, both are
still going.

It basically does a cache clean on all RAM mapped in the p2m. Anything
in the cache is either the result of an earlier scrub of the page or
something the toolstack just wrote, so there is no need to be concerned
about clean vs. invalidate -- clean is always correct.

This should ensure that the guest has no dirty pages when it starts.
It also nobbles the HCR_DC-based stuff, since that is no longer
necessary; this avoids concerns about guests which enable the MMU
before their caches.

It contains debug BUG()s in various trap locations to catch the guest
experiencing any incoherence, and still has lots of other debugging
left in (plus hacks w.r.t. prototypes not in headers).

Ian.

Comments

Ian Campbell Jan. 23, 2014, 9:33 a.m. | #1
On Wed, 2014-01-22 at 17:32 +0000, Stefano Stabellini wrote:
> > +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
> > +{
> > +    DECLARE_DOMCTL;
> > +    domctl.cmd = XEN_DOMCTL_cacheflush;
> > +    domctl.domain = (domid_t)domid;
> > +    domctl.u.cacheflush.start_mfn = 0;
> > +    return do_domctl(xch, &domctl);
> > +}
> 
> Do we really need to flush the entire p2m, or just things we have
> written to?

I think we need to flush everything (well, all RAM backed pages, the
patch skips everything else).

Even things which we haven't explicitly written to will have been
scrubbed and therefore have scrubbed data in the cache but data
belonging to the previous owner in actual RAM. So we would really want
to clean in that case too.

We could do the clean at scrub time, which would arguably be better
anyway and would potentially allow us to only invalidate instead of
clean+invalidate some subset of pages, but we would need to track which
sort of page was which -- e.g. with a special p2m type for a page which
had been foreign mapped, or some other bit of metadata.

> > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > index 85ca330..f35ed57 100644
> > --- a/xen/arch/arm/p2m.c
> > +++ b/xen/arch/arm/p2m.c
> > @@ -228,15 +228,26 @@ enum p2m_operation {
> >      ALLOCATE,
> >      REMOVE,
> >      RELINQUISH,
> > +    CACHEFLUSH,
> >  };
> >  
> > +static void do_one_cacheflush(paddr_t mfn)
> > +{
> > +    void *v = map_domain_page(mfn);
> > +
> > +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> > +
> > +    unmap_domain_page(v);
> > +}
> 
> A pity that we need to map a page just to flush the dcache.  It could be
> expensive, especially if we really have to map every single guest mfn.

Remember that this is basically free on arm64, and on arm32 we
actually map 2MB regions and cache the mapping, so it is really only
one map per 2MB region.

> I wonder if we could use DCCSW instead.

There is no way to use it per-VMID, so we would have to blow away the
entire cache. DCCSW is also very tricky to use in an SMP system; you
might need to do some sort of stop-machine trick (although perhaps for
this use case we know the tools in dom0 have only a single thread
touching the foreign memory and the guest itself isn't running). I'd
need to think very carefully about that case, but since it involves
flushing the entire cache I'm not inclined to go down that path in the
first place.

> > +        case RELINQUISH:
> > +        case CACHEFLUSH:
> > +            if ( count >= 0x2000 && hypercall_preempt_check() )
> >              {
> >                  p2m->next_gfn_to_relinquish = addr >> PAGE_SHIFT;
> 
> If we are taking this code path for cache flushes, then we should rename
> next_gfn_to_relinquish to something more generic.

Yes, this was just a proof of concept so I didn't bother, but really
this is minimum_mapped_p2m.

create_p2m_entries should also be walk_p2m_entries or some such.

Ian.
Ian Campbell Jan. 23, 2014, 9:56 a.m. | #2
On Wed, 2014-01-22 at 19:06 +0000, Julien Grall wrote:
> On 01/22/2014 04:35 PM, Ian Campbell wrote:
> > Julien,
> 
> Hi Ian,
> 
> > I wonder if the following is any better than the current stuff in
> > staging for the issue you are seeing with BSD at start of day? Can you
> > try it please.
> 
> Thanks for the patch! It allows me to boot FreeBSD correctly (i.e. with
> Write-Through for the first page table) on Midway.

Perfect. I'm inclined to clean this up and put it forward for 4.4
then.

> > It has survived >1000 bootloops on Midway and >50 on Mustang, both are
> > still going.
> > 
> > It basically does a cache clean on all RAM mapped in the p2m. Anything
> > in the cache is either the result of an earlier scrub of the page or
> > something the toolstack just wrote, so there is no need to be concerned
> > about clean vs. invalidate -- clean is always correct.
> 
> I don't remember what the conclusion was... is it necessary to flush all
> the RAM? Flushing the kernel/initrd/DTB space should be enough.

See my reply to Stefano -- we need to be concerned about scrubbed page
data in the cache which is masking actual data from the previous owner.

Ian.
Stefano Stabellini Jan. 23, 2014, 1:55 p.m. | #3
On Thu, 23 Jan 2014, Ian Campbell wrote:
> On Wed, 2014-01-22 at 17:32 +0000, Stefano Stabellini wrote:
> > > +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
> > > +{
> > > +    DECLARE_DOMCTL;
> > > +    domctl.cmd = XEN_DOMCTL_cacheflush;
> > > +    domctl.domain = (domid_t)domid;
> > > +    domctl.u.cacheflush.start_mfn = 0;
> > > +    return do_domctl(xch, &domctl);
> > > +}
> > 
> > Do we really need to flush the entire p2m, or just things we have
> > written to?
> 
> I think we need to flush everything (well, all RAM backed pages, the
> patch skips everything else).
> 
> Even things which we haven't explicitly written to will have been
> scrubbed and therefore have scrubbed data in the cache but data
> belonging to the previous owner in actual RAM. So we would really want
> to clean in that case too.
> 
> We could do the clean at scrub time, which would arguably be better
> anyway and would potentially allow us to only invalidate instead of
> clean+invalidate some subset of pages, but we would need to track which
> sort of page was which -- e.g. with a special p2m type for a page which
> had been foreign mapped, or some other bit of metadata.

That seems like the way to go.


> > > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > > index 85ca330..f35ed57 100644
> > > --- a/xen/arch/arm/p2m.c
> > > +++ b/xen/arch/arm/p2m.c
> > > @@ -228,15 +228,26 @@ enum p2m_operation {
> > >      ALLOCATE,
> > >      REMOVE,
> > >      RELINQUISH,
> > > +    CACHEFLUSH,
> > >  };
> > >  
> > > +static void do_one_cacheflush(paddr_t mfn)
> > > +{
> > > +    void *v = map_domain_page(mfn);
> > > +
> > > +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> > > +
> > > +    unmap_domain_page(v);
> > > +}
> > 
> > A pity that we need to map a page just to flush the dcache.  It could be
> > expensive, especially if we really have to map every single guest mfn.
> 
> Remember that this is basically free on arm64, and on arm32 we
> actually map 2MB regions and cache the mapping, so it is really only
> one map per 2MB region.

Even with 2MB at a time it is easy for this to become really slow. A 4GB
guest would need 2048 iterations of map/flush/unmap. I don't have any
numbers but I bet they won't look good.
At least if it was combined with the RAM scrub we would save the 2048
map/unmap.
Ian Campbell Jan. 23, 2014, 2:01 p.m. | #4
On Thu, 2014-01-23 at 13:55 +0000, Stefano Stabellini wrote:
> On Thu, 23 Jan 2014, Ian Campbell wrote:
> > On Wed, 2014-01-22 at 17:32 +0000, Stefano Stabellini wrote:
> > > > [...]
> > > 
> > > Do we really need to flush the entire p2m, or just things we have
> > > written to?
> > 
> > I think we need to flush everything (well, all RAM backed pages, the
> > patch skips everything else).
> > 
> > Even things which we haven't explicitly written to will have been
> > scrubbed and therefore have scrubbed data in the cache but data
> > belonging to the previous owner in actual RAM. So we would really want
> > to clean in that case too.
> > 
> > We could do the clean at scrub time, which would arguably be better
> > anyway and would potentially allow us to only invalidate instead of
> > clean+invalidate some subset of pages, but we would need to track which
> > sort of page was which -- e.g. with a special p2m type for a page which
> > had been foreign mapped, or some other bit of metadata.
> 
> That seems like the way to go.

I'm not convinced actually, and in any case, not for 4.4...

> > > > [...]
> > > 
> > > A pity that we need to map a page just to flush the dcache.  It could be
> > > expensive, especially if we really have to map every single guest mfn.
> > 
> > Remember that this is basically free on arm64, and on arm32 we
> > actually map 2MB regions and cache the mapping, so it is really only
> > one map per 2MB region.
> 
> Even with 2MB at a time it is easy for this to become really slow. A 4GB
> guest would need 2048 iterations of map/flush/unmap. I don't have any
> numbers but I bet they won't look good.

It happens once during domain build, and the delay isn't noticeable to
me compared with the rest of the build process.

> At least if it was combined with the RAM scrub we would save the 2048
> map/unmap.

The scrub has to map it in exactly the same way.

Ian.
Stefano Stabellini Jan. 23, 2014, 2:56 p.m. | #5
On Thu, 23 Jan 2014, Ian Campbell wrote:
> On Thu, 2014-01-23 at 13:55 +0000, Stefano Stabellini wrote:
> > On Thu, 23 Jan 2014, Ian Campbell wrote:
> > > On Wed, 2014-01-22 at 17:32 +0000, Stefano Stabellini wrote:
> > > > > [...]
> > > > 
> > > > Do we really need to flush the entire p2m, or just things we have
> > > > written to?
> > > 
> > > I think we need to flush everything (well, all RAM backed pages, the
> > > patch skips everything else).
> > > 
> > > Even things which we haven't explicitly written to will have been
> > > scrubbed and therefore have scrubbed data in the cache but data
> > > belonging to the previous owner in actual RAM. So we would really want
> > > to clean in that case too.
> > > 
> > > We could do the clean at scrub time, which would arguably be better
> > > anyway and would potentially allow us to only invalidate instead of
> > > clean+invalidate some subset of pages, but we would need to track which
> > > sort of page was which -- e.g. with a special p2m type for a page which
> > > had been foreign mapped, or some other bit of metadata.
> > 
> > That seems like the way to go.
> 
> I'm not convinced actually, and in any case, not for 4.4...
> 
> > > > > [...]
> > > > 
> > > > A pity that we need to map a page just to flush the dcache.  It could be
> > > > expensive, especially if we really have to map every single guest mfn.
> > > 
> > > Remember that this is basically free on arm64, and on arm32 we
> > > actually map 2MB regions and cache the mapping, so it is really only
> > > one map per 2MB region.
> > 
> > Even with 2MB at a time it is easy for this to become really slow. A 4GB
> > guest would need 2048 iterations of map/flush/unmap. I don't have any
> > numbers but I bet they won't look good.
> 
> It happens once during domain build, and the delay isn't noticeable to
> me compared with the rest of the build process.
> 
> > At least if it was combined with the RAM scrub we would save the 2048
> > map/unmap.
> 
> The scrub has to map it in exactly the same way.

Right, since we are already doing it once, why do it twice? :)
Ian Campbell Jan. 23, 2014, 3 p.m. | #6
On Thu, 2014-01-23 at 14:56 +0000, Stefano Stabellini wrote:
> On Thu, 23 Jan 2014, Ian Campbell wrote:
> > On Thu, 2014-01-23 at 13:55 +0000, Stefano Stabellini wrote:
> > > At least if it was combined with the RAM scrub we would save the 2048
> > > map/unmap.
> > 
> > The scrub has to map it in exactly the same way.
> 
> Right, since we are already doing it once, why do it twice? :)

Because you need a load of other infrastructure (to track clean vs dirty
guest pages) too for it to be of any benefit.

Ian.

Patch

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index e1d1bec..60c3091 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -48,6 +48,14 @@  int xc_domain_create(xc_interface *xch,
     return 0;
 }
 
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_cacheflush;
+    domctl.domain = (domid_t)domid;
+    domctl.u.cacheflush.start_mfn = 0;
+    return do_domctl(xch, &domctl);
+}
 
 int xc_domain_pause(xc_interface *xch,
                     uint32_t domid)
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..43dae5c 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -453,6 +453,7 @@  int xc_domain_create(xc_interface *xch,
                      xen_domain_handle_t handle,
                      uint32_t flags,
                      uint32_t *pdomid);
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid);
 
 
 /* Functions to produce a dump of a given domain
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index a604cd8..55c86f0 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -1364,7 +1364,10 @@  static void domain_create_cb(libxl__egc *egc,
     STATE_AO_GC(cdcs->dcs.ao);
 
     if (!rc)
+    {
         *cdcs->domid_out = domid;
+        xc_domain_cacheflush(CTX->xch, domid);
+    }
 
     libxl__ao_complete(egc, ao, rc);
 }
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 635a9a4..2edd09d 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -475,7 +475,8 @@  int vcpu_initialise(struct vcpu *v)
         return rc;
 
     v->arch.sctlr = SCTLR_GUEST_INIT;
-    v->arch.default_cache = true;
+    //v->arch.default_cache = true;
+    v->arch.default_cache = false;
 
     /*
      * By default exposes an SMP system with AFF0 set to the VCPU ID
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 546e86b..9e3b37d 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -11,12 +11,24 @@ 
 #include <xen/sched.h>
 #include <xen/hypercall.h>
 #include <public/domctl.h>
+#include <xen/guest_access.h>
+
+extern long p2m_cache_flush(struct domain *d, xen_pfn_t *start_mfn);
 
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_cacheflush:
+    {
+        long rc = p2m_cache_flush(d, &domctl->u.cacheflush.start_mfn);
+        if ( __copy_to_guest(u_domctl, domctl, 1) )
+            rc = -EFAULT;
+
+        return rc;
+    }
+
     default:
         return subarch_do_domctl(domctl, d, u_domctl);
     }
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 85ca330..f35ed57 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -228,15 +228,26 @@  enum p2m_operation {
     ALLOCATE,
     REMOVE,
     RELINQUISH,
+    CACHEFLUSH,
 };
 
+static void do_one_cacheflush(paddr_t mfn)
+{
+    void *v = map_domain_page(mfn);
+
+    flush_xen_dcache_va_range(v, PAGE_SIZE);
+
+    unmap_domain_page(v);
+}
+
 static int create_p2m_entries(struct domain *d,
                      enum p2m_operation op,
                      paddr_t start_gpaddr,
                      paddr_t end_gpaddr,
                      paddr_t maddr,
                      int mattr,
-                     p2m_type_t t)
+                     p2m_type_t t,
+                     xen_pfn_t *last_mfn)
 {
     int rc;
     struct p2m_domain *p2m = &d->arch.p2m;
@@ -381,18 +392,42 @@  static int create_p2m_entries(struct domain *d,
                     count++;
                 }
                 break;
+            case CACHEFLUSH:
+                {
+                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
+                    {
+                        count++;
+                        break;
+                    }
+
+                    count += 0x10;
+
+                    do_one_cacheflush(pte.p2m.base);
+                }
+                break;
         }
 
+        if ( last_mfn )
+            *last_mfn = addr >> PAGE_SHIFT;
+
         /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
-        if ( op == RELINQUISH && count >= 0x2000 )
+        switch ( op )
         {
-            if ( hypercall_preempt_check() )
+        case RELINQUISH:
+        case CACHEFLUSH:
+            if ( count >= 0x2000 && hypercall_preempt_check() )
             {
                 p2m->next_gfn_to_relinquish = addr >> PAGE_SHIFT;
                 rc = -EAGAIN;
                 goto out;
             }
             count = 0;
+            break;
+        case INSERT:
+        case ALLOCATE:
+        case REMOVE:
+            /* No preemption */
+            break;
         }
 
         /* Got the next page */
@@ -439,7 +474,7 @@  int p2m_populate_ram(struct domain *d,
                      paddr_t end)
 {
     return create_p2m_entries(d, ALLOCATE, start, end,
-                              0, MATTR_MEM, p2m_ram_rw);
+                              0, MATTR_MEM, p2m_ram_rw, NULL);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -448,7 +483,7 @@  int map_mmio_regions(struct domain *d,
                      paddr_t maddr)
 {
     return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr,
-                              maddr, MATTR_DEV, p2m_mmio_direct);
+                              maddr, MATTR_DEV, p2m_mmio_direct, NULL);
 }
 
 int guest_physmap_add_entry(struct domain *d,
@@ -460,7 +495,7 @@  int guest_physmap_add_entry(struct domain *d,
     return create_p2m_entries(d, INSERT,
                               pfn_to_paddr(gpfn),
                               pfn_to_paddr(gpfn + (1 << page_order)),
-                              pfn_to_paddr(mfn), MATTR_MEM, t);
+                              pfn_to_paddr(mfn), MATTR_MEM, t, NULL);
 }
 
 void guest_physmap_remove_page(struct domain *d,
@@ -470,7 +505,7 @@  void guest_physmap_remove_page(struct domain *d,
     create_p2m_entries(d, REMOVE,
                        pfn_to_paddr(gpfn),
                        pfn_to_paddr(gpfn + (1<<page_order)),
-                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid, NULL);
 }
 
 int p2m_alloc_table(struct domain *d)
@@ -622,7 +657,28 @@  int relinquish_p2m_mapping(struct domain *d)
                               pfn_to_paddr(p2m->next_gfn_to_relinquish),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
-                              MATTR_MEM, p2m_invalid);
+                              MATTR_MEM, p2m_invalid, NULL);
+}
+
+long p2m_cache_flush(struct domain *d, xen_pfn_t *start_mfn)
+{
+    struct p2m_domain *p2m = &d->arch.p2m;
+
+    printk("dom%d p2m cache flush from mfn %"PRI_xen_pfn" RELIN %lx\n",
+           d->domain_id, *start_mfn, p2m->next_gfn_to_relinquish);
+
+    *start_mfn = MAX(*start_mfn, p2m->next_gfn_to_relinquish);
+
+    printk("dom%d p2m cache flush: %"PRIpaddr"-%"PRIpaddr"\n",
+           d->domain_id,
+           pfn_to_paddr(*start_mfn),
+           pfn_to_paddr(p2m->max_mapped_gfn));
+
+    return create_p2m_entries(d, CACHEFLUSH,
+                              pfn_to_paddr(*start_mfn),
+                              pfn_to_paddr(p2m->max_mapped_gfn),
+                              pfn_to_paddr(INVALID_MFN),
+                              MATTR_MEM, p2m_invalid, start_mfn);
 }
 
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 48a6fcc..546f7ce 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1283,6 +1283,7 @@  static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
 
 static void update_sctlr(struct vcpu *v, uint32_t val)
 {
+    BUG();
     /*
      * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
      * because they are incompatible.
@@ -1628,6 +1629,7 @@  static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
                                       union hsr hsr)
 {
     register_t addr = READ_SYSREG(FAR_EL2);
+    BUG();
     inject_iabt_exception(regs, addr, hsr.len);
 }
 
@@ -1683,6 +1685,8 @@  static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     }
 
 bad_data_abort:
+    show_execution_state(regs);
+    panic("DABT");
     inject_dabt_exception(regs, info.gva, hsr.len);
 }
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..d7b22c3 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,13 @@  struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+struct xen_domctl_cacheflush {
+    /* Updated for progress */
+    xen_pfn_t start_mfn;
+};
+typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +961,7 @@  struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_cacheflush                    71
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1012,6 +1020,7 @@  struct xen_domctl {
         struct xen_domctl_set_max_evtchn    set_max_evtchn;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
+        struct xen_domctl_cacheflush        cacheflush;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         uint8_t                             pad[128];