
net/mlx5: remove unwanted barrier

Message ID 20210606164948.35997-1-honnappa.nagarahalli@arm.com
State New
Series net/mlx5: remove unwanted barrier

Commit Message

Honnappa Nagarahalli June 6, 2021, 4:49 p.m. UTC
The IO barrier is not required as cqe->op_own is read only once. The
checks are done on the local variable, and the memory is not read again.

Fixes: 88c0733535d6 ("net/mlx5: extend Rx completion with error handling")
Cc: matan@mellanox.com
Cc: stable@dpdk.org

Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>

Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>

---
 drivers/common/mlx5/mlx5_common.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

-- 
2.17.1

Comments

Slava Ovsiienko July 1, 2021, 12:52 p.m. UTC | #1
Hi, Honnappa

The rte_io_rmb() was inserted not to prevent an extra access to cqe->op_own
(the volatile qualifier is quite enough for that; if we had any doubts, we would insert rte_compiler_barrier),
but the real intention of rte_io_rmb() was to isolate the cqe->op_own loads at the hardware level.

cqe points to the Completion Queue Entry (CQE), which is mapped to memory that is continuously
being updated by the device (NIC). The CQE is a 64B structure and op_own is located at its end;
it is updated by HW last, after the entire CQE has been written to host memory.
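
For orientation, a rough C sketch of the layout being described (an illustrative sketch only - the real struct mlx5_cqe is defined in drivers/common/mlx5/mlx5_prm.h with many more named fields):

/* Illustrative only - not the actual DPDK definition. */
struct mlx5_cqe_sketch {
	uint8_t completion_info[63];	/* byte count, flags, checksum, ...
					 * written by the NIC first */
	uint8_t op_own;			/* opcode + ownership bit, written
					 * by the NIC last: flips HW -> SW */
};	/* sizeof() == 64, i.e. one CQE */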

Once cqe_check() detects that the CQE is owned by software (the hardware has completed the operation),
the PMD starts touching other CQE fields, i.e. the next load transactions from the CQE are triggered.
We must make sure these loads happen in the correct order - only after the cqe->op_own load has completed
and valid ownership flags have been seen - i.e. we must not allow speculative reads that could fetch incorrect values.

Just a hypothetical case (I agree in advance - it is very unlikely, but not impossible :)):

1. owner = cqe->op_own - load A triggered
2. some code is being speculatively executed, no barrier
3. length = cqe->length - load B triggered
4. Let's suppose the CPU reordered A and B, i.e. the order of loads is: B, A
5. In memory/CPU cache we have the old CQE, owned by HW
6. Load B gets the old length value (invalid)
7. Hardware writes the new CQE and the CPU cache is invalidated
8. Load A sees the CQE is owned by SW, and the invalid result of load B will be used by the PMD

Hence, I would consider the patch risky, and one that is extremely hard to cover completely with tests -
we would have to test for race conditions on multiple architectures, many CPU models, PCIe root complexes, etc.
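
To make the hazard concrete, here is a minimal C sketch of the polling pattern under discussion (a sketch only: poll_cqe and the use of byte_cnt are illustrative simplifications of the real check_cqe()):

#include <rte_atomic.h>
#include <rte_byteorder.h>

/* 'cqe' points to host memory that the NIC DMA-writes. */
static inline uint32_t
poll_cqe(volatile struct mlx5_cqe *cqe, uint16_t cqes_n, uint16_t ci)
{
	uint8_t op_own = cqe->op_own;			/* load A */

	if ((op_own & MLX5_CQE_OWNER_MASK) != (!!(ci & cqes_n)))
		return 0;				/* still owned by HW */
	/*
	 * Without a read barrier here, load B below may be issued or
	 * satisfied before load A and return stale bytes from the
	 * previous, HW-owned incarnation of this CQE - exactly the
	 * scenario in steps 1-8 above.
	 */
	rte_io_rmb();
	return rte_be_to_cpu_32(cqe->byte_cnt);		/* load B */
}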

With best regards,
Slava

> -----Original Message-----
> From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Sent: Sunday, June 6, 2021 19:50
> To: dev@dpdk.org; honnappa.nagarahalli@arm.com; Matan Azrad
> <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>
> Cc: ruifeng.wang@arm.com; Matan Azrad <matan@nvidia.com>;
> stable@dpdk.org
> Subject: [PATCH] net/mlx5: remove unwanted barrier
> 
> The IO barrier is not required as cqe->op_own is read only once. The
> checks are done on the local variable, and the memory is not read again.
> 
> Fixes: 88c0733535d6 ("net/mlx5: extend Rx completion with error handling")
> Cc: matan@mellanox.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
>  drivers/common/mlx5/mlx5_common.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/common/mlx5/mlx5_common.h
> b/drivers/common/mlx5/mlx5_common.h
> index 5028a05b49..a4c29f51f1 100644
> --- a/drivers/common/mlx5/mlx5_common.h
> +++ b/drivers/common/mlx5/mlx5_common.h
> @@ -195,7 +195,7 @@ check_cqe(volatile struct mlx5_cqe *cqe, const
> uint16_t cqes_n,
> 
>  	if (unlikely((op_owner != (!!(idx))) || (op_code ==
> MLX5_CQE_INVALID)))
>  		return MLX5_CQE_STATUS_HW_OWN;
> -	rte_io_rmb();
> +
>  	if (unlikely(op_code == MLX5_CQE_RESP_ERR ||
>  		     op_code == MLX5_CQE_REQ_ERR))
>  		return MLX5_CQE_STATUS_ERR;
> --
> 2.17.1
Honnappa Nagarahalli July 2, 2021, 9:51 p.m. UTC | #2
Hi Slava,
	Thanks for taking the pain to explain this (this is similar to the DD bits in the Intel i40e PMD). Agreed that a barrier is needed to ensure the ordering of the loads between op_own and the rest of the fields. On the Arm architecture, we can use a barrier for the inner shareable domain (rte_smp_rmb()), which is less costly than one for the outer shareable domain (rte_io_rmb()). I will re-spin this patch.
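
For reference, a sketch of the check_cqe() hunk such a re-spin would produce (a sketch only; the actual v2 below ended up using rte_atomic_thread_fence(__ATOMIC_ACQUIRE), which generates the same "dmb ishld" on aarch64):

	if (unlikely((op_owner != (!!(idx))) || (op_code == MLX5_CQE_INVALID)))
		return MLX5_CQE_STATUS_HW_OWN;
	/* Order the op_own load before the loads of the other CQE fields;
	 * an inner shareable barrier (dmb ishld) is sufficient on Arm.
	 */
	rte_smp_rmb();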

Thanks,
Honnappa

> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Thursday, July 1, 2021 7:53 AM
> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>;
> dev@dpdk.org; Matan Azrad <matan@nvidia.com>; Shahaf Shuler
> <shahafs@nvidia.com>
> Cc: Ruifeng Wang <Ruifeng.Wang@arm.com>; Matan Azrad
> <matan@nvidia.com>; stable@dpdk.org
> Subject: RE: [PATCH] net/mlx5: remove unwanted barrier
> 
> Hi, Honnappa
> 
> The rte_io_rmb() was inserted not to prevent an extra access to
> cqe->op_own (the volatile qualifier is quite enough for that; if we had
> any doubts, we would insert rte_compiler_barrier), but the real
> intention of rte_io_rmb() was to isolate the cqe->op_own loads at the
> hardware level.
> 
> cqe points to the Completion Queue Entry (CQE), which is mapped to
> memory that is continuously being updated by the device (NIC). The CQE
> is a 64B structure and op_own is located at its end; it is updated by
> HW last, after the entire CQE has been written to host memory.
> 
> Once cqe_check() detects that the CQE is owned by software (the
> hardware has completed the operation), the PMD starts touching other
> CQE fields, i.e. the next load transactions from the CQE are triggered.
> We must make sure these loads happen in the correct order - only after
> the cqe->op_own load has completed and valid ownership flags have been
> seen - i.e. we must not allow speculative reads that could fetch
> incorrect values.
> 
> Just a hypothetical case (I agree in advance - it is very unlikely,
> but not impossible :)):
> 
> 1. owner = cqe->op_own - load A triggered
> 2. some code is being speculatively executed, no barrier
> 3. length = cqe->length - load B triggered
> 4. Let's suppose the CPU reordered A and B, i.e. the order of loads is: B, A
> 5. In memory/CPU cache we have the old CQE, owned by HW
> 6. Load B gets the old length value (invalid)
> 7. Hardware writes the new CQE and the CPU cache is invalidated
> 8. Load A sees the CQE is owned by SW, and the invalid result of load B
>    will be used by the PMD
> 
> Hence, I would consider the patch risky, and one that is extremely
> hard to cover completely with tests - we would have to test for race
> conditions on multiple architectures, many CPU models, PCIe root
> complexes, etc.
> 
> With best regards,
> Slava
> 
> > -----Original Message-----
> > From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Sent: Sunday, June 6, 2021 19:50
> > To: dev@dpdk.org; honnappa.nagarahalli@arm.com; Matan Azrad
> > <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava
> > Ovsiienko <viacheslavo@nvidia.com>
> > Cc: ruifeng.wang@arm.com; Matan Azrad <matan@nvidia.com>;
> > stable@dpdk.org
> > Subject: [PATCH] net/mlx5: remove unwanted barrier
> >
> > The IO barrier is not required as cqe->op_own is read only once. The
> > checks are done on the local variable, and the memory is not read
> > again.
> >
> > Fixes: 88c0733535d6 ("net/mlx5: extend Rx completion with error
> > handling")
> > Cc: matan@mellanox.com
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > ---
> >  drivers/common/mlx5/mlx5_common.h | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/common/mlx5/mlx5_common.h
> > b/drivers/common/mlx5/mlx5_common.h
> > index 5028a05b49..a4c29f51f1 100644
> > --- a/drivers/common/mlx5/mlx5_common.h
> > +++ b/drivers/common/mlx5/mlx5_common.h
> > @@ -195,7 +195,7 @@ check_cqe(volatile struct mlx5_cqe *cqe, const
> > uint16_t cqes_n,
> >
> >  	if (unlikely((op_owner != (!!(idx))) || (op_code ==
> > MLX5_CQE_INVALID)))
> >  		return MLX5_CQE_STATUS_HW_OWN;
> > -	rte_io_rmb();
> > +
> >  	if (unlikely(op_code == MLX5_CQE_RESP_ERR ||
> >  		     op_code == MLX5_CQE_REQ_ERR))
> >  		return MLX5_CQE_STATUS_ERR;
> > --
> > 2.17.1
Slava Ovsiienko Sept. 27, 2022, 6:34 a.m. UTC | #3
Hi, Honnappa

We discussed the barrier here:
http://patches.dpdk.org/project/dpdk/patch/20210606164948.35997-1-honnappa.nagarahalli@arm.com/

(BTW, it is good practice to keep a reference to the previous patch versions below the commit message of the next ones.)

This barrier is not about compiler ordering; it is about memory action completions by an external HW agent.
So, I'm not sure rte_atomic_thread_fence() is safe for x86 - the patch impacts x86 as well.

With best regards,
Slava

> -----Original Message-----
> From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Sent: Tuesday, August 30, 2022 23:01
> To: dev@dpdk.org; honnappa.nagarahalli@arm.com; ruifeng.wang@arm.com; Matan
> Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava
> Ovsiienko <viacheslavo@nvidia.com>
> Cc: nd@arm.com; Matan Azrad <matan@nvidia.com>; stable@dpdk.org
> Subject: [PATCH v2] net/mlx5: use just sufficient barrier for Arm platforms
> 
> cqe->op_own indicates if the CQE is owned by the NIC. The rest of
> the fields in CQE should be read only after op_own is read. On Arm platforms
> using "dmb ishld" is sufficient to enforce this.
> 
> Fixes: 88c0733535d6 ("net/mlx5: extend Rx completion with error handling")
> Cc: matan@mellanox.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
>  drivers/common/mlx5/mlx5_common.h | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/common/mlx5/mlx5_common.h
> b/drivers/common/mlx5/mlx5_common.h
> index 5028a05b49..ac2e85b15f 100644
> --- a/drivers/common/mlx5/mlx5_common.h
> +++ b/drivers/common/mlx5/mlx5_common.h
> @@ -195,7 +195,11 @@ check_cqe(volatile struct mlx5_cqe *cqe, const uint16_t
> cqes_n,
> 
>  	if (unlikely((op_owner != (!!(idx))) || (op_code ==
> MLX5_CQE_INVALID)))
>  		return MLX5_CQE_STATUS_HW_OWN;
> -	rte_io_rmb();
> +	/* Prevent speculative reading of other fields in CQE until
> +	 * CQE is valid.
> +	 */
> +	rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
> +
>  	if (unlikely(op_code == MLX5_CQE_RESP_ERR ||
>  		     op_code == MLX5_CQE_REQ_ERR))
>  		return MLX5_CQE_STATUS_ERR;
> --
> 2.17.1
Honnappa Nagarahalli Sept. 27, 2022, 9:03 p.m. UTC | #4
<snip>

> 
> Hi, Honnappa
Hi Slava, thanks for the feedback.

> 
> We discussed the barrier here:
> http://patches.dpdk.org/project/dpdk/patch/20210606164948.35997-1-
> honnappa.nagarahalli@arm.com/
Yes, I have changed the patch according to the discussion, i.e. a barrier is needed, but a different (inner shareable domain) barrier is required.

> 
> (BTW, it is good practice to keep the reference to previous patch versions
> below Commit Message of the next ones).
> 
> This barrier is not about compiler ordering, it is about external HW agent
> memory action completions.
> So, I'm not sure the rte_atomic_thread_fence() is safe for x86 - patch impacts
> x86 as well.
The earlier barrier, rte_io_rmb(), resolves to a compiler barrier on x86 [1]. rte_atomic_thread_fence(__ATOMIC_ACQUIRE) also acts as just a compiler barrier on x86. So, there is no change for x86.


[1] https://github.com/DPDK/dpdk/blob/main/lib/eal/x86/include/rte_atomic.h#L80
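
For context, the barriers in question reduce to the following on x86 (paraphrased from the header linked above; exact definitions may differ between DPDK releases):

/* lib/eal/x86/include/rte_atomic.h and generic rte_atomic.h, paraphrased */
#define rte_compiler_barrier() do { asm volatile("" ::: "memory"); } while (0)
#define rte_smp_rmb()	rte_compiler_barrier()
#define rte_io_rmb()	rte_compiler_barrier()
/*
 * rte_atomic_thread_fence(__ATOMIC_ACQUIRE) falls through to
 * __atomic_thread_fence(__ATOMIC_ACQUIRE), which emits no instruction on
 * x86 (it only restricts compiler reordering), so the generated code is
 * unchanged by the patch.
 */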

> 
> With best regards,
> Slava
> 
> > -----Original Message-----
> > From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Sent: Tuesday, August 30, 2022 23:01
> > To: dev@dpdk.org; honnappa.nagarahalli@arm.com;
> ruifeng.wang@arm.com;
> > Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>;
> > Slava Ovsiienko <viacheslavo@nvidia.com>
> > Cc: nd@arm.com; Matan Azrad <matan@nvidia.com>; stable@dpdk.org
> > Subject: [PATCH v2] net/mlx5: use just sufficient barrier for Arm
> > platforms
> >
> > cqe->op_own indicates if the CQE is owned by the NIC. The rest of
> > the fields in CQE should be read only after op_own is read. On Arm
> > platforms using "dmb ishld" is sufficient to enforce this.
> >
> > Fixes: 88c0733535d6 ("net/mlx5: extend Rx completion with error
> > handling")
> > Cc: matan@mellanox.com
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > ---
> >  drivers/common/mlx5/mlx5_common.h | 6 +++++-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/common/mlx5/mlx5_common.h
> > b/drivers/common/mlx5/mlx5_common.h
> > index 5028a05b49..ac2e85b15f 100644
> > --- a/drivers/common/mlx5/mlx5_common.h
> > +++ b/drivers/common/mlx5/mlx5_common.h
> > @@ -195,7 +195,11 @@ check_cqe(volatile struct mlx5_cqe *cqe, const
> > uint16_t cqes_n,
> >
> >  	if (unlikely((op_owner != (!!(idx))) || (op_code ==
> > MLX5_CQE_INVALID)))
> >  		return MLX5_CQE_STATUS_HW_OWN;
> > -	rte_io_rmb();
> > +	/* Prevent speculative reading of other fields in CQE until
> > +	 * CQE is valid.
> > +	 */
> > +	rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
> > +
> >  	if (unlikely(op_code == MLX5_CQE_RESP_ERR ||
> >  		     op_code == MLX5_CQE_REQ_ERR))
> >  		return MLX5_CQE_STATUS_ERR;
> > --
> > 2.17.1
Honnappa Nagarahalli Sept. 27, 2022, 9:06 p.m. UTC | #5
<snip>

> 
> Hi, Honnappa
> 
> We discussed the barrier here:
> http://patches.dpdk.org/project/dpdk/patch/20210606164948.35997-1-
> honnappa.nagarahalli@arm.com/
> 
> (BTW, it is good practice to keep the reference to previous patch versions
> below Commit Message of the next ones).
Apologies, I did not understand this. I would like to fix this if I can understand it better.

> 
> This barrier is not about compiler ordering, it is about external HW agent
> memory action completions.
> So, I'm not sure the rte_atomic_thread_fence() is safe for x86 - patch impacts
> x86 as well.
> 
> With best regards,
> Slava
> 
> > -----Original Message-----
> > From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Sent: Tuesday, August 30, 2022 23:01
> > To: dev@dpdk.org; honnappa.nagarahalli@arm.com;
> ruifeng.wang@arm.com;
> > Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>;
> > Slava Ovsiienko <viacheslavo@nvidia.com>
> > Cc: nd@arm.com; Matan Azrad <matan@nvidia.com>; stable@dpdk.org
> > Subject: [PATCH v2] net/mlx5: use just sufficient barrier for Arm
> > platforms
> >
> > cqe->op_own indicates if the CQE is owned by the NIC. The rest of
> > the fields in CQE should be read only after op_own is read. On Arm
> > platforms using "dmb ishld" is sufficient to enforce this.
> >
> > Fixes: 88c0733535d6 ("net/mlx5: extend Rx completion with error
> > handling")
> > Cc: matan@mellanox.com
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > ---
> >  drivers/common/mlx5/mlx5_common.h | 6 +++++-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/common/mlx5/mlx5_common.h
> > b/drivers/common/mlx5/mlx5_common.h
> > index 5028a05b49..ac2e85b15f 100644
> > --- a/drivers/common/mlx5/mlx5_common.h
> > +++ b/drivers/common/mlx5/mlx5_common.h
> > @@ -195,7 +195,11 @@ check_cqe(volatile struct mlx5_cqe *cqe, const
> > uint16_t cqes_n,
> >
> >  	if (unlikely((op_owner != (!!(idx))) || (op_code ==
> > MLX5_CQE_INVALID)))
> >  		return MLX5_CQE_STATUS_HW_OWN;
> > -	rte_io_rmb();
> > +	/* Prevent speculative reading of other fields in CQE until
> > +	 * CQE is valid.
> > +	 */
> > +	rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
> > +
> >  	if (unlikely(op_code == MLX5_CQE_RESP_ERR ||
> >  		     op_code == MLX5_CQE_REQ_ERR))
> >  		return MLX5_CQE_STATUS_ERR;
> > --
> > 2.17.1
Honnappa Nagarahalli Nov. 15, 2022, 1:45 a.m. UTC | #6
<snip>

> 
> >
> > Hi, Honnappa
> Hi Slava, thanks for the feedback.
> 
> >
> > We discussed the barrier here:
> > http://patches.dpdk.org/project/dpdk/patch/20210606164948.35997-1-
> > honnappa.nagarahalli@arm.com/
> Yes, I have changed the patch according to the discussion. i.e. barrier is
> needed, but different (inner sharable domain) barrier is required.
> 
> >
> > (BTW, it is good practice to keep the reference to previous patch
> > versions below Commit Message of the next ones).
> >
> > This barrier is not about compiler ordering, it is about external HW
> > agent memory action completions.
> > So, I'm not sure the rte_atomic_thread_fence() is safe for x86 - patch
> > impacts
> > x86 as well.
> The earlier barrier 'rte_io_rmb()', resolves to a compiler barrier on x86 [1].
> The rte_atomic_thread_fence(__ATOMIC_ACQUIRE) on x86 also acts as a
> compiler barrier. So, there is no change for x86.
> 
> 
> [1]
> https://github.com/DPDK/dpdk/blob/main/lib/eal/x86/include/rte_atomic.h#
> L80
Hi Slava, any more comments on this?

> 
> >
> > With best regards,
> > Slava
> >
> > > -----Original Message-----
> > > From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > Sent: Tuesday, August 30, 2022 23:01
> > > To: dev@dpdk.org; honnappa.nagarahalli@arm.com;
> > ruifeng.wang@arm.com;
> > > Matan Azrad <matan@nvidia.com>; Shahaf Shuler
> <shahafs@nvidia.com>;
> > > Slava Ovsiienko <viacheslavo@nvidia.com>
> > > Cc: nd@arm.com; Matan Azrad <matan@nvidia.com>; stable@dpdk.org
> > > Subject: [PATCH v2] net/mlx5: use just sufficient barrier for Arm
> > > platforms
> > >
> > > cqe->op_own indicates if the CQE is owned by the NIC. The rest of
> > > the fields in CQE should be read only after op_own is read. On Arm
> > > platforms using "dmb ishld" is sufficient to enforce this.
> > >
> > > Fixes: 88c0733535d6 ("net/mlx5: extend Rx completion with error
> > > handling")
> > > Cc: matan@mellanox.com
> > > Cc: stable@dpdk.org
> > >
> > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > ---
> > >  drivers/common/mlx5/mlx5_common.h | 6 +++++-
> > >  1 file changed, 5 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/common/mlx5/mlx5_common.h
> > > b/drivers/common/mlx5/mlx5_common.h
> > > index 5028a05b49..ac2e85b15f 100644
> > > --- a/drivers/common/mlx5/mlx5_common.h
> > > +++ b/drivers/common/mlx5/mlx5_common.h
> > > @@ -195,7 +195,11 @@ check_cqe(volatile struct mlx5_cqe *cqe, const
> > > uint16_t cqes_n,
> > >
> > >  	if (unlikely((op_owner != (!!(idx))) || (op_code ==
> > > MLX5_CQE_INVALID)))
> > >  		return MLX5_CQE_STATUS_HW_OWN;
> > > -	rte_io_rmb();
> > > +	/* Prevent speculative reading of other fields in CQE until
> > > +	 * CQE is valid.
> > > +	 */
> > > +	rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
> > > +
> > >  	if (unlikely(op_code == MLX5_CQE_RESP_ERR ||
> > >  		     op_code == MLX5_CQE_REQ_ERR))
> > >  		return MLX5_CQE_STATUS_ERR;
> > > --
> > > 2.17.1
Slava Ovsiienko March 7, 2023, 4:07 p.m. UTC | #7
Hi, Honnappa

I'm sorry for the delay - I had to play with the patch, compiling it with assembly listings
to check what code is actually generated on x86/ARM.

On x86 there is no difference at all (with and w/o patch), so no objection from my side.
On ARM we have:
w/o patch: dmb oshld
with patch: dmb ishld
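
For reference, those two listings match the aarch64 barrier macros (paraphrased from lib/eal/arm/include/rte_atomic_64.h; exact definitions may vary by release):

/* lib/eal/arm/include/rte_atomic_64.h, paraphrased */
#define rte_rmb()	asm volatile("dmb oshld" : : : "memory")  /* outer shareable */
#define rte_io_rmb()	rte_rmb()				  /* w/o patch */
#define rte_smp_rmb()	asm volatile("dmb ishld" : : : "memory")  /* inner shareable */
/* With the patch, rte_atomic_thread_fence(__ATOMIC_ACQUIRE) compiles to
 * "dmb ishld" on aarch64, i.e. the inner shareable load barrier. */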

What is the purpose of the barrier? To not allow reordering of the CQE read accesses.
On x86, a "compiler barrier" is quite enough (due to strong load/store ordering).
On ARM, loads/stores might be reordered, AFAIU.

The CQE resides in host memory and can be written directly by the NIC via PCIe
(in my understanding this can cause core cache invalidations).
I have a question - should we consider this as the outer shareable domain?
Is it safe to have a barrier for the inner domain only in our case?

In the meantime we have updated the cqe_check() routine, sorry for this. Could you, please,
update the patch and send a v3?

With best regards,
Slava

> -----Original Message-----
> From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Sent: вторник, 15 ноября 2022 г. 03:46
> To: Slava Ovsiienko <viacheslavo@nvidia.com>; dev@dpdk.org; Ruifeng Wang
> <Ruifeng.Wang@arm.com>; Matan Azrad <matan@nvidia.com>; Shahaf Shuler
> <shahafs@nvidia.com>
> Cc: nd <nd@arm.com>; Matan Azrad <matan@nvidia.com>; stable@dpdk.org;
> nd <nd@arm.com>
> Subject: RE: [PATCH v2] net/mlx5: use just sufficient barrier for Arm platforms
> 
> <snip>
> 
> >
> > >
> > > Hi, Honnappa
> > Hi Slava, thanks for the feedback.
> >
> > >
> > > We discussed the barrier here:
> > > http://patches.dpdk.org/project/dpdk/patch/20210606164948.35997-1-
> > > honnappa.nagarahalli@arm.com/
> > Yes, I have changed the patch according to the discussion. i.e.
> > barrier is needed, but different (inner sharable domain) barrier is required.
> >
> > >
> > > (BTW, it is good practice to keep the reference to previous patch
> > > versions below Commit Message of the next ones).
> > >
> > > This barrier is not about compiler ordering, it is about external HW
> > > agent memory action completions.
> > > So, I'm not sure the rte_atomic_thread_fence() is safe for x86 -
> > > patch impacts
> > > x86 as well.
> > The earlier barrier 'rte_io_rmb()', resolves to a compiler barrier on x86 [1].
> > The rte_atomic_thread_fence(__ATOMIC_ACQUIRE) on x86 also acts as a
> > compiler barrier. So, there is no change for x86.
> >
> >
> > [1]
> > https://github.com/DPDK/dpdk/blob/main/lib/eal/x86/include/rte_atomic.
> > h#
> > L80
> Hi Slava, any more comments on this?
> 
> >
> > >
> > > With best regards,
> > > Slava
> > >
> > > > -----Original Message-----
> > > > From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > Sent: Tuesday, August 30, 2022 23:01
> > > > To: dev@dpdk.org; honnappa.nagarahalli@arm.com;
> > > ruifeng.wang@arm.com;
> > > > Matan Azrad <matan@nvidia.com>; Shahaf Shuler
> > <shahafs@nvidia.com>;
> > > > Slava Ovsiienko <viacheslavo@nvidia.com>
> > > > Cc: nd@arm.com; Matan Azrad <matan@nvidia.com>; stable@dpdk.org
> > > > Subject: [PATCH v2] net/mlx5: use just sufficient barrier for Arm
> > > > platforms
> > > >
> > > > cqe->op_own indicates if the CQE is owned by the NIC. The rest of
> > > > the fields in CQE should be read only after op_own is read. On Arm
> > > > platforms using "dmb ishld" is sufficient to enforce this.
> > > >
> > > > Fixes: 88c0733535d6 ("net/mlx5: extend Rx completion with error
> > > > handling")
> > > > Cc: matan@mellanox.com
> > > > Cc: stable@dpdk.org
> > > >
> > > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > > ---
> > > >  drivers/common/mlx5/mlx5_common.h | 6 +++++-
> > > >  1 file changed, 5 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/drivers/common/mlx5/mlx5_common.h
> > > > b/drivers/common/mlx5/mlx5_common.h
> > > > index 5028a05b49..ac2e85b15f 100644
> > > > --- a/drivers/common/mlx5/mlx5_common.h
> > > > +++ b/drivers/common/mlx5/mlx5_common.h
> > > > @@ -195,7 +195,11 @@ check_cqe(volatile struct mlx5_cqe *cqe,
> > > > const uint16_t cqes_n,
> > > >
> > > >  	if (unlikely((op_owner != (!!(idx))) || (op_code ==
> > > > MLX5_CQE_INVALID)))
> > > >  		return MLX5_CQE_STATUS_HW_OWN;
> > > > -	rte_io_rmb();
> > > > +	/* Prevent speculative reading of other fields in CQE until
> > > > +	 * CQE is valid.
> > > > +	 */
> > > > +	rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
> > > > +
> > > >  	if (unlikely(op_code == MLX5_CQE_RESP_ERR ||
> > > >  		     op_code == MLX5_CQE_REQ_ERR))
> > > >  		return MLX5_CQE_STATUS_ERR;
> > > > --
> > > > 2.17.1
Honnappa Nagarahalli March 9, 2023, 2:42 a.m. UTC | #8
> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Tuesday, March 7, 2023 10:08 AM
> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; dev@dpdk.org;
> Ruifeng Wang <Ruifeng.Wang@arm.com>; Matan Azrad <matan@nvidia.com>;
> Shahaf Shuler <shahafs@nvidia.com>
> Cc: nd <nd@arm.com>; Matan Azrad <matan@nvidia.com>; stable@dpdk.org;
> nd <nd@arm.com>
> Subject: RE: [PATCH v2] net/mlx5: use just sufficient barrier for Arm platforms
> 
> Hi, Honnappa
> 
> I'm sorry for delay - I had to play with patch, to compile this one with assembly
> listings and check what code is actually generated on x86/ARM.
> 
> On x86 there is no difference at all (with and w/o patch), so no objection from
> my side.
> On ARM we have:
> w/o patch: dmb oshld
> with patch: dmb ishld
> 
> What is the purpose of the barrier - to not allow the CQE read access reordering.
> On x86, "compiler barrier" is quite enough (due to strong load/store ordering).
> On ARM load/store might be reordered, AFAIU.
> 
> CQE resides in the host memory and can be directly written by the NIC via PCIe.
> (In my understanding it can cause core cache(s) invalidations).
> I have a question - should we consider this as outer sharable domain?
> Is it safe to have a barrier for inner domain only in our case?
The inner shareable domain is enough, hence the reason for this patch.

> 
> We have updated the cqe_check() routine, sorry for this. Could you, please,
> update the patch and send v3?
Sent v3.

> 
> With best regards,
> Slava
> 
> > -----Original Message-----
> > From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> > Sent: вторник, 15 ноября 2022 г. 03:46
> > To: Slava Ovsiienko <viacheslavo@nvidia.com>; dev@dpdk.org; Ruifeng
> > Wang <Ruifeng.Wang@arm.com>; Matan Azrad <matan@nvidia.com>;
> Shahaf
> > Shuler <shahafs@nvidia.com>
> > Cc: nd <nd@arm.com>; Matan Azrad <matan@nvidia.com>;
> stable@dpdk.org;
> > nd <nd@arm.com>
> > Subject: RE: [PATCH v2] net/mlx5: use just sufficient barrier for Arm
> > platforms
> >
> > <snip>
> >
> > >
> > > >
> > > > Hi, Honnappa
> > > Hi Slava, thanks for the feedback.
> > >
> > > >
> > > > We discussed the barrier here:
> > > > http://patches.dpdk.org/project/dpdk/patch/20210606164948.35997-1-
> > > > honnappa.nagarahalli@arm.com/
> > > Yes, I have changed the patch according to the discussion. i.e.
> > > barrier is needed, but different (inner sharable domain) barrier is required.
> > >
> > > >
> > > > (BTW, it is good practice to keep the reference to previous patch
> > > > versions below Commit Message of the next ones).
> > > >
> > > > This barrier is not about compiler ordering, it is about external
> > > > HW agent memory action completions.
> > > > So, I'm not sure the rte_atomic_thread_fence() is safe for x86 -
> > > > patch impacts
> > > > x86 as well.
> > > The earlier barrier 'rte_io_rmb()', resolves to a compiler barrier on x86 [1].
> > > The rte_atomic_thread_fence(__ATOMIC_ACQUIRE) on x86 also acts as a
> > > compiler barrier. So, there is no change for x86.
> > >
> > >
> > > [1]
> > > https://github.com/DPDK/dpdk/blob/main/lib/eal/x86/include/rte_atomic.
> > > h#
> > > L80
> > Hi Slava, any more comments on this?
> >
> > >
> > > >
> > > > With best regards,
> > > > Slava
> > > >
> > > > > -----Original Message-----
> > > > > From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > > Sent: Tuesday, August 30, 2022 23:01
> > > > > To: dev@dpdk.org; honnappa.nagarahalli@arm.com;
> > > > ruifeng.wang@arm.com;
> > > > > Matan Azrad <matan@nvidia.com>; Shahaf Shuler
> > > <shahafs@nvidia.com>;
> > > > > Slava Ovsiienko <viacheslavo@nvidia.com>
> > > > > Cc: nd@arm.com; Matan Azrad <matan@nvidia.com>; stable@dpdk.org
> > > > > Subject: [PATCH v2] net/mlx5: use just sufficient barrier for
> > > > > Arm platforms
> > > > >
> > > > > cqe->op_own indicates if the CQE is owned by the NIC. The rest
> > > > > cqe->of
> > > > > the fields in CQE should be read only after op_own is read. On
> > > > > Arm platforms using "dmb ishld" is sufficient to enforce this.
> > > > >
> > > > > Fixes: 88c0733535d6 ("net/mlx5: extend Rx completion with error
> > > > > handling")
> > > > > Cc: matan@mellanox.com
> > > > > Cc: stable@dpdk.org
> > > > >
> > > > > Signed-off-by: Honnappa Nagarahalli
> > > > > <honnappa.nagarahalli@arm.com>
> > > > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > > > ---
> > > > >  drivers/common/mlx5/mlx5_common.h | 6 +++++-
> > > > >  1 file changed, 5 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/drivers/common/mlx5/mlx5_common.h
> > > > > b/drivers/common/mlx5/mlx5_common.h
> > > > > index 5028a05b49..ac2e85b15f 100644
> > > > > --- a/drivers/common/mlx5/mlx5_common.h
> > > > > +++ b/drivers/common/mlx5/mlx5_common.h
> > > > > @@ -195,7 +195,11 @@ check_cqe(volatile struct mlx5_cqe *cqe,
> > > > > const uint16_t cqes_n,
> > > > >
> > > > >  	if (unlikely((op_owner != (!!(idx))) || (op_code ==
> > > > > MLX5_CQE_INVALID)))
> > > > >  		return MLX5_CQE_STATUS_HW_OWN;
> > > > > -	rte_io_rmb();
> > > > > +	/* Prevent speculative reading of other fields in CQE until
> > > > > +	 * CQE is valid.
> > > > > +	 */
> > > > > +	rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
> > > > > +
> > > > >  	if (unlikely(op_code == MLX5_CQE_RESP_ERR ||
> > > > >  		     op_code == MLX5_CQE_REQ_ERR))
> > > > >  		return MLX5_CQE_STATUS_ERR;
> > > > > --
> > > > > 2.17.1

Patch

diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index 5028a05b49..a4c29f51f1 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -195,7 +195,7 @@  check_cqe(volatile struct mlx5_cqe *cqe, const uint16_t cqes_n,
 
 	if (unlikely((op_owner != (!!(idx))) || (op_code == MLX5_CQE_INVALID)))
 		return MLX5_CQE_STATUS_HW_OWN;
-	rte_io_rmb();
+
 	if (unlikely(op_code == MLX5_CQE_RESP_ERR ||
 		     op_code == MLX5_CQE_REQ_ERR))
 		return MLX5_CQE_STATUS_ERR;