
[net-next,0/7] net: hns3: updates for -next

Message ID 1607571732-24219-1-git-send-email-tanhuazhong@huawei.com

Message

Huazhong Tan Dec. 10, 2020, 3:42 a.m. UTC
This patchset adds support for tc mqprio offload, hw tc
offload of tc flower, and adaptation for max rss size changes.

Guojia Liao (3):
  net: hns3: add support for max 512 rss size
  net: hns3: adjust rss indirection table configure command
  net: hns3: adjust rss tc mode configure command

Jian Shen (4):
  net: hns3: refine the struct hnae3_tc_info
  net: hns3: add support for tc mqprio offload
  net: hns3: add support for forwarding packet to queues of specified TC
    when flow director rule hit
  net: hns3: add support for hw tc offload of tc flower

 drivers/net/ethernet/hisilicon/hns3/hnae3.h        |  34 +-
 drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c |   3 +-
 drivers/net/ethernet/hisilicon/hns3/hns3_enet.c    | 112 ++++-
 .../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c |   2 +
 .../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h |  16 +-
 .../net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c | 126 +++++-
 .../ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c |   2 +-
 .../ethernet/hisilicon/hns3/hns3pf/hclge_main.c    | 466 ++++++++++++++++++---
 .../ethernet/hisilicon/hns3/hns3pf/hclge_main.h    |  28 +-
 .../net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c |   2 +-
 .../net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c  |  98 +++--
 .../ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c  |  21 +-
 12 files changed, 759 insertions(+), 151 deletions(-)

Comments

David Miller Dec. 10, 2020, 4:34 a.m. UTC | #1
From: Huazhong Tan <tanhuazhong@huawei.com>
Date: Thu, 10 Dec 2020 11:42:05 +0800

> This patchset adds support for tc mqprio offload, hw tc
> offload of tc flower, and adaptation for max rss size changes.

Series applied, thanks.
Saeed Mahameed Dec. 10, 2020, 4:50 a.m. UTC | #2
On Thu, 2020-12-10 at 11:42 +0800, Huazhong Tan wrote:
> From: Jian Shen <shenjian15@huawei.com>
> 
> Currently, the HNS3 driver only supports offload for tc number
> and prio_tc. This patch adds support for other qopts, including
> queues count and offset for each tc.
> 
> When enable tc mqprio offload, it's not allowed to change
> queue numbers by ethtool. For hardware limitation, the queue
> number of each tc should be power of 2.
> 
> For the queues is not assigned to each tc by average, so it's
> should return vport->alloc_tqps for hclge_get_max_channels().
> 

The commit message needs some improvement; it is not really clear what
the last two sentences are about.

> Signed-off-by: Jian Shen <shenjian15@huawei.com>
> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
> ---
> 
... 

>  
> +	if (kinfo->tc_info.mqprio_active) {
> +		dev_err(&netdev->dev,

why not use netdev_err() and friends ?
anyway, I see your driver is using dev_err(&netdev->dev, ...)
extensively; maybe submit a follow-up patch to fix all your prints ?

[...]
>  
> +static int hclge_mqprio_qopt_check(struct hclge_dev *hdev,
> +				   struct tc_mqprio_qopt_offload
> *mqprio_qopt)
> +{
> +	u16 queue_sum = 0;
> +	int ret;
> +	int i;
> +
> +	if (!mqprio_qopt->qopt.num_tc) {
> +		mqprio_qopt->qopt.num_tc = 1;
> +		return 0;
> +	}
> +
> +	ret = hclge_dcb_common_validate(hdev, mqprio_qopt->qopt.num_tc,
> +					mqprio_qopt->qopt.prio_tc_map);
> +	if (ret)
> +		return ret;
> +
> +	for (i = 0; i < mqprio_qopt->qopt.num_tc; i++) {
> +		if (!is_power_of_2(mqprio_qopt->qopt.count[i])) {
> +			dev_err(&hdev->pdev->dev,
> +				"qopt queue count must be power of
> 2\n");
> +			return -EINVAL;
> +		}
> +
> +		if (mqprio_qopt->qopt.count[i] > hdev->rss_size_max) {
> +			dev_err(&hdev->pdev->dev,
> +				"qopt queue count should be no more
> than %u\n",
> +				hdev->rss_size_max);
> +			return -EINVAL;
> +		}
> +
> +		if (mqprio_qopt->qopt.offset[i] != queue_sum) {
> +			dev_err(&hdev->pdev->dev,
> +				"qopt queue offset must start from 0,
> and being continuous\n");
> +			return -EINVAL;
> +		}
> +
> +		if (mqprio_qopt->min_rate[i] || mqprio_qopt-
> >max_rate[i]) {
> +			dev_err(&hdev->pdev->dev,
> +				"qopt tx_rate is not supported\n");
> +			return -EOPNOTSUPP;
> +		}
> +
> +		queue_sum = mqprio_qopt->qopt.offset[i];
> +		queue_sum += mqprio_qopt->qopt.count[i];

it would make more sense if you moved this queue summing outside of the
loop
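For illustration, here is a standalone sketch of the validation loop under discussion (simplified names, not the actual driver code): because the loop already rejects any offset[i] that differs from the running sum, a single `queue_sum += count[i]` per iteration carries the same invariant, and the reload from offset[i] is redundant.

```c
/* Sketch of the mqprio qopt layout check (illustrative, not the driver's
 * hclge_mqprio_qopt_check): counts must be powers of two and offsets must
 * form one contiguous range starting at 0. */
#include <stdbool.h>

#define TC_MAX 8

struct qopt_sketch {
	int num_tc;
	unsigned int count[TC_MAX];
	unsigned int offset[TC_MAX];
};

static bool is_pow2(unsigned int v)
{
	return v && !(v & (v - 1));
}

/* Returns 0 if the layout is valid, -1 otherwise. */
int check_qopt(const struct qopt_sketch *q, unsigned int total_tqps)
{
	unsigned int queue_sum = 0;
	int i;

	for (i = 0; i < q->num_tc; i++) {
		if (!is_pow2(q->count[i]))
			return -1;		/* hw wants power-of-2 counts */
		if (q->offset[i] != queue_sum)
			return -1;		/* offsets must be contiguous */
		queue_sum += q->count[i];	/* no reload from offset[i] needed */
	}
	return queue_sum <= total_tqps ? 0 : -1;
}
```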

> +	}
> +	if (hdev->vport[0].alloc_tqps < queue_sum) {

can't you just allocate new tqps according to the new mqprio input like
other drivers do ? how does the user allocate those tqps ?

> +		dev_err(&hdev->pdev->dev,
> +			"qopt queue count sum should be less than
> %u\n",
> +			hdev->vport[0].alloc_tqps);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static void hclge_sync_mqprio_qopt(struct hnae3_tc_info *tc_info,
> +				   struct tc_mqprio_qopt_offload
> *mqprio_qopt)
> +{
> +	int i;
> +
> +	memset(tc_info, 0, sizeof(*tc_info));
> +	tc_info->num_tc = mqprio_qopt->qopt.num_tc;
> +	memcpy(tc_info->prio_tc, mqprio_qopt->qopt.prio_tc_map,
> +	       sizeof_field(struct hnae3_tc_info, prio_tc));
> +	memcpy(tc_info->tqp_count, mqprio_qopt->qopt.count,
> +	       sizeof_field(struct hnae3_tc_info, tqp_count));
> +	memcpy(tc_info->tqp_offset, mqprio_qopt->qopt.offset,
> +	       sizeof_field(struct hnae3_tc_info, tqp_offset));
> +

isn't it much easier to just store a copy of tc_mqprio_qopt in your
tc_info and then just:
tc_info->qopt = mqprio->qopt;

[...] 
> -	hclge_tm_schd_info_update(hdev, tc);
> -	hclge_tm_prio_tc_info_update(hdev, prio_tc);
> -
> -	ret = hclge_tm_init_hw(hdev, false);
> -	if (ret)
> -		goto err_out;
> +	kinfo = &vport->nic.kinfo;
> +	memcpy(&old_tc_info, &kinfo->tc_info, sizeof(old_tc_info));

if those are of the same type, a plain assignment would be much
cleaner.
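As a generic illustration of the point (a sketch, not the driver's code): when source and destination are the same struct type, plain assignment and memcpy of sizeof(*dst) do the same job, and the assignment cannot get the size argument wrong.

```c
/* Illustrative struct loosely modeled on the fields discussed in this
 * thread; the names are stand-ins, not the real hnae3_tc_info layout. */
#include <string.h>

struct tc_info_sketch {
	unsigned char num_tc;
	unsigned short tqp_count[8];
	unsigned short tqp_offset[8];
};

void save_memcpy(struct tc_info_sketch *dst, const struct tc_info_sketch *src)
{
	/* works, but the size expression is an extra thing to get wrong */
	memcpy(dst, src, sizeof(*dst));
}

void save_assign(struct tc_info_sketch *dst, const struct tc_info_sketch *src)
{
	*dst = *src;	/* same effect, checked by the type system */
}
```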
> +	hclge_sync_mqprio_qopt(&kinfo->tc_info, mqprio_qopt);
> +	kinfo->tc_info.mqprio_active = tc > 0;
>  
> -	ret = hclge_client_setup_tc(hdev);
> +	ret = hclge_config_tc(hdev, &kinfo->tc_info);
>  	if (ret)
>  		goto err_out;
>  
> @@ -436,6 +534,12 @@ static int hclge_setup_tc(struct hnae3_handle
> *h, u8 tc, u8 *prio_tc)
>  	return hclge_notify_init_up(hdev);
>  
>  err_out:
> +	/* roll-back */
> +	memcpy(&kinfo->tc_info, &old_tc_info, sizeof(old_tc_info));
same.
Huazhong Tan Dec. 10, 2020, 12:24 p.m. UTC | #3
On 2020/12/10 13:40, Saeed Mahameed wrote:
> On Thu, 2020-12-10 at 11:42 +0800, Huazhong Tan wrote:
>> From: Jian Shen <shenjian15@huawei.com>
>>
>> For some new device, it supports forwarding packet to queues
>> of specified TC when flow director rule hit. So extend the
>> command handle to support it.
>>
> 
> ...
> 
>>   static int hclge_config_action(struct hclge_dev *hdev, u8 stage,
>>   			       struct hclge_fd_rule *rule)
>>   {
>> +	struct hclge_vport *vport = hdev->vport;
>> +	struct hnae3_knic_private_info *kinfo = &vport->nic.kinfo;
>>   	struct hclge_fd_ad_data ad_data;
>>   
>> +	memset(&ad_data, 0, sizeof(struct hclge_fd_ad_data));
>>   	ad_data.ad_id = rule->location;
>>   
>>   	if (rule->action == HCLGE_FD_ACTION_DROP_PACKET) {
>>   		ad_data.drop_packet = true;
>> -		ad_data.forward_to_direct_queue = false;
>> -		ad_data.queue_id = 0;
>> +	} else if (rule->action == HCLGE_FD_ACTION_SELECT_TC) {
>> +		ad_data.override_tc = true;
>> +		ad_data.queue_id =
>> +			kinfo->tc_info.tqp_offset[rule->tc];
>> +		ad_data.tc_size =
>> +			ilog2(kinfo->tc_info.tqp_count[rule->tc]);
> 
> In the previous patch you copied this info from mqprio, which is an
> egress qdisc feature, this patch is clearly about rx flow director, I
> think the patch is missing some context otherwise it doesn't make any
> sense.
> 

Since tx and rx are in the same tqp, what we do here is to put tx and
rx in the same tc when the rule is hit.

>>   	} else {
>> -		ad_data.drop_packet = false;
>>   		ad_data.forward_to_direct_queue = true;
>>   		ad_data.queue_id = rule->queue_id;
>>   	}
>> @@ -5937,7 +5950,7 @@ static int hclge_add_fd_entry(struct
>> hnae3_handle *handle,
>>   			return -EINVAL;
>>   		}
>>   
>> -		action = HCLGE_FD_ACTION_ACCEPT_PACKET;
>> +		action = HCLGE_FD_ACTION_SELECT_QUEUE;
>>   		q_index = ring;
>>   	}
>>   
>> diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
>> b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
>> index b3c1301..a481064 100644
>> --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
>> +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
>> @@ -572,8 +572,9 @@ enum HCLGE_FD_PACKET_TYPE {
>>   };
>>   
>>   enum HCLGE_FD_ACTION {
>> -	HCLGE_FD_ACTION_ACCEPT_PACKET,
>> +	HCLGE_FD_ACTION_SELECT_QUEUE,
>>   	HCLGE_FD_ACTION_DROP_PACKET,
>> +	HCLGE_FD_ACTION_SELECT_TC,
> 
> what is SELECT_TC ? you never actually write this value anywhere  in
> this patch.
> 

HCLGE_FD_ACTION_SELECT_TC means that the packet will be forwarded to
the queue of the specified TC when a rule is hit.

the assignment is in the next patch; maybe these two patches should be
merged to make it more readable.


Thanks.
Huazhong.

Saeed Mahameed Dec. 10, 2020, 8:24 p.m. UTC | #4
On Thu, 2020-12-10 at 20:27 +0800, tanhuazhong wrote:
> 
> On 2020/12/10 12:50, Saeed Mahameed wrote:
> > On Thu, 2020-12-10 at 11:42 +0800, Huazhong Tan wrote:
> > > From: Jian Shen <shenjian15@huawei.com>
> > > 
> > > Currently, the HNS3 driver only supports offload for tc number
> > > and prio_tc. This patch adds support for other qopts, including
> > > queues count and offset for each tc.
> > > 
> > > When enable tc mqprio offload, it's not allowed to change
> > > queue numbers by ethtool. For hardware limitation, the queue
> > > number of each tc should be power of 2.
> > > 
> > > For the queues is not assigned to each tc by average, so it's
> > > should return vport->alloc_tqps for hclge_get_max_channels().
> > > 
> > 
> > The commit message needs some improvements, it is not really clear
> > what
> > the last two sentences are about.
> > 
> 
> The hclge_get_max_channels() returns the max queue number of each TC
> that the user can set via ethtool -L. In the previous implementation,
> the queues were assigned to each TC evenly, so we returned
> vport->alloc_tqps / num_tc. Now we can assign a different queue
> number to each TC, so it shouldn't be divided by num_tc.
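A minimal sketch of the hclge_get_max_channels() change being described (illustrative names; the real function takes a handle and reads the vport state):

```c
/* Before: queues were split evenly across TCs, so the per-TC maximum the
 * user could request via "ethtool -L" was an even share of the budget. */
unsigned int max_channels_even_split(unsigned int alloc_tqps,
				     unsigned int num_tc)
{
	return alloc_tqps / num_tc;
}

/* After: each TC may get a different queue count, so the only hard
 * limit is the vport's total tqp budget itself. */
unsigned int max_channels_flexible(unsigned int alloc_tqps)
{
	return alloc_tqps;
}
```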

What do you mean by "queues assigned to each tc by average" ?

[...]

>   
> > > +	}
> > > +	if (hdev->vport[0].alloc_tqps < queue_sum) {
> > 
> > can't you just allocate new tqps according to the new mqprio input
> > like
> > other drivers do ? how the user allocates those tqps ?
> > 
> 
> maybe the name of 'alloc_tqps' is a little bit misleading; this field
> means the total number of available tqps in this vport.
> 

from your driver code it seems alloc_tqps is the number of rings
allocated via ethtool -L.

My point is, it seems like in this patch you demand that the queues be
preallocated, but what other drivers do on setup tc is just duplicate
whatever number of queues was configured prior to setup tc, num_tc
times.

> > > +		dev_err(&hdev->pdev->dev,
> > > +			"qopt queue count sum should be less than
> > > %u\n",
> > > +			hdev->vport[0].alloc_tqps);
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static void hclge_sync_mqprio_qopt(struct hnae3_tc_info
> > > *tc_info,
> > > +				   struct tc_mqprio_qopt_offload
> > > *mqprio_qopt)
> > > +{
> > > +	int i;
> > > +
> > > +	memset(tc_info, 0, sizeof(*tc_info));
> > > +	tc_info->num_tc = mqprio_qopt->qopt.num_tc;
> > > +	memcpy(tc_info->prio_tc, mqprio_qopt->qopt.prio_tc_map,
> > > +	       sizeof_field(struct hnae3_tc_info, prio_tc));
> > > +	memcpy(tc_info->tqp_count, mqprio_qopt->qopt.count,
> > > +	       sizeof_field(struct hnae3_tc_info, tqp_count));
> > > +	memcpy(tc_info->tqp_offset, mqprio_qopt->qopt.offset,
> > > +	       sizeof_field(struct hnae3_tc_info, tqp_offset));
> > > +
> > 
> > isn't it much easier to just store a copy of tc_mqprio_qopt in you
> > tc_info and then just:
> > tc_info->qopt = mqprio->qopt;
> > 
> > [...]
> 
> The tc_mqprio_qopt_offload still contains a lot of opts the hns3
> driver does not use yet; even if hns3 used all of them, I still think
> it is better to create our own struct: if struct tc_mqprio_qopt_offload
> changes in the future, we can limit the change to the
> tc_mqprio_qopt_offload conversion.
> 

ok.
Huazhong Tan Dec. 11, 2020, 6:55 a.m. UTC | #5
On 2020/12/11 4:24, Saeed Mahameed wrote:
> On Thu, 2020-12-10 at 20:27 +0800, tanhuazhong wrote:
>>
>> On 2020/12/10 12:50, Saeed Mahameed wrote:
>>> On Thu, 2020-12-10 at 11:42 +0800, Huazhong Tan wrote:
>>>> From: Jian Shen <shenjian15@huawei.com>
>>>>
>>>> Currently, the HNS3 driver only supports offload for tc number
>>>> and prio_tc. This patch adds support for other qopts, including
>>>> queues count and offset for each tc.
>>>>
>>>> When enable tc mqprio offload, it's not allowed to change
>>>> queue numbers by ethtool. For hardware limitation, the queue
>>>> number of each tc should be power of 2.
>>>>
>>>> For the queues is not assigned to each tc by average, so it's
>>>> should return vport->alloc_tqps for hclge_get_max_channels().
>>>>
>>>
>>> The commit message needs some improvements, it is not really clear
>>> what the last two sentences are about.
>>>
>>
>> The hclge_get_max_channels() returns the max queue number of each TC
>> that the user can set via ethtool -L. In the previous implementation,
>> the queues were assigned to each TC evenly, so we returned
>> vport->alloc_tqps / num_tc. Now we can assign a different queue
>> number to each TC, so it shouldn't be divided by num_tc.
>
> What do you mean by "queues assigned to each tc by average" ?
>

In the previous implementation the number of queues in each TC was the
same; now we allow the number of queues in each TC to be different.

> [...]
>
>>
>>>> +	}
>>>> +	if (hdev->vport[0].alloc_tqps < queue_sum) {
>>>
>>> can't you just allocate new tqps according to the new mqprio input
>>> like other drivers do ? how does the user allocate those tqps ?
>>>
>>
>> maybe the name of 'alloc_tqps' is a little bit misleading; this
>> field means the total number of available tqps in this vport.
>>
>
> from your driver code it seems alloc_tqps is the number of rings
> allocated via ethtool -L.
>
> My point is, it seems like in this patch you demand that the queues
> be preallocated, but what other drivers do on setup tc is just
> duplicate whatever number of queues was configured prior to setup tc,
> num_tc times.
>

The maximum number of queues is defined by 'alloc_tqps', not by
preallocated queues. The behavior of HNS3 on setup tc is the same as
that of other drivers.

'alloc_tqps' in HNS3 has the same meaning as 'num_queue_pairs' in i40e
below:

if (vsi->num_queue_pairs <
    (mqprio_qopt->qopt.offset[i] + mqprio_qopt->qopt.count[i])) {
	return -EINVAL;
}

>>>> +		dev_err(&hdev->pdev->dev,
>>>> +			"qopt queue count sum should be less than
>>>> %u\n",
>>>> +			hdev->vport[0].alloc_tqps);
>>>> +		return -EINVAL;
>>>> +	}
>>>> +
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +static void hclge_sync_mqprio_qopt(struct hnae3_tc_info
>>>> *tc_info,
>>>> +				   struct tc_mqprio_qopt_offload
>>>> *mqprio_qopt)
>>>> +{
>>>> +	int i;
>>>> +
>>>> +	memset(tc_info, 0, sizeof(*tc_info));
>>>> +	tc_info->num_tc = mqprio_qopt->qopt.num_tc;
>>>> +	memcpy(tc_info->prio_tc, mqprio_qopt->qopt.prio_tc_map,
>>>> +	       sizeof_field(struct hnae3_tc_info, prio_tc));
>>>> +	memcpy(tc_info->tqp_count, mqprio_qopt->qopt.count,
>>>> +	       sizeof_field(struct hnae3_tc_info, tqp_count));
>>>> +	memcpy(tc_info->tqp_offset, mqprio_qopt->qopt.offset,
>>>> +	       sizeof_field(struct hnae3_tc_info, tqp_offset));
>>>> +
>>>
>>> isn't it much easier to just store a copy of tc_mqprio_qopt in your
>>> tc_info and then just:
>>> tc_info->qopt = mqprio->qopt;
>>>
>>> [...]
>>
>> The tc_mqprio_qopt_offload still contains a lot of opts the hns3
>> driver does not use yet; even if hns3 used all of them, I still
>> think it is better to create our own struct: if struct
>> tc_mqprio_qopt_offload changes in the future, we can limit the
>> change to the tc_mqprio_qopt_offload conversion.
>>
>
> ok.

Thanks.
Huazhong.
Huazhong Tan Dec. 11, 2020, 7:06 a.m. UTC | #6
On 2020/12/11 4:46, Saeed Mahameed wrote:
> On Thu, 2020-12-10 at 20:24 +0800, tanhuazhong wrote:
>>
>> On 2020/12/10 13:40, Saeed Mahameed wrote:
>>> On Thu, 2020-12-10 at 11:42 +0800, Huazhong Tan wrote:
>>>> From: Jian Shen <shenjian15@huawei.com>
>>>>
>>>> For some new device, it supports forwarding packet to queues
>>>> of specified TC when flow director rule hit. So extend the
>>>> command handle to support it.
>>>>
>>>
>>> ...
>>>
>>>>    static int hclge_config_action(struct hclge_dev *hdev, u8 stage,
>>>>    			       struct hclge_fd_rule *rule)
>>>>    {
>>>> +	struct hclge_vport *vport = hdev->vport;
>>>> +	struct hnae3_knic_private_info *kinfo = &vport->nic.kinfo;
>>>>    	struct hclge_fd_ad_data ad_data;
>>>>
>>>> +	memset(&ad_data, 0, sizeof(struct hclge_fd_ad_data));
>>>>    	ad_data.ad_id = rule->location;
>>>>
>>>>    	if (rule->action == HCLGE_FD_ACTION_DROP_PACKET) {
>>>>    		ad_data.drop_packet = true;
>>>> -		ad_data.forward_to_direct_queue = false;
>>>> -		ad_data.queue_id = 0;
>>>> +	} else if (rule->action == HCLGE_FD_ACTION_SELECT_TC) {
>>>> +		ad_data.override_tc = true;
>>>> +		ad_data.queue_id =
>>>> +			kinfo->tc_info.tqp_offset[rule->tc];
>>>> +		ad_data.tc_size =
>>>> +			ilog2(kinfo->tc_info.tqp_count[rule->tc]);
>>>
>>> In the previous patch you copied this info from mqprio, which is an
>>> egress qdisc feature, this patch is clearly about rx flow director,
>>> I think the patch is missing some context otherwise it doesn't make
>>> any sense.
>>>
>>
>> Since tx and rx are in the same tqp, what we do here is to put tx
>> and rx in the same tc when the rule is hit.
>>
>
> this needs more clarification, even if tx and rx are the same hw
> object, AFAIK there is no correlation between mqprio and tc rx
> classifiers.
>

Could the comment below make the code more readable?
"Since tx and rx are in the same tqp, if a rule is hit, forward the
packet to the rx queue paired with the specified TC"

>>>>    	} else {
>>>> -		ad_data.drop_packet = false;
>>>>    		ad_data.forward_to_direct_queue = true;
>>>>    		ad_data.queue_id = rule->queue_id;
>>>>    	}
>>>> @@ -5937,7 +5950,7 @@ static int hclge_add_fd_entry(struct
>>>> hnae3_handle *handle,
>>>>    			return -EINVAL;
>>>>    		}
>>>>
>>>> -		action = HCLGE_FD_ACTION_ACCEPT_PACKET;
>>>> +		action = HCLGE_FD_ACTION_SELECT_QUEUE;
>>>>    		q_index = ring;
>>>>    	}
>>>>
>>>> diff --git
>>>> a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
>>>> b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
>>>> index b3c1301..a481064 100644
>>>> --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
>>>> +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
>>>> @@ -572,8 +572,9 @@ enum HCLGE_FD_PACKET_TYPE {
>>>>    };
>>>>
>>>>    enum HCLGE_FD_ACTION {
>>>> -	HCLGE_FD_ACTION_ACCEPT_PACKET,
>>>> +	HCLGE_FD_ACTION_SELECT_QUEUE,
>>>>    	HCLGE_FD_ACTION_DROP_PACKET,
>>>> +	HCLGE_FD_ACTION_SELECT_TC,
>>>
>>> what is SELECT_TC ? you never actually write this value anywhere in
>>> this patch.
>>>
>>
>> HCLGE_FD_ACTION_SELECT_TC means that the packet will be forwarded to
>> the queue of the specified TC when a rule is hit.
>>
> what is "specified TC" in this context ?

The TC specified by 'tc flower'.

>
> Are we talking about ethtool nfc steering here ? because clearly this
> was the purpose of HCLGE_FD_ACTION_ACCEPT_PACKET before it got
> removed.
>

In fact, HCLGE_FD_ACTION_ACCEPT_PACKET is split into
HCLGE_FD_ACTION_SELECT_QUEUE and HCLGE_FD_ACTION_SELECT_TC.
HCLGE_FD_ACTION_SELECT_QUEUE is configured by 'ethtool -U' (nfc
steering) or aRFS.
HCLGE_FD_ACTION_SELECT_TC is configured by 'tc flower' so far.

>
>> the assignment is in the next patch, maybe these two patches should
>> be merged to make it more readable.
>>
>>
>> Thanks.
>> Huazhong.
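The flow director action split discussed in the last two messages can be sketched as follows. This is an illustrative, self-contained rendering of the quoted diff: the enum values and field names follow the patch, but the structs and helper here are simplified stand-ins, not the driver's real types.

```c
/* Simplified sketch of the hclge_config_action() dispatch shown in the
 * quoted diff. SELECT_QUEUE comes from ethtool -U / aRFS, SELECT_TC from
 * tc flower; tc_size encodes log2 of the TC's queue count. */
enum fd_action_sketch {
	FD_ACTION_SELECT_QUEUE,	/* forward to one explicit queue */
	FD_ACTION_DROP_PACKET,
	FD_ACTION_SELECT_TC,	/* forward into a TC's queue range */
};

struct fd_ad_data_sketch {
	int drop_packet;
	int forward_to_direct_queue;
	int override_tc;
	unsigned int queue_id;
	unsigned int tc_size;	/* log2 of the TC's queue count */
};

static unsigned int ilog2u(unsigned int v)
{
	unsigned int l = 0;

	while (v >>= 1)
		l++;
	return l;
}

/* tqp_offset/tqp_count describe each TC's queue range, as in the
 * hnae3_tc_info fields quoted above. */
void config_action(struct fd_ad_data_sketch *ad, enum fd_action_sketch action,
		   unsigned int queue_id, unsigned int tc,
		   const unsigned int *tqp_offset, const unsigned int *tqp_count)
{
	if (action == FD_ACTION_DROP_PACKET) {
		ad->drop_packet = 1;
	} else if (action == FD_ACTION_SELECT_TC) {
		ad->override_tc = 1;
		ad->queue_id = tqp_offset[tc];	/* first queue of the TC */
		ad->tc_size = ilog2u(tqp_count[tc]);
	} else {
		ad->forward_to_direct_queue = 1;
		ad->queue_id = queue_id;
	}
}
```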