
[net-next,v7,4/5] net/x25: fix restart request/confirm handling

Message ID 20201126063557.1283-5-ms@dev.tdt.de
State New
Series net/x25: netdev event handling

Commit Message

Martin Schiller Nov. 26, 2020, 6:35 a.m. UTC
We have to take the actual link state into account to handle
restart requests/confirms well.

Signed-off-by: Martin Schiller <ms@dev.tdt.de>
---
 net/x25/x25_link.c | 41 +++++++++++++++++++++++++++++++++--------
 1 file changed, 33 insertions(+), 8 deletions(-)
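
For reference, the four link states used throughout this patch are the
plain #defines from include/net/x25.h. The role descriptions in the
comments below are an interpretation of how this series uses them, not
comments taken from the header itself:

#define X25_LINK_STATE_0	0	/* idle, layer 2 link down */
#define X25_LINK_STATE_1	1	/* layer 2 connect initiated, not yet up */
#define X25_LINK_STATE_2	2	/* layer 2 up, restart exchange in progress */
#define X25_LINK_STATE_3	3	/* restart complete, layer 3 ready for calls */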

Comments

Xie He Dec. 9, 2020, 9:01 a.m. UTC | #1
On Wed, Nov 25, 2020 at 10:36 PM Martin Schiller <ms@dev.tdt.de> wrote:
>
> We have to take the actual link state into account to handle
> restart requests/confirms well.
>
> @@ -214,8 +241,6 @@ void x25_link_established(struct x25_neigh *nb)
>  {
>         switch (nb->state) {
>         case X25_LINK_STATE_0:
> -               nb->state = X25_LINK_STATE_2;
> -               break;
>         case X25_LINK_STATE_1:
>                 x25_transmit_restart_request(nb);
>                 nb->state = X25_LINK_STATE_2;

What is the reason for this change? Originally only the connecting
side will transmit a Restart Request; the connected side will not and
will only wait for the Restart Request to come. Now both sides will
transmit Restart Requests at the same time. I think we should avoid
collision situations like this.
Xie He Dec. 9, 2020, 9:17 a.m. UTC | #2
On Wed, Dec 9, 2020 at 1:01 AM Xie He <xie.he.0141@gmail.com> wrote:
>
> On Wed, Nov 25, 2020 at 10:36 PM Martin Schiller <ms@dev.tdt.de> wrote:
> >
> >         switch (nb->state) {
> >         case X25_LINK_STATE_0:
> > -               nb->state = X25_LINK_STATE_2;
> > -               break;
> >         case X25_LINK_STATE_1:
> >                 x25_transmit_restart_request(nb);
> >                 nb->state = X25_LINK_STATE_2;
>
> What is the reason for this change? Originally only the connecting
> side will transmit a Restart Request; the connected side will not and
> will only wait for the Restart Request to come. Now both sides will
> transmit Restart Requests at the same time. I think we should avoid
> collision situations like this.

Oh. I see. Because in other patches we are giving L2 the ability to
connect by itself, both sides can now appear here to be the
"connected" side. So we can't make the "connected" side wait as we did
before.
Martin Schiller Dec. 9, 2020, 9:40 a.m. UTC | #3
On 2020-12-09 10:17, Xie He wrote:
> On Wed, Dec 9, 2020 at 1:01 AM Xie He <xie.he.0141@gmail.com> wrote:
>>
>> On Wed, Nov 25, 2020 at 10:36 PM Martin Schiller <ms@dev.tdt.de> wrote:
>> >
>> >         switch (nb->state) {
>> >         case X25_LINK_STATE_0:
>> > -               nb->state = X25_LINK_STATE_2;
>> > -               break;
>> >         case X25_LINK_STATE_1:
>> >                 x25_transmit_restart_request(nb);
>> >                 nb->state = X25_LINK_STATE_2;
>>
>> What is the reason for this change? Originally only the connecting
>> side will transmit a Restart Request; the connected side will not and
>> will only wait for the Restart Request to come. Now both sides will
>> transmit Restart Requests at the same time. I think we should avoid
>> collision situations like this.
>
> Oh. I see. Because in other patches we are giving L2 the ability to
> connect by itself, both sides can now appear here to be the
> "connected" side. So we can't make the "connected" side wait as we did
> before.

Right.
By the way: A "Restart Collision" is in practice a very common event
when establishing Layer 3.
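
To make the collision case concrete, here is a rough sketch of how it
resolves with this patch applied. It is not new code; it just restates
the X25_RESTART_REQUEST / state-2 branch from the diff below, annotated
for the case where both neighbours have already sent a Restart Request
and are sitting in X25_LINK_STATE_2 with T20 running:

	case X25_LINK_STATE_2:
		/* T20 is pending because we sent our own Restart Request,
		 * so "confirm" evaluates to false: this is a collision.
		 */
		confirm = !x25_t20timer_pending(nb);
		x25_stop_t20timer(nb);
		nb->state = X25_LINK_STATE_3;	/* layer 3 comes up */
		if (confirm)			/* skipped in the collision case */
			x25_transmit_restart_confirmation(nb);
		break;

Neither side transmits a Restart Confirmation; the crossed Restart
Requests themselves complete the restart and both neighbours end up in
X25_LINK_STATE_3.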
Xie He Dec. 9, 2020, 9:47 a.m. UTC | #4
On Wed, Dec 9, 2020 at 1:41 AM Martin Schiller <ms@dev.tdt.de> wrote:
>
> Right.
> By the way: A "Restart Collision" is in practice a very common event
> when establishing Layer 3.

Oh, I see. Thanks!
Xie He Dec. 9, 2020, 10:11 p.m. UTC | #5
On Wed, Dec 9, 2020 at 1:47 AM Xie He <xie.he.0141@gmail.com> wrote:
>
> On Wed, Dec 9, 2020 at 1:41 AM Martin Schiller <ms@dev.tdt.de> wrote:
> >
> > Right.
> > By the way: A "Restart Collision" is in practice a very common event
> > when establishing Layer 3.
>
> Oh, I see. Thanks!

Hi Martin,

When you submit future patch series, can you try to ensure that the
code is in a completely working state after each patch in the series?
This makes reviewing the patches easier. After the patches get applied,
this also makes tracing bugs (for example, with "git bisect") through
the commit history easier.
Martin Schiller Dec. 10, 2020, 6:27 a.m. UTC | #6
On 2020-12-09 23:11, Xie He wrote:
> On Wed, Dec 9, 2020 at 1:47 AM Xie He <xie.he.0141@gmail.com> wrote:
>>
>> On Wed, Dec 9, 2020 at 1:41 AM Martin Schiller <ms@dev.tdt.de> wrote:
>> >
>> > Right.
>> > By the way: A "Restart Collision" is in practice a very common event
>> > when establishing Layer 3.
>>
>> Oh, I see. Thanks!
>
> Hi Martin,
>
> When you submit future patch series, can you try to ensure that the
> code is in a completely working state after each patch in the series?
> This makes reviewing the patches easier. After the patches get applied,
> this also makes tracing bugs (for example, with "git bisect") through
> the commit history easier.

Well, I thought that's what patch series are for:
Send patches that belong together and should be applied together.

Of course I will try to make each patch work on its own, but this is not
always possible with major changes, or it ends up in monster patches.
And nobody wants that.

Martin
Xie He Dec. 10, 2020, 9:31 a.m. UTC | #7
On Wed, Dec 9, 2020 at 10:27 PM Martin Schiller <ms@dev.tdt.de> wrote:
>
> > Hi Martin,
> >
> > When you submit future patch series, can you try to ensure that the
> > code is in a completely working state after each patch in the series?
> > This makes reviewing the patches easier. After the patches get applied,
> > this also makes tracing bugs (for example, with "git bisect") through
> > the commit history easier.
>
> Well, I thought that's what patch series are for:
> Send patches that belong together and should be applied together.
>
> Of course I will try to make each patch work on its own, but this is not
> always possible with major changes, or it ends up in monster patches.
> And nobody wants that.

Thanks! I admit that this series is a big change and is not easy to
split up properly. If I were you, I might end up sending a very big
patch first, and then following up with small patches for "restart
request/confirm handling" and "add/remove x25_neigh on
device-register/unregister instead of device-up/down". (The little
change in x25_link_established would then belong in the first big patch
instead of "restart request/confirm handling".)

But making each patch work on its own is indeed preferable. I see
https://www.kernel.org/doc/html/latest/process/submitting-patches.html
says:
When dividing your change into a series of patches, take special care
to ensure that the kernel builds and runs properly after each patch in
the series. Developers using git bisect to track down a problem can
end up splitting your patch series at any point; they will not thank
you if you introduce bugs in the middle.

Patch

diff --git a/net/x25/x25_link.c b/net/x25/x25_link.c
index 11e868aa625d..f92073f3cb11 100644
--- a/net/x25/x25_link.c
+++ b/net/x25/x25_link.c
@@ -74,16 +74,43 @@  void x25_link_control(struct sk_buff *skb, struct x25_neigh *nb,
 
 	switch (frametype) {
 	case X25_RESTART_REQUEST:
-		confirm = !x25_t20timer_pending(nb);
-		x25_stop_t20timer(nb);
-		nb->state = X25_LINK_STATE_3;
-		if (confirm)
+		switch (nb->state) {
+		case X25_LINK_STATE_2:
+			confirm = !x25_t20timer_pending(nb);
+			x25_stop_t20timer(nb);
+			nb->state = X25_LINK_STATE_3;
+			if (confirm)
+				x25_transmit_restart_confirmation(nb);
+			break;
+		case X25_LINK_STATE_3:
+			/* clear existing virtual calls */
+			x25_kill_by_neigh(nb);
+
 			x25_transmit_restart_confirmation(nb);
+			break;
+		}
 		break;
 
 	case X25_RESTART_CONFIRMATION:
-		x25_stop_t20timer(nb);
-		nb->state = X25_LINK_STATE_3;
+		switch (nb->state) {
+		case X25_LINK_STATE_2:
+			if (x25_t20timer_pending(nb)) {
+				x25_stop_t20timer(nb);
+				nb->state = X25_LINK_STATE_3;
+			} else {
+				x25_transmit_restart_request(nb);
+				x25_start_t20timer(nb);
+			}
+			break;
+		case X25_LINK_STATE_3:
+			/* clear existing virtual calls */
+			x25_kill_by_neigh(nb);
+
+			x25_transmit_restart_request(nb);
+			nb->state = X25_LINK_STATE_2;
+			x25_start_t20timer(nb);
+			break;
+		}
 		break;
 
 	case X25_DIAGNOSTIC:
@@ -214,8 +241,6 @@  void x25_link_established(struct x25_neigh *nb)
 {
 	switch (nb->state) {
 	case X25_LINK_STATE_0:
-		nb->state = X25_LINK_STATE_2;
-		break;
 	case X25_LINK_STATE_1:
 		x25_transmit_restart_request(nb);
 		nb->state = X25_LINK_STATE_2;