Feature #3823


Feature #1624: Design and Implement Congestion Control

Congestion Control: design Local Link Loss Detection

Added by Anonymous over 7 years ago. Updated over 6 years ago.

Status: Closed
Priority: Normal
Assignee:
Category: Protocol
Target version:
Start date:
Due date:
% Done: 100%
Estimated time: 6.00 h

Description

Links like UDP tunnels or wireless links should be able to locally detect a packet loss and signal that information to the forwarding strategy. The following design is based on the "Best Effort Link Layer" tech report (see the attached files).

The implementation consists of 2 parts:

  1. Detecting the loss within NDNLP. This should be done with positive acknowledgements, so the sender knows for certain whether the packet was received or lost (no undetectable losses, unlike with NACKs). The detection can be based on sequence numbers (e.g. detecting a loss after at least 3 out-of-order packets) together with some signaling when the link goes idle, or on gaps in sequence numbers together with local link timeouts. Optionally, the link loss detection can perform local retransmissions, but that is not necessary. In fact, I would argue for letting the strategy layer decide about retransmissions.

  2. Signaling the loss to the forwarding strategy. This could be done with a simple callback onLinkLoss(), similar to onIncomingData() or onNACK() (a sketch follows). The strategy can then decide how to handle the link loss (e.g. retransmitting, or signaling it further downstream by congestion marking or NACKs). The link layer should also tell the strategy whether the loss is likely caused by congestion (as on a UDP tunnel) or not (as on a WiFi link), since this influences the signaling decision of the strategy layer (e.g. whether it should mark downstream packets). If a link layer timeout is used, the strategy should be able to query that timeout.
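As an illustration (a sketch only; apart from onLinkLoss(), the names below are hypothetical, not NFD's actual strategy API), the callback could look roughly like this:

    class Face;
    class Interest;

    // Hypothetical sketch of the strategy-facing loss callback.
    class Strategy
    {
    public:
      // likelyCongestion: the link layer's best guess whether the loss was
      // caused by congestion (e.g. UDP tunnel) or by bit errors (e.g. WiFi)
      virtual void
      onLinkLoss(const Face& outFace, const Interest& interest,
                 bool likelyCongestion) = 0;
    };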

I'd like to thank Davide and Junxiao for helpful comments on the current design.

Link to Google Docs: https://docs.google.com/a/email.arizona.edu/presentation/d/1cP1ya0oEUw1wjpF3SWqaUgLNLx5iH50Y7v2slVTvLOU/edit?usp=sharing


Files

schneider_retreat_pres.pdf (406 KB), Anonymous, 11/08/2016 01:13 PM
BELRP-20161122.pptx (48.3 KB), Eric Newberry, 11/22/2016 03:39 PM
BELRP-20161124.pptx (49.8 KB), Eric Newberry, 11/24/2016 04:20 PM
BELRP-Design_Jan18.pdf (64 KB), Anonymous, 01/19/2017 10:27 PM

Related issues (4): 0 open, 4 closed

Related to NFD - Feature #3931: Implement NDNLP Link Reliability Protocol (Closed, Eric Newberry)

Related to NFD - Feature #4003: FaceMgmt: enable link reliability (Closed, Eric Newberry)

Related to NFD - Feature #4004: nfdc face create: reliability option (Closed, Eric Newberry)

Related to NFD - Task #4391: Congestion Control: Test Local Link Loss Detection (Closed, Eric Newberry)
Actions #1

Updated by Anonymous over 7 years ago

  • Related to Feature #3797: Congestion Control: generic congestion marks added
Actions #2

Updated by Anonymous over 7 years ago

  • Related to Feature #1624: Design and Implement Congestion Control added
Actions #3

Updated by Eric Newberry over 7 years ago

  • Assignee set to Eric Newberry
Actions #4

Updated by Junxiao Shi over 7 years ago

  • Subject changed from Congestion Control: Design & Implement Local Link Loss Detection to Congestion Control: design Local Link Loss Detection
  • Category set to Protocol
  • Target version set to v0.6
  • Estimated time set to 6.00 h

I've limited this issue to be design only. The design produced from this issue includes the packet-level protocol specification and APIs (see #3784 for an example).

Its implementation will cross several layers (LinkService - Forwarding - Strategy) and thus needs separate issues, which will be created after seeing the design.

Actions #5

Updated by Anonymous over 7 years ago

We discussed the following points in today's NFD call:

  • The link loss only needs to be detected for Interest packets, not Data packets.
  • Still, all three packet types are acknowledged (overhead is low due to ACK piggybacking)
  • Timers need to be maintained only per network layer packet, not per link layer packet.
  • The link layer should specify whether a loss is likely to be due to congestion (e.g. UDP tunnel) or due to bit errors (e.g. WiFi)
Actions #6

Updated by Anonymous over 7 years ago

Just to make everything explicit here. Based on my analysis and evaluation, I propose the following changes to the "Best Effort Link Layer" Tech report (link above):

  1. Use only one frame number, instead of having both a frame number and a packet number, in the link layer header. Rationale: the receiver of the link layer frame does not need to know which network layer packet it corresponds to (it can already determine that from the name). The link layer sender still maintains a mapping between network layer packets and link layer frames.

  2. RTO setting: use the traditional TCP RTO setting (meanRTT + 4 * varRTT; see the sketch after this list) instead of the higher timeout proposed in the tech report. Rationale: I could not follow the reasoning for the higher timeouts, nor could I replicate the necessity in my experiments. The traditional timeouts worked just fine.

  3. Set the default retx to 0 instead of 3. Rationale: Retransmitting isn't necessary for the current design and each link layer operator should set the retx based on the specific link requirements.

  4. Use both seq. numbers and timeouts to detect link loss (instead of only using seq. numbers). Rationale: Timeouts are necessary as a fallback, in case the link goes idle (then you wouldn't see any gap in seq. numbers after a packet loss).
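For point 2, here is a minimal sketch of the traditional TCP RTO computation (the RFC 6298 formula matching "meanRTT + 4 * varRTT" above; the class and the constants alpha = 1/8, beta = 1/4 are the textbook values, not code from this project):

    #include <chrono>

    class RttEstimator
    {
    public:
      using Duration = std::chrono::duration<double, std::milli>;

      void
      addMeasurement(Duration rtt)
      {
        if (m_first) {
          m_srtt = rtt;        // first sample initializes the estimator
          m_rttvar = rtt / 2;
          m_first = false;
        }
        else {
          Duration err = rtt > m_srtt ? rtt - m_srtt : m_srtt - rtt;
          m_rttvar = (1 - BETA) * m_rttvar + BETA * err;   // varRTT
          m_srtt = (1 - ALPHA) * m_srtt + ALPHA * rtt;     // meanRTT
        }
      }

      Duration
      getRto() const
      {
        return m_srtt + 4 * m_rttvar;   // meanRTT + 4 * varRTT
      }

    private:
      static constexpr double ALPHA = 0.125; // 1/8
      static constexpr double BETA = 0.25;   // 1/4
      bool m_first = true;
      Duration m_srtt{0};
      Duration m_rttvar{0};
    };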

Two questions need some further design:

  1. Selective ACK vs. Cumulative ACK. The tech report uses selective link ACKs without further discussion. Maybe cumulative ACKs work better?

  2. Number of ACK duplicates. The tech report sends each ACK 3 times in the following piggybacked link replies. The rationale is that ACKs are cheaper than the unnecessary retransmissions they might avoid. However, the tech report doesn't consider the possibility of bursty packet loss, which makes it more likely that 3 ACKs in a row are lost. One possible solution is to spread out the ACKs. We should discuss the exact design on this issue.

Actions #7

Updated by Anonymous over 7 years ago

In today's NFD call we agreed on implementing points (1) to (3) as discussed above. We'll discuss the rest at the NDN Retreat.

Moreover, we decided that the reporting of the link loss to the strategy layer should be decoupled from the retransmissions. That is, if the link layer retransmits a packet, it should tell the strategy layer how often the packet was lost, even if the final retransmission is successful.

So the report from the link layer to the strategy should be one of the following (a sketch follows the list):

  1. Packet lost x times (give up after x total transmissions, i.e., x-1 retransmissions)
  2. Packet received, but was previously lost and retransmitted x times.
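A minimal sketch (names hypothetical, not an actual NFD type) of what such a two-case report could carry:

    // The two outcomes described above, plus the count "x".
    struct LinkLossReport
    {
      enum class Outcome {
        GaveUp,            // case 1: lost x times, sender gave up
        ReceivedAfterRetx  // case 2: eventually received after retransmissions
      };

      Outcome outcome;
      int timesLost;       // the "x" in both cases
    };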
Actions #8

Updated by Davide Pesavento over 7 years ago

Klaus Schneider wrote:

  • The link layer should specify whether a loss is likely to be due to congestion (e.g. UDP tunnel) or due to bit errors (e.g. WiFi)

If you're talking about NFD's "link layer", it (currently) has no clue whether the underlying link is WiFi or an Ethernet cable or any other technology.

Moreover, you gave two examples that sit on different layers of the protocol stack. In other words, the two are not mutually exclusive; you can have a UDP tunnel over WiFi...

Actions #9

Updated by Anonymous over 7 years ago

Davide Pesavento wrote:

Klaus Schneider wrote:

  • The link layer should specify whether a loss is likely to be due to congestion (e.g. UDP tunnel) or due to bit errors (e.g. WiFi)

If you're talking about NFD's "link layer", it (currently) has no clue whether the underlying link is WiFi or an Ethernet cable or any other technology.

True, but the person operating the router knows that. For example, in the NDN Testbed we know that all "links" are made of one or more hops of wired IP routers.

Moreover, you gave two examples that sit on different layers of the protocol stack. In other words, the two are not mutually exclusive; you can have a UDP tunnel over WiFi...

Yes. I was hoping that we can get away with making a best bet on the question "Is the loss more likely to be caused by congestion or by bit errors?"

When saying "UDP Tunnel" I implicitly meant "UDP Tunnel over multiple wired links". A UDP tunnel over a single wireless link should be treated just like a wireless link.

A UDP tunnel over multiple concatenated wireless links is more difficult. Here, I would expect some loss to come from congestion and some from bit errors. We need to think about a default value for this case (the conservative approach would be to treat all loss as congestion, and let the link layer retransmit, or use FEC, to recover bit errors). However, a better solution would be to avoid this case and have an NDN router at the end of each wireless link.

Actions #10

Updated by Anonymous over 7 years ago

As discussed on the NDN Retreat (and today's call), we'll do the design with positive acknowledgements for each sent packet (selective) that are piggybacked to reduce overhead.

If any more design questions come up, I will post them here.

Actions #11

Updated by Anonymous over 7 years ago

Here's my presentation from the NDN Retreat (containing the open design questions).

Actions #12

Updated by Eric Newberry over 7 years ago

  • Status changed from New to Code review
  • % Done changed from 0 to 100
Actions #13

Updated by Eric Newberry over 7 years ago

  • % Done changed from 100 to 0

Can someone undo my changes and set this back to "New"? I marked this instead of #3797 as in Code Review by accident.

Actions #14

Updated by Alex Afanasyev over 7 years ago

  • Status changed from Code review to New
Actions #15

Updated by Eric Newberry over 7 years ago

Here is the first revision of the loss detection/recovery design for review.

Actions #16

Updated by Davide Pesavento over 7 years ago

Eric Newberry wrote:

Here is the first revision of the loss detection/recovery design for review.

Looks pretty good to me.

How is the best-effort reliability feature negotiated at the link layer?

Slide 9 (receive process) doesn't say when and how ACKs for incoming fragments are enqueued for subsequent transmission.

"RTO timeout" is redundant, because RTO already stands for "retransmit timeout". Use simply "RTO" (e.g. "...after the RTO expires..."), or alternatively "retx timeout".

Actions #17

Updated by Anonymous over 7 years ago

I'll have a look in a couple days.

Since Eric is doing both implementations, I suggest finishing task #3797 before moving on to this.

Actions #18

Updated by Davide Pesavento over 7 years ago

Also, there's no discussion on the "ACK send timeout" at all. How long should it be? Is it a fixed value or is it dynamically adjusted over time? It seems like it should be tied to the RTO value of the other host somehow...

Should cumulative ACKs be part of the design or were they intentionally left out?

Actions #19

Updated by Anonymous over 7 years ago

Davide Pesavento wrote:

Also, there's no discussion on the "ACK send timeout" at all. How long should it be? Is it a fixed value or is it dynamically adjusted over time? It seems like it should be tied to the RTO value of the other host somehow...

Good point! The ACKs can't be delayed too long without causing a 'spurious' timeout at the sender. This needs some more design.

Should cumulative ACKs be part of the design or were they intentionally left out?

I thought everything should be selective ACK, since they're more reliable and the piggybacking makes them cheap already. Any counter arguments?

I was expecting more of these design questions to pop up. This is why I'm still in favor of finishing task #3797 first.

Actions #20

Updated by Davide Pesavento over 7 years ago

Klaus Schneider wrote:

I thought everything should be selective ACK, since they're more reliable and the piggybacking makes them cheap already. Any counter arguments?

Not really. I just asked because they've been mentioned previously (note-6) but I didn't see them in the design proposal.

Actions #21

Updated by Eric Newberry over 7 years ago

Another revision. This one changes references of "RTO timeout" to "RTO timer" and adds a section on how BELRP is negotiated.

I believe that the loss detection portion of this issue can be reviewed currently. However, I agree that the loss signalling portion should wait for #3797.

Actions #22

Updated by Davide Pesavento over 7 years ago

A few random thoughts:

  • How did you decide that the AckQueue timer is 0.25*RTO? What's the rationale?
  • The initial negotiation timeout (200ms) seems very low for some links
  • There needs to be a mechanism to re-negotiate a link's features without restarting the face
  • What if the "negotiation frame" is lost?
  • "All other sends are halted until the host has determined whether to enable BELRP". Why?
Actions #23

Updated by Anonymous over 7 years ago

Some more comments:

  • You only describe the loss detection via timeouts. In addition, we also need a loss detection via gaps in seq. numbers (see the tech report)

  • There should be a notification to the strategy on successfully retransmitted frames (received ACK, but was retx at least once).

  • I assume your "sender subsystem" and "receiver subsystem" are both running on the same node. Maybe it's better to split the description between the sender and receiver nodes. The only task of the receiver node should be to send (and piggyback) the acknowledgements.

Actions #24

Updated by Anonymous over 7 years ago

How did you decide that the AckQueue timer is 0.25*RTO? What's the rationale?

Yes, please don't make any such design decisions before discussing them here first. The same goes for the 200ms negotiation timeout.

It's better to just say "we need to decide the length of the ACK send timeout" and wait for the input.

Actions #25

Updated by Eric Newberry over 7 years ago

Davide Pesavento wrote:

A few random thoughts:

  • How did you decide that the AckQueue timer is 0.25*RTO? What's the rationale?

It's a placeholder value until we determine a reasonable value to replace it.

  • The initial negotiation timeout (200ms) seems very low for some links

Same as for the AckQueue timer.

Actions #26

Updated by Eric Newberry over 7 years ago

Klaus Schneider wrote:

Some more comments:

  • You only describe the loss detection via timeouts. In addition, we also need a loss detection via gaps in seq. numbers (see the tech report)

I thought it was decided to only use timeouts for now?

  • There should be a notification to the strategy on successfully retransmitted frames (received ACK, but was retx at least once).

This is the difference between "Lost" and "Failure" notifications in the design. "Lost" notifications indicate that the packet was transmitted successfully, but retransmission occurred. "Failure" notifications indicate that the packet was not able to be successfully sent, even with retransmissions.

  • I assume your "sender subsystem" and "receiver subsystem" are both running on the same node. Maybe it's better to split the description between the sender and receiver nodes. The only task of the receiver node should be to send (and piggyback) the acknowledgements.

I would disagree. In order for the receiver to acknowledge packets, it must have a system to send the acknowledgements and piggyback them on packets.

Actions #27

Updated by Anonymous over 7 years ago

Eric Newberry wrote:

Klaus Schneider wrote:

Some more comments:

  • You only describe the loss detection via timeouts. In addition, we also need a loss detection via gaps in seq. numbers (see the tech report)

I thought it was decided to only use timeouts for now?

No. The tech report used only sequence numbers and note-6 says to use both seq. numbers and timeouts.

  • There should be a notification to the strategy on successfully retransmitted frames (received ACK, but was retx at least once).

This is the difference between "Lost" and "Failure" notifications in the design. "Lost" notifications indicate that the packet was transmitted successfully

I would say that's a bit confusing. Maybe call it "lost" (for failure to transmit) and "retx count" (for success, but retransmissions).

, but retransmission occurred. "Failure" notifications indicate that the packet was not able to be successfully sent, even with retransmissions.

  • I assume your "sender subsystem" and "receiver subsystem" are both running on the same node. Maybe it's better to split the description between the sender and receiver nodes. The only task of the receiver node should be to send (and piggyback) the acknowledgements.

I would disagree. In order for the receiver to acknowledge packets, it must have a system to send the acknowledgements and piggyback them on packets.

Yes that makes sense. I'm just saying it would make it clearer to split the logic between sender and receiver node. Imagine the whole link loss detection works only in one direction (in the other direction we send packets, but don't acknowledge or retransmit them). In this case, which functionality is on the sender and which on the receiver?

Related to that, I wouldn't let the "AckQueue timer" depend on the RTO estimate, because this makes it necessary for the receiver to maintain this RTO estimate. If the ACK delay timer uses a fixed value, only the sender needs to know the RTO.

TCP often uses 200ms for this "delayed ack" timer (see last link below), but I would suggest a much smaller value, because 1) the standard is probably outdated, 2) our timers are hop-by-hop not end-to-end. How about 5ms?

In general, I would suggest basing the implementation on the relevant work from the TCP literature, namely piggybacking, TCP SACK, and delayed acknowledgements.

Actions #28

Updated by Eric Newberry over 7 years ago

I've converted the latest design to Google Docs. It can be viewed (but not modified) here:

https://docs.google.com/a/email.arizona.edu/presentation/d/1cP1ya0oEUw1wjpF3SWqaUgLNLx5iH50Y7v2slVTvLOU/edit?usp=sharing

Actions #29

Updated by Eric Newberry over 7 years ago

The Google Docs design has been updated to address the above comments.

Actions #30

Updated by Anonymous about 7 years ago

Thanks a lot. Here are some comments.

I think the pdf should start by saying what the purpose of the design is, i.e., creating a local link loss detection for UDP tunnels and wireless links. You can copy/shorten some of my description above.

Also, each of the later sections should start with a short description of what the design is intended to do, before going into the details. For example, you could write something like "each frame should be answered by a positive acknowledgement; missing ACKs are interpreted as a packet loss". Then continue with the packet format ("thus we need to add the following headers to NDNLP").

Another example: "To reduce the overhead of ACKs, they are piggybacked on Interest/Data frames" (you don't mention piggybacking at all). Then describe the details of how/how many ACKs are piggybacked on each packet, what happens when ACKs are waiting and the link is idle (the "delayed ack" timer) and so on.

I'm still a bit confused by the differentiation of Sender and Receiver. You should make clear what is being sent (Interest, Data, Nack, piggybacked ACKs, standalone ACKs). Both sender and receiver processes may differ depending on the packet type.

Maybe you should be more explicit on some of the parameters. You can list the default values for maxRtt, maxRetx, "delayed ack" timer, and so on.

Also, we can specify the details of the forwarding strategy notification, that is, the function prototype of the callback, like retxNotify(int retx, int maxRetx);

The whole section about the "Negotiation" looks like it's more generic than this link loss notification. Maybe we can have a generic feature negotiation module in NDNLP?

Actions #31

Updated by Anonymous about 7 years ago

I looked over the discussion and slides again and I feel that the design is becoming too complex. We currently have the following parts:

  • Acknowledging each link layer frame
  • Piggybacking each ACK on Interests/Data (with an ack delay timeout for sending standalone ACKs)
  • Identifying packet loss via both sequence numbers and timeouts
  • Enabling ACK redundancy (sending each ACK multiple times)?
  • Storing packets in a link layer "transmit cache" to eventually retransmit them
  • Notifying the strategy via a callback
  • Informing the strategy about certain link information (WiFi vs. UDP tunnel; the employed link timeout)
  • Fragmenting and re-assembling network-layer packets into link-layer frames.
  • A protocol for negotiating the link reliability features

Thus, I suggest the following simplifications:

  1. Outsource the Negotiation part to another design issue/commit. How are other NDNLP features negotiated currently/in the future? The negotiation of link loss detection should be done in the same way. Maybe, as a quick-and-dirty solution simply ignore the link layer header if it isn't supported by both routers that share the link?

  2. Drop the link layer re-transmissions. The strategy sends a packet and receives a callback notification if it was lost. This simplifies both the logic of the link layer design (fewer steps; no more transmit cache needed) and the notification to the strategy layer (fewer parameters in the callback).

If you want in-network re-transmissions, the strategy can do them, and can do them better than the link layer. It has more information (measurement info in the FIB; end-to-end delay/loss/etc.) and more possible choices (re-transmit over a different link) than NDNLP.

Even with these two simplifications there is still a lot of designing to do. But they may help to move to implementation faster. If necessary, we can add the link-layer retx in a later commit.

What do you think?

Actions #32

Updated by Anonymous about 7 years ago

Some smaller points:

  • The distinction we discussed earlier between "single hop WiFi link", "multi-hop WiFi link", "UDP Tunnel over Ethernet", "UDP Tunnel over WiFi" can be simplified to the link specifying "should a loss be interpreted as non-congestive" (= caused by bit errors). The default should be "false", and it can be set to "true" for single-hop WiFi links that have a different way of signaling congestion (by looking at interface queue size).

  • We need to specify the API for the strategy to retrieve information about the link, like the parameter above, the link-layer RTO, or others (a sketch follows below). This should be different from the callback function which informs the strategy about losses.

  • I think we haven't talked about whether/how to do ACK duplication (see above)

I'll put these up for discussion in an NFD call.
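A possible shape for that query API, sketched under the assumption that the strategy can obtain a per-face object (all names hypothetical):

    #include <chrono>

    // Hypothetical per-link information exposed to the strategy; deliberately
    // separate from the loss-notification callback, as suggested above.
    class LinkInfo
    {
    public:
      // true if a loss on this link should be interpreted as non-congestive
      // (bit errors); defaults to false, i.e., treat losses as congestion
      bool
      isLossNonCongestive() const
      {
        return m_lossNonCongestive;
      }

      // the link-layer RTO currently used by the loss detection
      std::chrono::milliseconds
      getLinkRto() const
      {
        return m_rto;
      }

    private:
      bool m_lossNonCongestive = false;
      std::chrono::milliseconds m_rto{0};
    };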

Actions #33

Updated by Eric Newberry about 7 years ago

Klaus Schneider wrote:

I looked over the discussion and slides again and I feel that the design is becoming too complex. We currently have the following parts:

  • Acknowledging each link layer frame
  • Piggybacking each ACK on Interests/Data (with an ack delay timeout for sending standalone ACKs)
  • Identifying packet loss via both sequence numbers and timeouts
  • Enabling ACK redundancy (sending each ACK multiple times)?

Each ACK should only be sent once. Lost ACKs just result in the link-layer packet being retransmitted.

  • Storing packets in a link layer "transmit cache" to eventually retransmit them
  • Notifying the strategy via a callback
  • Informing the strategy about certain link information (WiFi vs. UDP tunnel; the employed link timeout)
  • Fragmenting and re-assembling network-layer packets into link-layer frames.

This is already taken care of by existing features of the link protocol. The design receives packets from this system for transmission and hands received packets over to it after the contained ACKs have been processed.

  • A protocol for negotiating the link reliability features

Thus, I suggest the following simplifications:

  1. Outsource the Negotiation part to another design issue/commit. How are other NDNLP features negotiated currently/in the future? The negotiation of link loss detection should be done in the same way. Maybe, as a quick-and-dirty solution simply ignore the link layer header if it isn't supported by both routers that share the link?

I don't know of any link layer features that are negotiated. The only other one I can think of that involves communication is the yet-to-be-officially-designed-or-implemented BFD.

  1. Drop the link layer re-transmissions. The strategy sends a packet and receives a callback notification if it was lost. This simplifies both the logic of the link layer design (fewer steps; no more transmit cache needed) and the notification to the strategy layer (fewer parameters in the callback).

I personally think of retx happening at the link layer since I've been working with that understanding for so long. However, I have limited experience working with the network layer, so I don't claim to understand which is better at this point.

Actions #34

Updated by Anonymous about 7 years ago

Eric Newberry wrote:

Klaus Schneider wrote:

I looked over the discussion and slides again and I feel that the design is becoming too complex. We currently have the following parts:

  • Acknowledging each link layer frame
  • Piggybacking each ACK on Interests/Data (with an ack delay timeout for sending standalone ACKs)
  • Identifying packet loss via both sequence numbers and timeouts
  • Enabling ACK redundancy (sending each ACK multiple times)?

Each ACK should only be sent once. Lost ACKs just result in the link-layer packet being retransmitted.

You are asserting that we should not use ACK redundancy without engaging in the argument the tech report made in support of its use:

The link-layer packet being retransmitted is a much higher overhead (lost bandwidth of the packet, additional delay, and processing) than piggybacking the ACK sequence number in multiple packets. However, the redundant ACK needs to be sent for every packet while it can only save 'spurious' retransmissions for some of the packets. The question is how many retx can be avoided by ACK redundancy and whether this is a good trade-off.
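(For scale, a rough back-of-the-envelope under the optimistic assumption of independent losses at rate p: a single piggybacked ACK is lost with probability p, while all 3 copies are lost with probability p^3, so at p = 1% the chance of a spurious retransmission caused by lost ACKs drops from 10^-2 to 10^-6, at the cost of carrying each sequence number two extra times. Bursty loss breaks the independence assumption, which is exactly the concern raised in note-6.)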

My point is that we should discuss whether or not to keep this feature.

  • Storing packets in a link layer "transmit cache" to eventually retransmit them
  • Notifying the strategy via a callback
  • Informing the strategy about certain link information (WiFi vs. UDP tunnel; the employed link timeout)
  • Fragmenting and re-assembling network-layer packets into link-layer frames.

This is already taken care of by existing features of the link protocol. The design receives packets from this system for transmission and hands received packets over to it after the contained ACKs have been processed.

Sounds good.

  • A protocol for negotiating the link reliability features

Thus, I suggest the following simplifications:

  1. Outsource the Negotiation part to another design issue/commit. How are other NDNLP features negotiated currently/in the future? The negotiation of link loss detection should be done in the same way. Maybe, as a quick-and-dirty solution simply ignore the link layer header if it isn't supported by both routers that share the link?

I don't know of any link layer features that are negotiated. The only other one I can think of that involves communication is the yet-to-be-officially-designed-or-implemented BFD.

So why does the link loss detection need a negotiation when all the other NDNLP features work without it?

  1. Drop the link layer re-transmissions. The strategy sends a packet and receives a callback notification if it was lost. This simplifies both the logic of the link layer design (fewer steps; no more transmit cache needed) and the notification to the strategy layer (fewer parameters in the callback).

I personally think of retx happening at the link layer since I've been working with that understanding for so long. However, I have limited experience working with the network layer, so I don't claim to understand which is better at this point.

Which layer should do the retransmissions is still an open question and probably worth a longer discussion. Retransmitting at the link-layer hides the losses from the network layer, but introduces a larger and more variable delay. This is similar to WiFi retransmissions hiding the losses from the TCP protocol (but with different results since the NDN network layer can retransmit at each hop, but TCP can't).

My point here is to only implement the link layer loss detection first and worry about the link retx later. The loss detection alone provides a huge number of new possibilities for forwarding strategies to experiment with (retx, trying other links, signaling to downstream routers, recording link quality statistics).

Actions #35

Updated by Eric Newberry about 7 years ago

Klaus Schneider wrote:

Eric Newberry wrote:

Klaus Schneider wrote:

I looked over the discussion and slides again and I feel that the design is becoming too complex. We currently have the following parts:

  • Acknowledging each link layer frame
  • Piggybacking each ACK on Interests/Data (with an ack delay timeout for sending standalone ACKs)
  • Identifying packet loss via both sequence numbers and timeouts
  • Enabling ACK redundancy (sending each ACK multiple times)?

Each ACK should only be sent once. Lost ACKs just result in the link-layer packet being retransmitted.

You are asserting that we should not use ACK redundancy without engaging in the argument the tech report made in support of its use:

The link-layer packet being retransmitted is a much higher overhead (lost bandwidth of the packet, additional delay, and processing) than piggybacking the ACK sequence number in multiple packets. However, the redundant ACK needs to be sent for every packet while it can only save 'spurious' retransmissions for some of the packets. The question is how many retx can be avoided by ACK redundancy and whether this is a good trade-off.

I agree with using multiple ACKs.

My point is that we should discuss whether or not to keep this feature.

  • Storing packets in a link layer "transmit cache" to eventually retransmit them
  • Notifying the strategy via a callback
  • Informing the strategy about certain link information (WiFi vs. UDP tunnel; the employed link timeout)
  • Fragmenting and re-assembling network-layer packets into link-layer frames.

This is already taken care of by existing features of the link protocol. The design receives packets from this system for transmission and hands received packets over to it after the contained ACKs have been processed.

Sounds good.

  • A protocol for negotiating the link reliability features

Thus, I suggest the following simplifications:

  1. Outsource the Negotiation part to another design issue/commit. How are other NDNLP features negotiated currently/in the future? The negotiation of link loss detection should be done in the same way. Maybe, as a quick-and-dirty solution simply ignore the link layer header if it isn't supported by both routers that share the link?

I don't know of any link layer features that are negotiated. The only other one I can think of that involves communication is the yet-to-be-officially-designed-or-implemented BFD.

So why does the link loss detection need a negotiation when all the other NDNLP features work without it?

The other features do not require any negotiation to function properly. It may be possible to do the same with this protocol, but I wasn't able to come up with any designs when I looked into that possibility.

  1. Drop the link layer re-transmissions. The strategy sends a packet and receives a callback notification if it was lost. This simplifies both the logic of the link layer design (fewer steps; no more transmit cache needed) and the notification to the strategy layer (fewer parameters in the callback).

I personally think of retx happening at the link layer since I've been working with that understanding for so long. However, I have limited experience working with the network layer, so I don't claim to understand which is better at this point.

Which layer should do the retransmissions is still an open question and probably worth a longer discussion. Retransmitting at the link-layer hides the losses from the network layer, but introduces a larger and more variable delay. This is similar to WiFi retransmissions hiding the losses from the TCP protocol (but with different results since the NDN network layer can retransmit at each hop, but TCP can't).

My point here is to only implement the link layer loss detection first and worry about the link retx later. The loss detection alone provides a huge number of new possibilities for forwarding strategies to experiment with (retx, trying other links, signaling to downstream routers, recording link quality statistics).

I designed the protocol in accordance with the tech report and our discussion in this issue, so I included retransmissions. They could probably be removed from the protocol without much effort.

Actions #36

Updated by Anonymous about 7 years ago

Results of today's NFD call:

  1. Outsource the negotiation feature to a different issue. Make the link loss detection work without negotiation.

  2. Keep link-layer retx. However, see if we can simplify the design.

  3. Decide whether to keep sender RTO measurements in addition to seq. numbers. I would say keep them if they're simple to implement.

  4. We didn't have time to talk about ACK duplication. However, for simplicity I would suggest leaving it out for now.

Actions #37

Updated by Anonymous about 7 years ago

Some more questions:

  • Is there a stand-alone packet in NDNLP that carries only ACKs but neither Interest nor Data? Do we need to specify it here? https://redmine.named-data.net/projects/nfd/wiki/NDNLPv2

  • On slide 7: What is "maxRtt"? Why do we need it?

  • What's the difference between "Sender -- Receiver Process" and "Receiver"?

I edited a few things and removed the slides about negotiation. If still needed, you can find them here: https://docs.google.com/presentation/d/18UV8R23BKY-sI-z8uv9xUnbOoCO_HLY8LJH9eilwd0I/edit#slide=id.g1bdf9f6da7_0_40

Actions #38

Updated by Eric Newberry about 7 years ago

Klaus Schneider wrote:

Some more questions:

  • Is there a stand-alone packet in NDNLP that carries only ACKs but neither Interest nor Data? Do we need to specify it here?

This exists in the spec. Interests and Data go into the Fragment field of the LpPacket. A packet without a Fragment field is called an IDLE packet. In my design, I made the Ack a link-layer field.

  • On slide 7: What is "maxRtt"? Why do we need it?

It refers to the maximum number of times a packet may be transmitted before the sender gives up. I meant to use "maxRetx", but probably typed "maxRtt" out of habit. It's been corrected in the slides.

  • What's the difference between "Sender -- Receiver Process" and "Receiver"?

The former is the receive pipeline on the sending node and the latter is on the receiving node. I've renamed the slides "Sender - Recv Subsystem" and "Sender - Recv Process" to avoid confusion.

Actions #39

Updated by Anonymous about 7 years ago

Eric Newberry wrote:

Klaus Schneider wrote:

Some more questions:

  • Is there a stand-alone packet in NDNLP that carries only ACKs but neither Interest nor Data? Do we need to specify it here?

This exists in the spec. Interests and Data go into the Fragment field of the LpPacket. A packet without a Fragment field is called an IDLE packet. In my design, I made the Ack a link-layer field.

Maybe you can add the specification for the ACK on the Wiki page?

It should also support adding multiple ACK sequence numbers per packet.

  • On slide 7: What is "maxRtt"? Why do we need it?

It refers to the maximum number of times a packet may be transmitted before the sender gives up. I meant to use "maxRetx", but probably typed "maxRtt" out of habit. It's been corrected in the slides.

Sounds good.

  • What's the difference between "Sender -- Receiver Process" and "Receiver"?

The former is the receive pipeline on the sending node and the latter is on the receiving node. I've renamed the slides "Sender - Recv Subsystem" and "Sender - Recv Process" to avoid confusion.

It's still a bit confusing. Each node is both sending and receiving packets, right?

Some suggestions:

  • Clarify what is being sent/received: the data frames or the ACKs?
  • Clarify the difference between "Fragment NetPkt" (where does NetPkt come from?) and "Receive packet from Transport" (what is transport?)
  • Clarify the difference between network layer packets (Interest/Data), link layer fragments, and link layer frames (containing a network layer packet, an ACK number, or both). Don't use the term "packet" when talking about the link layer frame.
  • What is a "sequence"? I think you mean "sequence number" in most cases.

Why does the "Receiver" only send IDLE packets, but not piggyback on other data frames?

Actions #40

Updated by Eric Newberry about 7 years ago

Klaus Schneider wrote:

Eric Newberry wrote:

Klaus Schneider wrote:

Some more questions:

This exists in the spec. Interests and Data go into the Fragment field of the LpPacket. A packet without a Fragment field is called an IDLE packet. In my design, I made the Ack a link-layer field.

Maybe you can add the specification for the ACK on the Wiki page?

This is usually done after the change is approved. I think we can assume the field is added for now while we're developing the system.

It should also support adding multiple ACK sequence numbers per packet.

We can allow the field to be repeated. In this case, multiple Ack fields could be in each LpPacket.
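As a sketch of what that could look like with ndn-cxx's lp::Packet, where repeatable fields use add<>() rather than set<>() (hedged: the exact AckField name depends on the patch discussed later in this thread):

    #include <ndn-cxx/lp/fields.hpp>
    #include <ndn-cxx/lp/packet.hpp>

    // Build an LpPacket that acknowledges two sequence numbers at once.
    ndn::lp::Packet
    makeAckOnlyPacket()
    {
      ndn::lp::Packet pkt;               // no Fragment field => an IDLE packet
      pkt.add<ndn::lp::AckField>(17);    // acknowledge sequence number 17
      pkt.add<ndn::lp::AckField>(19);    // a second Ack field in the same packet
      return pkt;
    }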

  • What's the difference between "Sender -- Receiver Process" and "Receiver"?

The former is the receive pipeline on the sending node and the latter is on the receiving node. I've renamed the slides "Sender - Recv Subsystem" and "Sender - Recv Process" to avoid confusion.

It's still a bit confusing. Each node is both sending and receiving packets, right?

Yes. This is why I developed an integrated system earlier, with the expectation that each host will be both a sender and a receiver (both requesting and returning ACKs). This simplifies the design, with identical send and receive pipelines on both endpoints. I still don't really understand why one would want reliability in a single direction.

Some suggestions:

  • Clarify what is being sent/received: the data frames or the ACKs?
  • Clarify the difference between "Fragment NetPkt" (where does NetPkt come from?) and "Receive packet from Transport" (what is transport?)

NetPkt is the network-layer packet, like a Data or Interest. It is submitted to the Face system to be transmitted on the link.

The Transport is a component of the NFD Face system. It handles the protocol-specific functions of the Face. As such, we have an EthernetTransport, a UnicastUdpTransport, a WebSocketTransport, and so on. The systems in my design are located in the LinkService, another portion of the Face system.

  • Clarify the difference between network layer packets (Interest/Data), link layer fragments, and link layer frames (containing a network layer packet, an ACK number, or both). Don't use the term "packet" when talking about the link layer frame.

A link layer frame is referred to as an LpPacket.

  • What is a "sequence"? I think you mean "sequence number" in most cases.

In NDNLP, the sequence number field is called a "Sequence" (with capitalization).

Why does the "Receiver" only send IDLE packets, but not piggyback on other data frames?

I must have overlooked this...

Actions #41

Updated by Anonymous about 7 years ago

Okay, please also put these clarifications in the slides.

Actions #42

Updated by Anonymous about 7 years ago

  • What's the difference between "Sender -- Receiver Process" and "Receiver"?

The former is the receive pipeline on the sending node and the latter is on the receiving node. I've renamed the slides "Sender - Recv Subsystem" and "Sender - Recv Process" to avoid confusion.

It's still a bit confusing. Each node is both sending and receiving packets, right?

Yes. This is why I developed an integrated system earlier, with the expectation that each host will be both a sender and a receiver (both requesting and returning ACKs). This simplifies the design, with identical send and receive pipelines on both endpoints. I still don't really understand why one would want reliability in a single direction.

There is no need for reliability in one direction, but I wanted to clarify which actions happen on which node.

Some suggestions:

  • Clarify what is being sent/received: the data frames or the ACKs?
  • Clarify the difference between "Fragment NetPkt" (where does NetPkt come from?) and "Receive packet from Transport" (what is transport?)

NetPkt is the network-layer packet, like a Data or Interest. It is submitted to the Face system to be transmitted on the link.

You still didn't answer my first question. I think the "Send Subsystem" means that a new LpPacket (link layer frame) has arrived. The "Recv Subsystem" means that a new packet arrived from the network layer (application or different face). Am I correct?

If yes, I suggest renaming these functions to onIncomingNetworkLayerPkt() and onIncomingLPPacket() or something similar.

The Transport is a component of the NFD Face system. It handles the protocol-specific functions of the Face. As such, we have an EthernetTransport, a UnicastUdpTransport, a WebSocketTransport, and so on. The systems in my design are located in the LinkService, another portion of the Face system.

Maybe put in the slides that "Transport" is part of each face to avoid confusion with the transport layer or the more general notion of "transporting" packets.

Actually "Receive LpPacket from Face" is much clearer than "Receive packet from Transport".

  • Clarify the difference between network layer packets (Interest/Data), link layer fragments, and link layer frames (containing a network layer packet, an ACK number, or both). Don't use the term "packet" when talking about the link layer frame.

A link layer frame is referred to as an LpPacket.

Sounds good, but please use it consistently in the slides. Searching for "packet" still reveals some ambiguous use like "Receive packet from Transport" on slide 8.

  • What is a "sequence"? I think you mean "sequence number" in most cases.

In NDNLP, the sequence number field is called a "Sequence" (with capitalization).

I would suggest to rename this field in the slides and in NDNLP to avoid confusion.

A "sequence" is different from a "sequence number", as seen by the following two headers:

  • The TCP header contains sequence numbers, which do exactly what we are doing here in NDNLP: giving a unique identifier to each packet/frame
  • The Ethernet header contains a frame check sequence, which is used not to identify the packet, but for error detection.
Actions #43

Updated by Anonymous about 7 years ago

  • Description updated (diff)
Actions #44

Updated by Anonymous about 7 years ago

Minor point: The NDNLP spec says that the sequence number is a "fixed-width unsigned integer", while the ACK number is a "64-bit non-negative integer".

Shouldn't it be the same length?

Actions #45

Updated by Eric Newberry about 7 years ago

Klaus Schneider wrote:

Minor point: The NDNLP spec says that the sequence number is a "fixed-width unsigned integer", while the ACK number is a "64-bit non-negative integer".

Shouldn't it be the same length?

Good catch. In the current implementation of NDNLP, a Sequence is defined as a uint64_t. I corrected the slides to use the NDNLP notation.

Actions #46

Updated by Eric Newberry about 7 years ago

Klaus Schneider wrote:

  • What's the difference between "Sender -- Receiver Process" and "Receiver"?

The former is the receive pipeline on the sending node and the latter is on the receiving node. I've renamed the slides "Sender - Recv Subsystem" and "Sender - Recv Process" to avoid confusion.

It's still a bit confusing. Each node is both sending and receiving packets, right?

Yes. This is why I developed an integrated system earlier, with the expectation that each host will be both a sender and a receiver (both requesting and returning ACKs). This simplifies the design, with identical send and receive pipelines on both endpoints. I still don't really understand why one would want reliability in a single direction.

There is no need for reliability in one direction, but I wanted to clarify which actions happen on which node.

Some suggestions:

  • Clarify what is being sent/received: the data frames or the ACKs?
  • Clarify the difference between "Fragment NetPkt" (where does NetPkt come from?) and "Receive packet from Transport" (what is transport?)

NetPkt is the network-layer packet, like a Data or Interest. It is submitted to the Face system to be transmitted on the link.

You still didn't answer my first question. I think the "Send Subsystem" means that a new LpPacket (link layer frame) has arrived. The "Recv Subsystem" means that a new packet arrived from the network layer (application or different face). Am I correct?

If yes, I suggest renaming these functions to onIncomingNetworkLayerPkt() and onIncomingLPPacket() or something similar.

The Send Subsystem is used when a network layer packet is going to be transmitted on the link, while the Receive subsystem is used when a link layer frame is received from the link and is going to be passed up to the network layer.

We need to work around existing systems in NDNLP that handle sending and receiving, like fragmentation and reassembly and sequence number assignment. The functions that handle sending and receiving are called sendNetPacket() and doReceivePacket(), respectively. If the notation were changed, I think it would be best to stay consistent with the existing names.

The Transport is a component of the NFD Face system. It handles the protocol-specific functions of the Face. As such, we have an EthernetTransport, a UnicastUdpTransport, a WebSocketTransport, and so on. The systems in my design are located in the LinkService, another portion of the Face system.

Maybe put in the slides that "Transport" is part of each face to avoid confusion with the transport layer or the more general notion of "transporting" packets.

This should be on the "Terminology and Definitions" slide.

Actually "Receive LpPacket from Face" is much clearer than "Receive packet from Transport".

The LinkService, where this system resides, is part of the Face, as is the Transport. Received frames are directly handed from the Transport to the LinkService. The Face is really just an abstraction and a wrapper around these two classes.

  • Clarify the difference between network layer packets (Interest/Data), link layer fragments, and link layer frames (containing a network layer packet, an ACK number, or both). Don't use the term "packet" when talking about the link layer frame.

A link layer frame is referred to as an LpPacket.

Sounds good, but please use it consistently in the slides. Searching for "packet" still reveals some ambiguous use like "Receive packet from Transport" on slide 8.

Fixed.

  • What is a "sequence"? I think you mean "sequence number" in most cases.

In NDNLP, the sequence number field is called a "Sequence" (with capitalization).

I would suggest to rename this field in the slides and in NDNLP to avoid confusion.

A "sequence" is different from a "sequence number", as seen by the following two headers:

  • The TCP header contains sequence numbers, which do exactly what we are doing here in NDNLP: giving a unique identifier to each packet/frame
  • The Ethernet header contains a frame check sequence, which is used not to identify the packet, but for error detection.

You should bring this up with Junxiao, as he's in charge of the NDNLP design.

Actions #47

Updated by Anonymous about 7 years ago

I've attached the current snapshot of the slides. We discussed the design with Beichuan and we think it's ready to start the implementation now.

Also, here's Junxiao's answer regarding the ACK sizing:

Hi Klaus

NDNLPv2 spec does not require the Sequence number field to have a specific width. It's a per-link decision.

Sequence contains a sequence number that is useful in multiple features.
This field is REQUIRED if any enabled feature is using sequence numbers, otherwise it's OPTIONAL.
Bit width of the sequence is determined on a per-link basis; 8-octet is recommended for today's links.
A host MUST generate consecutive sequence numbers for outgoing packets on the same face.

To choose the Sequence width on a link, one should consider a trade-off between
(1) the Sequence should be wide enough to avoid wrapping sooner than a few RTTs;
(2) the bandwidth consumption of the Sequence field, especially on low-MTU links.

The recommendation of 8-octet applies to most links in Internet infrastructure, home Internet access, data center, etc.
Sensor networks and personal area networks may need shorter Sequence field.
Core Internet (100Gbps or above) may need longer Sequence field.

Yours, Junxiao

Actions #48

Updated by Eric Newberry about 7 years ago

We need a TLV-TYPE assigned for Ack. I think 836 works, as the field should be able to be ignored safely. Junxiao, can I go ahead and add this assignment and a description of the BELRP feature to the NDNLPv2 spec?

Actions #49

Updated by Anonymous about 7 years ago

I would say, go ahead and add it.

Actions #50

Updated by Eric Newberry about 7 years ago

The field and feature description have been added to the wiki page.

Actions #51

Updated by Eric Newberry about 7 years ago

A patch adding the Ack field to ndn-cxx has been pushed to Gerrit.

Actions #52

Updated by Eric Newberry about 7 years ago

What information should the callbacks provide to the strategy?

Actions #53

Updated by Davide Pesavento about 7 years ago

Eric Newberry wrote:

We need a TLV-TYPE assigned for Ack. I think 836 works, as the field should be able to be ignored safely. Junxiao, can I go ahead and add this assignment and a description of the BELRP feature to the NDNLPv2 spec?

Well, from what I understand Acks will be used very frequently (on links that enable BELRP), so the TLV-TYPE for Ack sounds like a good candidate for 1-octet encoding...

Actions #54

Updated by Eric Newberry about 7 years ago

Davide Pesavento wrote:

Well, from what I understand Acks will be used very frequently (on links that enable BELRP), so the TLV-TYPE for Ack sounds like a good candidate for 1-octet encoding...

I would agree. However, Acks should be ignorable, and only 2-octet-encoded fields can have this.

Actions #55

Updated by Davide Pesavento about 7 years ago

Eric Newberry wrote:

I would agree. However, Acks should be ignorable, and only 2-octet-encoded fields can have this.

Yeah, that's also true. It seems we have no choice unfortunately.

Actions #56

Updated by Junxiao Shi about 7 years ago

A patch adding the Ack field to ndn-cxx has been pushed to Gerrit.

See note-4: please create two separate issues for:

  • NDNLP implementation (including fields and link service)
  • forwarding changes (getting the signal into strategy)
Actions #57

Updated by Eric Newberry about 7 years ago

  • Related to Feature #3931: Implement NDNLP Link Reliability Protocol added
Actions #58

Updated by Eric Newberry about 7 years ago

  • % Done changed from 30 to 50

In a meeting yesterday (March 14, 2017), Beichuan, Klaus, and I decided to adopt a method similar to TCP's to detect packet loss based upon sequence numbers. Under this system, if a specific number of acknowledgements (by default 3) with greater sequence numbers are received, the frame is considered lost. However, when implementing this, I was unable to develop a system that allowed this process to reliably recur for each retransmission of a frame: frames are retransmitted with the original sequence number of the first transmission, and (given constant traffic on the link) any 3 received acknowledgements with greater sequence numbers would likely trigger another retransmission before the first retransmission could reach the other end of the link. Therefore, the current proposed implementation only allows this process to occur once per frame (see the sketch below).

In the paper by S. Vusirikala et al., separate frame and packet numbers are used to detect gaps in acknowledged sequence numbers. Packet numbers are like our current sequence numbers and stay constant across retransmissions. However, a different frame number is assigned for each retransmission, and acknowledgements are based upon frame number.
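A minimal sketch (hypothetical names) of the once-per-frame gap detection described in the first paragraph above:

    #include <cstdint>
    #include <map>

    class GapLossDetector
    {
    public:
      void
      onSend(uint64_t seq)
      {
        m_unacked.emplace(seq, State{});
      }

      // Called for each received acknowledgement; returns true if any
      // outstanding frame crossed the loss threshold.
      bool
      onAck(uint64_t ackedSeq)
      {
        bool lossDetected = false;
        for (auto& [seq, st] : m_unacked) {
          if (seq < ackedSeq && !st.retxTriggered &&
              ++st.nGreaterAcks >= LOSS_THRESHOLD) {
            st.retxTriggered = true; // only once per frame, per the text above
            lossDetected = true;
          }
        }
        m_unacked.erase(ackedSeq);
        return lossDetected;
      }

    private:
      struct State
      {
        unsigned nGreaterAcks = 0;
        bool retxTriggered = false;
      };

      static constexpr unsigned LOSS_THRESHOLD = 3; // "by default 3"
      std::map<uint64_t, State> m_unacked;
    };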

Actions #59

Updated by Junxiao Shi about 7 years ago

Reply to note-58:

If LpPackets are delivered out of order, some of them will be considered lost by the sender.

  1. Sender transmits LpPackets with sequence 1,2,3,4 in this order.
  2. Recipient receives in 2,3,4,1 order, and transmits Acks in one LpPacket.
  3. Sender processes Ack fields in 2,3,4,1 order. After processing 2,3,4, there are three sequences greater than 1, so the sender considers 1 as lost.

Encoding Acks in order does not help, because {2,3,4} and {1} may appear in separate LpPackets.

I'm not saying this design is wrong, but I want readers to be aware of this behavior.

Actions #60

Updated by Eric Newberry almost 7 years ago

  • % Done changed from 50 to 70

The design in the Google Doc linked in the issue description has been updated to include sequentially-assigned frame numbers, ACKs acknowledging frame numbers instead of sequence numbers, and configurable timer periods and thresholds.

Each transmitted packet will now be assigned a frame number, which is different for every retransmission. This frame number mechanism is based upon the NDN BELRP tech report. ACKs acknowledge frame numbers, which the sender of the original frame maps back to the fragment's sequence number. This avoids the issue of the sequence number gap detection mechanism almost immediately causing another retransmission of a recently-retransmitted fragment, since that mechanism would treat any three (or another configurable threshold) acknowledged sequence numbers greater than the original sequence number as a loss.

Actions #61

Updated by Eric Newberry almost 7 years ago

In order to assign frame numbers to LpPackets, a new TLV-TYPE code will need to be assigned. I would suggest giving "Frame" the code 840, which is an ignorable field (its 2 least-significant bits are 00). This field can be safely ignored, since the contained fragment can still be received correctly whether or not the receiver parses the LpPacket for a Frame field to acknowledge.
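As a quick sanity check of the ignorable-bits rule cited above (assuming the NDNLPv2 convention that a field whose TLV-TYPE has its two least-significant bits clear can be safely skipped):

    // 840 = 0b1101001000 and 836 (Ack) = 0b1101000100:
    // both have their two least-significant bits clear.
    static_assert((840 & 0b11) == 0, "Frame (840) is an ignorable TLV-TYPE");
    static_assert((836 & 0b11) == 0, "Ack (836) is ignorable as well");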

Actions #62

Updated by Junxiao Shi almost 7 years ago

I agree with note-60 and note-61.

Having a separate Frame field allows the sender to choose, at per-packet granularity, whether acknowledgement is wanted, which was supported as the Request Link Acknowledgement flag in NDNLPv1.

The immediate implication is that the receiver end of BELRP should not be disabled. Disabling the BELRP receiver while the sender is still enabled causes incoming packets to be accepted but not acknowledged, which would trigger the sender to retransmit the LpPacket and eventually give up. Disabling the sender causes the Frame field to disappear, and thus the receiver would not acknowledge.

The capability of making a per-packet choice can be explored in the future. A possibility is to let the strategy opt out of acknowledgement for packets that are time sensitive but loss tolerant, such as an FEC-enabled video stream.

It's correct that Frame is an ignorable field. When Frame is ignored, the receiver would not return acknowledgements, which would trigger the sender to retransmit the LpPacket and eventually give up. But this is still better than dropping the LpPacket, in which case the sender would still retransmit.

Actions #63

Updated by Eric Newberry almost 7 years ago

The Frame TLV-TYPE has been added to the NDNLPv2 wiki page and the description of the reliability protocol has been updated.

Actions #64

Updated by Junxiao Shi almost 7 years ago

The TLV structure of the Frame field is missing from the "Link Layer Reliability" section. Also, the name Frame is easily confused with "Ethernet frame". Suggestion: FrameNo or FrameNumber.

Actions #65

Updated by Eric Newberry almost 7 years ago

In a meeting on April 4, 2017, Beichuan suggested that we resolve the naming confusion by renaming the Sequence field to FragSequence and Frame to TxSequence. I would like to propose changing the existing LP implementation to use these new field names, as they are more descriptive of what that particular sequence number represents. The existing Sequence field holds a sequence number identifying a particular packet, which does not change between transmissions. Meanwhile, Frame/TxSequence changes with each transmission/retransmission of the packet.

Actions #66

Updated by Junxiao Shi almost 7 years ago

I disagree with renaming the existing Sequence field. That field is meant to be shared among multiple features, including fragmentation and packet injection prevention; it is not specific to fragmentation. If it were just for fragmentation, a MessageIdentifier field would be easier to use than Sequence. Also, that field is fully compatible with NDNLPv1 link acknowledgement.

I agree with having the new field named TxSequence. This field is specific to this BELRP design.

Actions #67

Updated by Eric Newberry almost 7 years ago

Since no change to the existing Sequence field is proposed, I agree with the design in note 66. We can leave a potential renaming of Sequence for later, but add TxSequence now.

Actions #68

Updated by Eric Newberry almost 7 years ago

The Frame field has been renamed to TxSequence in NDNLPv2 r11 and the TLV structure of the field has been added in r12. In addition, the design Google Doc has been updated to use the new name of the field.

Actions #69

Updated by Junxiao Shi almost 7 years ago

In the review of https://gerrit.named-data.net/#/c/3848/4, it is claimed that both TxSequence and Sequence are required for every packet. I doubt the necessity of the Sequence field and of a "map TxSequence to sequence number" step.
When fragmentation is disabled, or a network-layer packet fits in a single fragment, only TxSequence is needed.
The sender can operate as described below.

To send an LpPacket (already fragmented if needed):

  1. assign a TxSequence number
  2. save the LpPacket in the retransmission buffer, set retry count of the entry to zero
  3. transmit the LpPacket

After a loss is detected:

  1. retrieve the retransmission buffer entry
  2. if retry count is over threshold, give up (see below)
  3. increment retry count
  4. assign a new TxSequence number to the LpPacket, update the retransmission buffer entry accordingly
  5. transmit the LpPacket

To give up:

  1. if the LpPacket belongs to a fragmented network-layer packet, delete the retransmission buffer entries for all of its fragments
  2. otherwise, delete the retransmission buffer entry

In the steps above, determining whether "the LpPacket belongs to a fragmented network-layer packet" is the only place where the Sequence field is needed. That condition evaluates to false when Sequence is omitted.
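
For concreteness, a compact sketch of these steps (RetxEntry, m_retxBuffer, MAX_RETX, and the helper calls are illustrative names, not the actual NFD code; population of the siblings list for multi-fragment packets is omitted):

    #include <ndn-cxx/lp/fields.hpp>
    #include <ndn-cxx/lp/packet.hpp>
    #include <map>
    #include <vector>

    struct RetxEntry
    {
      lp::Packet pkt;                 // the (already fragmented) LpPacket
      int nRetx = 0;                  // retry count
      std::vector<uint64_t> siblings; // TxSequences of the other fragments of
                                      // the same network-layer packet, if any
    };

    std::map<uint64_t, RetxEntry> m_retxBuffer; // keyed by TxSequence
    uint64_t m_lastTxSeq = 0;

    void
    send(lp::Packet pkt)
    {
      uint64_t txSeq = ++m_lastTxSeq;      // 1. assign a TxSequence
      pkt.set<lp::TxSequenceField>(txSeq);
      m_retxBuffer[txSeq] = {pkt, 0, {}};  // 2. buffer it, retry count = 0
      transmit(pkt);                       // 3. transmit
    }

    void
    onLossDetected(uint64_t txSeq)
    {
      auto it = m_retxBuffer.find(txSeq);  // 1. retrieve the entry
      if (it == m_retxBuffer.end())
        return;
      if (it->second.nRetx >= MAX_RETX) {  // 2. over threshold: give up,
        for (uint64_t sib : it->second.siblings)
          m_retxBuffer.erase(sib);         //    deleting the entries for all
        m_retxBuffer.erase(it);            //    fragments of the same packet
        return;
      }
      RetxEntry entry = it->second;
      ++entry.nRetx;                       // 3. increment retry count
      uint64_t newTxSeq = ++m_lastTxSeq;   // 4. assign a new TxSequence and
      entry.pkt.set<lp::TxSequenceField>(newTxSeq);
      m_retxBuffer.erase(it);              //    re-key the buffer entry
      m_retxBuffer[newTxSeq] = entry;
      transmit(entry.pkt);                 // 5. retransmit
    }

Note that only the give-up step consults the siblings list, which is exactly where the Sequence field (or any other grouping of fragments) would come into play.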

Actions #70

Updated by Eric Newberry almost 7 years ago

The design in note 69 feels too complicated just to save 10 octets (the size of a wire-encoded sequence number). Since the TxSequence changes with each retransmission, we would need to create a new UnackedFrag object at a new key with every retransmission, as well as update the iterators in the NetPkt.

We could also switch to two different systems, one for fragmented packets with assigned Sequences and one for non-fragmented packets without sequence numbers, but this again seems too complicated.
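
For reference, the bookkeeping being described looks roughly like this (illustrative sketch only, simplified from the code under review):

    struct NetPkt; // a network-layer packet awaiting acknowledgement

    struct UnackedFrag
    {
      lp::Packet frag;
      int nRetx = 0;
      NetPkt* netPkt = nullptr; // back-pointer to the owning network packet
    };

    struct NetPkt
    {
      // one iterator per in-flight fragment; every retransmission re-keys the
      // UnackedFrag under its new TxSequence, so these must be updated too
      std::vector<std::map<uint64_t, UnackedFrag>::iterator> unackedFrags;
    };

    std::map<uint64_t, UnackedFrag> m_unackedFrags; // keyed by TxSequence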

Actions #71

Updated by Junxiao Shi almost 7 years ago

Since the TxSequence changes with each retransmission, we would need to create a new UnackedFrag object at a new key with every retransmission

The same is true with or without the general sequence number. This extra layer of indirection can use something internal to BELRP, instead of relying on the fragmentation feature, which is completely unrelated.

We could also switch to two different systems, one for fragmented packets with assigned Sequences and one for non-fragmented packets without sequence numbers

No. BELRP should not depend on fragmentation.

Actions #72

Updated by Eric Newberry over 6 years ago

  • Related to Feature #4003: FaceMgmt: enable link reliability added
Actions #73

Updated by Eric Newberry over 6 years ago

  • Related to Feature #4004: nfdc face create: reliability option added
Actions #74

Updated by Eric Newberry over 6 years ago

The implementation of the loss detection and retransmission system has been fully merged into the NFD codebase. However, strategy notifications have not been implemented, as the design is still pending.

Actions #75

Updated by Anonymous over 6 years ago

Eric Newberry wrote:

The implementation of the loss detection and retransmission system has been fully merged into the NFD codebase. However, strategy notifications have not been implemented, as the design is still pending.

@Eric: Can you figure out which exact design decision is still pending?

How about we start with the most basic strategy notification onDroppedPacket(type, *packet) that tells the strategy which specific packet type Interest/Data/NACK has been dropped, and the strategy can then decide what to do.
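
As a sketch, such a notification could look like this (signature and enum entirely hypothetical):

    enum class DroppedPacketType { Interest, Data, Nack };

    class Strategy
    {
    public:
      // default: do nothing; a concrete strategy overrides this to decide
      // whether to retransmit, mark congestion, or signal downstream
      virtual void
      onDroppedPacket(DroppedPacketType type, const Block& packet)
      {
      }
    };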

Actions #76

Updated by Davide Pesavento over 6 years ago

  • Related to deleted (Feature #3797: Congestion Control: generic congestion marks)
Actions #77

Updated by Anonymous over 6 years ago

Eric is still working on this one.

Actions #78

Updated by Anonymous over 6 years ago

  • Parent task set to #1624
Actions #79

Updated by Eric Newberry over 6 years ago

Davide has suggested using a different strategy notification for each type of network packet (i.e., onDroppedInterest, onDroppedData, onDroppedNack). This seems to be much easier from an implementation perspective. What does everyone think?

The notifications would be:

    onDroppedInterest(inFace, *packet)
    onDroppedData(inFace, *packet)
    onDroppedNack(inFace, *packet)
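
Expressed as C++ strategy hooks, this would look roughly like the sketch below (parameter types assumed; default implementations do nothing):

    class Strategy
    {
    public:
      virtual void
      onDroppedInterest(const Face& inFace, const Interest& interest) {}

      virtual void
      onDroppedData(const Face& inFace, const Data& data) {}

      virtual void
      onDroppedNack(const Face& inFace, const lp::Nack& nack) {}
    };
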
Actions #80

Updated by Junxiao Shi over 6 years ago

How about we start with the most basic strategy notification onDroppedPacket(type, *packet) that tells the strategy which specific packet type Interest/Data/NACK has been dropped, and the strategy can then decide what to do.

How do you identify which strategy (or strategies) should be informed about a packet loss? Lost Interest is easily attributed to the strategy forwarding it. Lost Data that has satisfied multiple Interests would be attributed to multiple strategies, but the PIT entries may already be gone. Lost Nack is attributed to only one strategy, but the PIT entry may be gone.

Actions #81

Updated by Anonymous over 6 years ago

Eric Newberry wrote:

Davide has suggested using a different strategy notification for each type of network packet (i.e., onDroppedInterest, onDroppedData, onDroppedNack). This seems to be much easier from an implementation perspective. What does everyone think?

The notifications would be:

    onDroppedInterest(inFace, *packet)
    onDroppedData(inFace, *packet)
    onDroppedNack(inFace, *packet)

Sounds good to me.

Actions #82

Updated by Anonymous over 6 years ago

Junxiao Shi wrote:

How about we start with the most basic strategy notification onDroppedPacket(type, *packet) that tells the strategy which specific packet type Interest/Data/NACK has been dropped, and the strategy can then decide what to do.

How do you identify which strategy (or strategies) should be informed about a packet loss?

Lost Interest is easily attributed to the strategy forwarding it.

So nothing to do here.

Lost Data that has satisfied multiple Interests would be attributed to multiple strategies

I think this is already addressed in #4290. For example, one could just ignore the case of multiple strategies, since it is very rare.

, but the PIT entries may already be gone.
Lost Nack is attributed to only one strategy, but the PIT entry may be gone.

Naive solution: Keep the PIT entries around long enough?

This shouldn't be a problem, since the timescale of the onDroppedPacket() notification (which is per link) is lower than the timescale of the average PIT lifetime (which is over the whole path).

Actions #83

Updated by Junxiao Shi over 6 years ago

Lost Nack is attributed to only one strategy, but the PIT entry may be gone.

Naive solution: Keep the PIT entries around long enough?

This shouldn't be a problem, since the timescale of the onDroppedPacket() notification (which is per link) is lower than the timescale of the average PIT lifetime (which is over the whole path).

Per #4369-4, PIT entry is deleted soon after the packet is transmitted. There's no guarantee that a packet loss can be detected fast enough.

Another problem is that a lost Data/Nack is detected by the upstream, not the downstream. What remedy does a strategy at the upstream have? It has no way to inform the downstream to retransmit the Interest.

Actions #84

Updated by Anonymous over 6 years ago

Junxiao Shi wrote:

Lost Nack is attributed to only one strategy, but the PIT entry may be gone.

Naive solution: Keep the PIT entries around long enough?

This shouldn't be a problem, since the timescale of the onDroppedPacket() notification (which is per link) is lower than the timescale of the average PIT lifetime (which is over the whole path).

Per #4369-4, PIT entry is deleted soon after the packet is transmitted. There's no guarantee that a packet loss can be detected fast enough.

It depends on the length of the straggler timer. We can run some experiments with different settings.

Another problem is that a lost Data/Nack is detected by the upstream, not the downstream. What remedy does a strategy at the upstream have? It has no way to inform the downstream to retransmit the Interest.

That's true, the strategy can't do much about that particular Interest.

However, these callbacks might still be useful to let the strategy collect and signal statistical information about the path quality.

Actions #85

Updated by Davide Pesavento over 6 years ago

Klaus Schneider wrote:

Junxiao Shi wrote:

Another problem is that a lost Data/Nack is detected by the upstream, not the downstream. What remedy does a strategy at the upstream have? It has no way to inform the downstream to retransmit the Interest.

That's true, the strategy can't do much about that particular Interest.

However, these callbacks might still be useful to let the strategy collect and signal statistical information about the path quality.

Just a random thought... Given the implementation difficulties encountered in change 4339 (e.g. finding the correct strategy, as pointed out by Junxiao), do we have an actual use case for onDroppedData/Nack? If it's just for stats collection, I don't think we need to involve the strategy, it can be done at the face level.

Actions #86

Updated by Anonymous over 6 years ago

Davide Pesavento wrote:

Just a random thought... Given the implementation difficulties encountered in change 4339 (e.g. finding the correct strategy, as pointed out by Junxiao), do we have an actual use case for onDroppedData/Nack? If it's just for stats collection, I don't think we need to involve the strategy, it can be done at the face level.

Our initial reasoning was that it's cheap/easy to implement onDroppedData/Nack, if we are already implementing onDroppedInterest() anyways.

However, if it turns out to be unreasonably hard, I'm fine with just having onDroppedInterest().

Actions #87

Updated by Davide Pesavento over 6 years ago

Klaus Schneider wrote:

Davide Pesavento wrote:

Just a random thought... Given the implementation difficulties encountered in change 4339 (e.g. finding the correct strategy, as pointed out by Junxiao), do we have an actual use case for onDroppedData/Nack? If it's just for stats collection, I don't think we need to involve the strategy, it can be done at the face level.

Our initial reasoning was that it's cheap/easy to implement onDroppedData/Nack, if we are already implementing onDroppedInterest() anyways.

That doesn't seem to be the case, though... For Data packets we can't simply look up the strategy for the Data name, as it may be different from the strategy that previously handled the corresponding Interest packet. So the notification mechanism for Data would have to be more complicated.

Actions #88

Updated by Anonymous over 6 years ago

Davide Pesavento wrote:

Klaus Schneider wrote:

Davide Pesavento wrote:

Just a random thought... Given the implementation difficulties encountered in change 4339 (e.g. finding the correct strategy, as pointed out by Junxiao), do we have an actual use case for onDroppedData/Nack? If it's just for stats collection, I don't think we need to involve the strategy, it can be done at the face level.

Our initial reasoning was that it's cheap/easy to implement onDroppedData/Nack, if we are already implementing onDroppedInterest() anyways.

That doesn't seem to be the case, though... For Data packets we can't simply look up the strategy for the Data name, as it may be different from the strategy that previously handled the corresponding Interest packet. So the notification mechanism for Data would have to be more complicated.

Okay, then let's just do the onDroppedInterest() notification.

We can think about adding the other two once a use case arises.

Actions #89

Updated by Eric Newberry over 6 years ago

  • % Done changed from 70 to 80

The implementation of the onDroppedInterest() strategy notification has been merged. Is there anything else we need to do for this issue?

Actions #90

Updated by Anonymous over 6 years ago

I think the functionality should be all there.

How confident are you that the notification works as expected? Do we need to do some testing?

Actions #91

Updated by Eric Newberry over 6 years ago

  • Status changed from In Progress to Feedback
  • % Done changed from 80 to 100

Klaus Schneider wrote:

I think the functionality should be all there.

How confident are you that the notification works as expected? Do we need to do some testing?

I expect they work, given that they're just signals being called (and calling other signals, etc.). I was thinking about writing tests for them, but the scenario I thought of seemed too complicated for a unit test.

Also, I'm moving % Done to 100% and the status to Feedback, because the issue description appears to ask only for link loss detection/retransmission and strategy notifications, not for a link type notification (WiFi, etc.). However, I'm guessing that the latter may be difficult to implement anyway, given the separation of the LinkService from the Transport.

Actions #92

Updated by Anonymous over 6 years ago

Maybe you can do some informal testing, just on your machine?

Run NDNLP on a lossy link, set the retransmission limit (retx) to 0, and then have a strategy print a line whenever an Interest is dropped.

In addition, we'll probably have some change in the future that uses onDroppedInterest(), and we'll probably find any existing bugs then.
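
For example, assuming a face created with the link reliability option from #4004 (e.g. via nfdc face create), the test strategy could be as simple as this sketch (class and parameter names hypothetical):

    // In a throwaway test strategy: log every Interest reported lost by BELRP.
    void
    MyTestStrategy::onDroppedInterest(const Face& outFace, const Interest& interest)
    {
      std::cerr << "Dropped Interest " << interest.getName()
                << " on face " << outFace.getId() << std::endl;
    }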

Actions #93

Updated by Eric Newberry over 6 years ago

I ran a test, but it appears that onLostPacket is being called for every frame, even if it has been successfully acknowledged on its first transmission.

Actions #94

Updated by Anonymous over 6 years ago

Any idea what might cause this bug?

Actions #95

Updated by Eric Newberry over 6 years ago

No idea. The timeout calling onLpLostPacket should be cancelled when its frame goes out of scope (i.e., when the UnackedFrag object containing it is deleted). Given that the initial RTO is 1 second and ndnping reports an RTT in the tens of milliseconds on this link, I am at a loss.
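
For what it's worth, a minimal sketch of the intended lifetime coupling, assuming ndn-cxx's scheduler API and its ScopedEventId (which cancels its event on destruction); all other names are illustrative:

    #include <ndn-cxx/util/scheduler.hpp>
    #include <boost/asio/io_service.hpp>
    #include <map>

    class ReliabilitySketch
    {
    public:
      void
      scheduleRto(uint64_t txSeq)
      {
        // (re-)arm the 1-second initial RTO for this fragment
        m_unackedFrags[txSeq].rtoTimer = m_scheduler.schedule(
          ndn::time::seconds(1), [this, txSeq] { onLpPacketLost(txSeq); });
      }

      void
      onAck(uint64_t txSeq)
      {
        // erasing the entry destroys its ScopedEventId, which cancels the
        // pending timeout, so onLpPacketLost should never fire for an
        // acknowledged fragment
        m_unackedFrags.erase(txSeq);
      }

    private:
      void
      onLpPacketLost(uint64_t txSeq); // would eventually notify the strategy

      struct UnackedFrag
      {
        ndn::scheduler::ScopedEventId rtoTimer;
      };

      boost::asio::io_service m_io;
      ndn::Scheduler m_scheduler{m_io};
      std::map<uint64_t, UnackedFrag> m_unackedFrags;
    };

If the observed behavior contradicts this, the first thing to check would be whether the timeout event is really destroyed together with its UnackedFrag.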

Actions #96

Updated by Anonymous over 6 years ago

Looks like we need a more formal integration test for this?

Actions #97

Updated by Davide Pesavento over 6 years ago

Klaus Schneider wrote:

Looks like we need a more formal integration test for this?

Not necessarily an integration test. A unit test should be able to catch this kind of bug.

Actions #98

Updated by Anonymous over 6 years ago

  • Status changed from Feedback to Closed

Moving the tests to a new task #4391

Actions #99

Updated by Anonymous over 6 years ago

  • Related to Task #4391: Congestion Control: Test Local Link Loss Detection added