Bug #5035

closed

IP fragmentation causes ndncatchunks to time out indefinitely

Added by Anonymous over 4 years ago. Updated over 4 years ago.

Status:
Rejected
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
Start date:
Due date:
% Done:

0%

Estimated time:

Description

How to reproduce:

  1. Run ndnputchunks with the default chunk size (4400 bytes); see the example commands after this list
  2. Create some packet loss via tc:
sudo tc qdisc add dev $IF root handle 1:0 netem delay 10ms loss 5%
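
For example, the full sequence might look like this (a sketch; the prefix /2, the input file, and the -v flag are just examples, and the two hosts are assumed to already be connected via a UDP tunnel):

ndnputchunks /2 < 2m.txt        # producer host, default chunk size
ndncatchunks -v /2 > /dev/null  # consumer host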

Then catchunks keeps retransmitting the same packets, without success, until the retry limit is reached:

Retransmitting segment #273
# of retries for segment #273 is 12
Retransmitting segment #274
# of retries for segment #274 is 12
Retransmitting segment #276
# of retries for segment #276 is 12
Retransmitting segment #277
# of retries for segment #277 is 12
Retransmitting segment #278
# of retries for segment #278 is 12
Retransmitting segment #279
# of retries for segment #279 is 12
Retransmitting segment #280
# of retries for segment #280 is 12
Retransmitting segment #281
# of retries for segment #281 is 12
Retransmitting segment #435

Looking at the NFD logs, the Data packet seems to disappear somewhere between the second Forwarder and the first Forwarder. That is, NFD2 sends the Data packet out on the UDP face, but it never arrives at NFD1 (at least not according to the debug log).

The problem completely goes away when the Data packet is small enough not to be fragmented.

ndnputchunks -s 1100 /2 < 2m.txt

Any ideas what might cause this?


Files

nfd1.log.txt (771 KB) nfd1.log.txt Davide Pesavento, 10/27/2019 09:41 AM
nfd2.log.txt (1.09 MB) nfd2.log.txt Davide Pesavento, 10/27/2019 09:41 AM
trace.pcapng (4.11 MB) trace.pcapng Davide Pesavento, 10/27/2019 09:41 AM
nfd2_trace.log (1.24 MB) nfd2_trace.log Anonymous, 10/27/2019 11:19 AM
host1_trace_new.pcapng (1.54 MB) host1_trace_new.pcapng Anonymous, 10/27/2019 11:19 AM
nfd1_trace.log (1.17 MB) nfd1_trace.log Anonymous, 10/27/2019 11:19 AM
catchunks.log (24.3 KB) catchunks.log Anonymous, 10/27/2019 11:26 AM
nfd_real.log (996 KB) nfd_real.log Anonymous, 10/27/2019 08:28 PM
real_trace.pcapng (1.38 MB) real_trace.pcapng Anonymous, 10/27/2019 08:28 PM
collectStats.txt (3.79 KB) collectStats.txt John DeHart, 10/29/2019 02:40 PM
sysctl.txt (2.12 KB) sysctl.txt John DeHart, 10/29/2019 02:55 PM
collectStats.Nov132019.txt (3.73 KB) collectStats.Nov132019.txt John DeHart, 11/13/2019 10:19 AM

Related issues 1 (0 open, 1 closed)

Related to NFD - Feature #4005: NDNLPv2 fragmentation instead of IP fragmentation on UDP tunnels (Closed) - Eric Newberry

Actions #1

Updated by Davide Pesavento over 4 years ago

  • Tracker changed from Task to Bug
  • Subject changed from "IP Fragmentation causes catchunks to time out indefinitely" to "IP fragmentation causes ndncatchunks to time out indefinitely"
  • Start date deleted (10/26/2019)

Looking at the NFD logs, the Data packet seems to disappear somewhere between the second Forwarder and the first Forwarder. That is, NFD2 sends the Data packet out on the UDP face, but it never arrives at NFD1 (at least not according to the debug log).

Can you describe the topology? I don't know what NFD1/2 refers to.

Actions #2

Updated by Anonymous over 4 years ago

Consumer -- NFD1 -- NFD2 -- Producer

The two NFDs are connected via a UDP tunnel.
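
For completeness, a sketch of how such a tunnel and route could be set up on NFD1 with nfdc, pointing towards NFD2 (the address 192.0.2.2 and the prefix /2 are placeholders matching the putchunks example above):

nfdc face create udp4://192.0.2.2
nfdc route add prefix /2 nexthop udp4://192.0.2.2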

Actions #3

Updated by Davide Pesavento over 4 years ago

Ok, well, then I suppose the Data packet "disappears" because one or more fragments of the original UDP datagram are lost. Not sure why retransmissions aren't working though, maybe too much retx suppression in the strategy?

A packet trace in pcap format, together with TRACE or DEBUG logs from both NFDs, would help a lot in understanding what's going on here.
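
For anyone reproducing this: the DEBUG logs can be enabled in nfd.conf (restart NFD afterwards):

log
{
  default_level DEBUG
}

and the trace captured with tcpdump, assuming the tunnel uses the default NDN port 6363:

sudo tcpdump -i $IF -w trace.pcap udp port 6363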

Actions #4

Updated by Anonymous over 4 years ago

Davide Pesavento wrote:

Ok, well, then I suppose the Data packet "disappears" because one or more fragments of the original UDP datagram are lost. Not sure why retransmissions aren't working though, maybe too much retx suppression in the strategy?

I don't think so. How likely is it that a fragment gets lost 50 times in a row?
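
(A back-of-the-envelope check, assuming independent 5% loss per fragment and 4 fragments per Data packet: a single attempt fails with probability 1 - 0.95^4, about 18.5%, so 50 consecutive failures would have probability around 0.185^50, i.e. roughly 10^-37.)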

And the retransmissions are working: there are new Interests being sent out.

I checked the strategy, and the retransmissions are being forwarded, not suppressed.

A packet trace in pcap format, together with TRACE or DEBUG logs from both NFDs, would help a lot in understanding what's going on here.

Okay, let me get those.

Actions #5

Updated by Anonymous over 4 years ago

  • File measurements.zip added

Here are the two NFD logs + pcap trace.

Actions #6

Updated by Anonymous over 4 years ago

Some more data on the chunk size:

By default the chunk size is 4400 bytes, or 4746 bytes including NDN headers, which then takes up 4 packets, 4891 bytes including IP headers.

Changing the putchunks size to 1100 gives a UDP payload of 1447 bytes (with NDN headers), which fits comfortably into an Ethernet frame.
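
Roughly, assuming a 1500-byte Ethernet MTU and standard 20-byte IPv4 / 8-byte UDP headers, that works out as follows (the exact NDN overhead depends on the name and signature sizes):

4746 B NDN packet + 8 B UDP header = 4754 B of IP payload
ceil(4754 / 1480) = 4 IP fragments (each 1500 B frame carries at most 1480 B of payload after the IP header)
1447 B + 8 B + 20 B = 1475 B <= 1500 B for the 1100-byte chunks, so no fragmentation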

Actions #8

Updated by Anonymous over 4 years ago

Btw, the problem doesn't seem to happen when using a TCP tunnel. I assume because TCP does the fragmentation for you?

Actions #9

Updated by Davide Pesavento over 4 years ago

  • File deleted (measurements.zip)
Actions #10

Updated by Davide Pesavento over 4 years ago

Klaus Schneider wrote:

Btw, the problem doesn't seem to happen when using a TCP tunnel. I assume because TCP does the fragmentation for you?

TCP avoids IP fragmentation altogether by producing segments smaller than the PMTU.

Actions #11

Updated by Teng Liang over 4 years ago

Don't we have NDNLP fragmentation support?

Actions #12

Updated by Anonymous over 4 years ago

Teng Liang wrote:

Don't we have NDNLP fragmentation support?

No idea.

Maybe that's what's causing the issue?

Actions #13

Updated by Anonymous over 4 years ago

Anyway, for now I suggest reducing the default chunk size of putchunks: https://gerrit.named-data.net/c/ndn-tools/+/5780

Feel free to review.

Actions #14

Updated by Davide Pesavento over 4 years ago

Klaus Schneider wrote:

Teng Liang wrote:

Don't we have NDNLP fragmentation support?

No idea.

Maybe that's what's causing the issue?

I don't see any NDNLP in the packet trace, so no, it's completely unrelated.

Actions #16

Updated by Davide Pesavento over 4 years ago

Klaus Schneider wrote:

Davide Pesavento wrote:

Ok, well, then I suppose the Data packet "disappears" because one or more fragments of the original UDP datagram are lost. Not sure why retransmissions aren't working though, maybe too much retx suppression in the strategy?

I don't think so. How likely is it that a fragment gets lost 50 times in a row?

I was just speculating, given that I didn't have the logs.

Looking at the logs and packet trace, some Data packets are indeed not received by NFD because one or more fragments are lost in transit, so reassembly times out and the kernel discards the whole packet. However, there are some Data packets that are reassembled correctly by the host (all fragments are received) but still do not show up in NFD, for unknown reasons.

Actions #17

Updated by Anonymous over 4 years ago

Verbose output for ndncatchunks.

Actions #18

Updated by Anonymous over 4 years ago

Another note: The problem only happens when there is packet loss on the channel. But it's being triggered by loss as small as 1% each way.

Actions #19

Updated by Anonymous over 4 years ago

We still need to test whether this happens on a real Wi-Fi router as well.

The current results are all from tc netem + VirtualBox.

Actions #20

Updated by Davide Pesavento over 4 years ago

I can't explain this behavior. At some point NFD1 stops receiving anything at all from the UDP socket, even though the host does continue to receive Data packets (and most are reassembled correctly).

Actions #21

Updated by Anonymous over 4 years ago

Just ran it on real hardware: two laptops + Wi-Fi router.

It's the same issue.

  • chunk size=4400 -> indefinite retransmissions
  • chunk size=1100 -> works without any issue

The traces were both taken at the consumer laptop.

Actions #22

Updated by Anonymous over 4 years ago

Another reproduction of this issue, from one of our lab servers -> Hobo Router -> Ivoosh Server.

klaus@postino:~/ndn-tools$ ./build/bin/ndncatchunks -f -v /ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/50.m4s > /dev/null 
RTT estimator parameters:
    Alpha = 0.125
    Beta = 0.25
    K = 8
    Initial RTO = 1000 milliseconds
    Min RTO = 200 milliseconds
    Max RTO = 60000 milliseconds
    Backoff multiplier = 2
Pipeline parameters:
    Request fresh content = yes
    Interest lifetime = 4000 milliseconds
    Max retries on timeout or Nack = 15
    Initial congestion window size = 2
    Initial slow start threshold = 1.79769e+308
    Additive increase step = 1
    Multiplicative decrease factor = 0.5
    RTO check interval = 10 milliseconds
    React to congestion marks = yes
    Conservative window adaptation = yes
    Resetting window to ssthresh upon loss event
    Cubic beta = 0.7
    Fast convergence = no
Data: Name: /ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/50.m4s/32=metadata/%FD%00%00%01n%16%01%CC2/%00%00
MetaInfo: ContentType: 0, FreshnessPeriod: 10 milliseconds
Content: (size: 75)
Signature: (type: SignatureSha256WithRsa, value_length: 256)

Discovered Data version: %FD%01
Requesting segment #0
Requesting segment #1
Retransmitting segment #1
Retransmitting segment #0
Retransmitting segment #1
# of retries for segment #1 is 2
Retransmitting segment #0
# of retries for segment #0 is 2
Retransmitting segment #1
# of retries for segment #1 is 3
Retransmitting segment #0
[...]
# of retries for segment #0 is 15
Retransmitting segment #1
Retransmitting segment #0
ERROR: Fetching terminated but no final segment number has been found
klaus@postino:~/ndn-tools$ 

Packet size is 8135 bytes. Moreover, when getting the Data from the cache, everything works fine. Not sure if there's anything magical about the cache, or if the problem is just triggered by the first packet loss.

Actions #23

Updated by Anonymous over 4 years ago

Correction: the testbed case is actually a different issue, caused by using the "-f" (MustBeFresh) option. The Data arrives but enters the "onDataUnsolicited" pipeline:

1572379348.856885 DEBUG: [nfd.Forwarder] onIncomingInterest in=(269,0) interest=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00
1572379348.856945 DEBUG: [nfd.Forwarder] onContentStoreMiss interest=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00
1572379348.856972 DEBUG: [nfd.Forwarder] onOutgoingInterest out=(267,0) interest=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00
1572379348.857039 DEBUG: [nfd.BestRouteStrategy2] /ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00?MustBeFresh&Nonce=200820388 from=(269,0) retransmit-retry-to=(267,0)
1572379348.860519 DEBUG: [nfd.Forwarder] onIncomingData in=(267,0) data=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00
1572379348.860556 DEBUG: [nfd.Forwarder] onDataUnsolicited in=(267,0) data=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00 decision=drop
1572379349.865327 DEBUG: [nfd.Forwarder] onIncomingInterest in=(269,0) interest=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00
1572379349.865368 DEBUG: [nfd.Forwarder] onContentStoreMiss interest=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00
1572379349.865406 DEBUG: [nfd.Forwarder] onOutgoingInterest out=(267,0) interest=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00
1572379349.865449 DEBUG: [nfd.BestRouteStrategy2] /ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00?MustBeFresh&Nonce=3677101228 from=(269,0) retransmit-retry-to=(267,0)
1572379349.868973 DEBUG: [nfd.Forwarder] onIncomingData in=(267,0) data=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00
1572379349.869007 DEBUG: [nfd.Forwarder] onDataUnsolicited in=(267,0) data=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00 decision=drop
1572379350.874029 DEBUG: [nfd.Forwarder] onIncomingInterest in=(269,0) interest=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00
1572379350.874074 DEBUG: [nfd.Forwarder] onContentStoreMiss interest=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00
1572379350.874114 DEBUG: [nfd.Forwarder] onOutgoingInterest out=(267,0) interest=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00
1572379350.874161 DEBUG: [nfd.BestRouteStrategy2] /ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00?MustBeFresh&Nonce=3012824239 from=(269,0) retransmit-retry-to=(267,0)
1572379350.877275 DEBUG: [nfd.Forwarder] onIncomingData in=(267,0) data=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00
1572379350.877308 DEBUG: [nfd.Forwarder] onDataUnsolicited in=(267,0) data=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00 decision=drop
1572379351.882853 DEBUG: [nfd.Forwarder] onIncomingInterest in=(269,0) interest=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00
1572379351.882896 DEBUG: [nfd.Forwarder] onContentStoreMiss interest=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00
1572379351.882926 DEBUG: [nfd.Forwarder] onOutgoingInterest out=(267,0) interest=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00
1572379351.882958 DEBUG: [nfd.BestRouteStrategy2] /ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00?MustBeFresh&Nonce=936762843 from=(269,0) retransmit-retry-to=(267,0)
1572379351.886581 DEBUG: [nfd.Forwarder] onIncomingData in=(267,0) data=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00
1572379351.886617 DEBUG: [nfd.Forwarder] onDataUnsolicited in=(267,0) data=/ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/82.m4s/%FD%02/%00%00 decision=drop
Actions #24

Updated by Anonymous over 4 years ago

Okay, I figured out the original issue: it's due to an insufficient Linux buffer size for reassembling the packets.

The kernel keeps all IP fragments in a buffer, and if too many packets are lost, the buffer clogs up and all reassemblies fail.

It can be fixed at the end-host by increasing the buffer size (orig. value is 262144):

sudo sysctl net.ipv4.ipfrag_high_thresh=26214400
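
To make this persist across reboots, one option (just a sketch; the file name under /etc/sysctl.d is arbitrary) is:

echo 'net.ipv4.ipfrag_high_thresh = 26214400' | sudo tee /etc/sysctl.d/90-ndn-ipfrag.conf
sudo sysctl --system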
Actions #25

Updated by Anonymous over 4 years ago

Might also be useful to reduce the maximum time these fragments stay in the buffer (default is 30 seconds) to the Interest lifetime:

net.ipv4.ipfrag_time = 4
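
The equivalent one-off command (adjust the value to the Interest lifetime actually in use):

sudo sysctl net.ipv4.ipfrag_time=4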
Actions #26

Updated by Anonymous over 4 years ago

The earlier problem with "/build/bin/ndncatchunks -f -v /ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/50.m4s" is because the server application does not set a freshness value.

To quote Davide:

There is no freshness period, which means the data is automatically stale -> a MustBeFresh interest cannot be satisfied -> data is treated as unsolicited
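
A consumer-side workaround (assuming the content itself is immutable, as video segments typically are) is simply to drop the -f flag so the Interests don't carry MustBeFresh:

./build/bin/ndncatchunks -v /ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/50.m4s > /dev/null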

Actions #27

Updated by Junxiao Shi over 4 years ago

The earlier problem with "/build/bin/ndncatchunks -f -v /ndn/web/video/ndn_and_vehicular_network/hls/h264_1080p/50.m4s" is because the server application does not set a freshness value.

Blame ivoosh for this. I entered the bug last week: https://github.com/chavoosh/ndn-mongo-fileserver/issues/1

There is no freshness period, which means the data is automatically stale -> a MustBeFresh interest cannot be satisfied -> data is treated as unsolicited

I hate this rule as much as you do. https://twitter.com/yoursunny/status/1187239989849001985


It can be fixed at the end-host by increasing the buffer size (orig. value is 262144):

sudo sysctl net.ipv4.ipfrag_high_thresh=26214400

Shall we recommend that NDNOPS apply this setting on testbed nodes, given that some end hosts are connecting via TCP?

Actions #28

Updated by Anonymous over 4 years ago

Shall we recommend that NDNOPS apply this setting on testbed nodes, given that some end hosts are connecting via TCP?

For TCP there won't be any difference, since it doesn't need IP reassembly.

I would say: yes, apply it on all testbed routers.

Actions #29

Updated by Anonymous over 4 years ago

P.S. I asked John to get some statistics about reassembly failures for all testbed nodes. Will post them here once I get the data.

Actions #30

Updated by Davide Pesavento over 4 years ago

Klaus Schneider wrote:

Shall we recommend that NDNOPS apply this setting on testbed nodes, given that some end hosts are connecting via TCP?

For TCP there won't be any difference, since it doesn't need IP reassembly.

It doesn't matter how end hosts are connected. All (or almost all) testbed routers use UDP between them, so they all must reassemble IP packets, and will run into this issue if there's packet loss on the link.

Actions #31

Updated by John DeHart over 4 years ago

Here are some stats from NDN Testbed nodes, where I ran 'netstat -s | grep reass' on each accessible node.

Actions #32

Updated by Anonymous over 4 years ago

Looking at these numbers, I'd say there's definitely something wrong at UCLA, LIP6, and probably ANYANG.

Actions #33

Updated by John DeHart over 4 years ago

And here is the current state of net.ipv4.ipfrag_high_thresh on each testbed node.
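
For anyone checking their own node, the current value can be read with:

sysctl net.ipv4.ipfrag_high_thresh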

Actions #34

Updated by John DeHart over 4 years ago

I have updated all accessible testbed nodes to have
net.ipv4.ipfrag_time = 10
net.ipv4.ipfrag_high_thresh = 26214400

Actions #35

Updated by Beichuan Zhang over 4 years ago

Given that UDP tunnels are frequently used, NFD could check this setting at startup and print a warning if it is at the default value.

An alternative solution is to enable NDNLP for UDP and make sure NFD only sends out packets within MTU limit to avoid IP fragmentation/reassembly.

Actions #36

Updated by Davide Pesavento over 4 years ago

  • Status changed from New to Rejected

Beichuan Zhang wrote:

An alternative solution is to enable NDNLP for UDP and make sure NFD only sends out packets within MTU limit to avoid IP fragmentation/reassembly.

At a recent NDN call, we decided that this is the approach to implement on the testbed. And this was in fact one of the main reasons we added support in NFD for limiting the MTU per face, in order to trigger NDNLP fragmentation (see #4005).
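
For reference, a sketch of what that looks like in practice, assuming an nfdc version that supports the per-face mtu option (1452 is just an example value comfortably below the 1500-byte Ethernet MTU):

nfdc face create udp4://192.0.2.2 mtu 1452

With the MTU set on the face, NFD fragments oversized packets at the NDNLP layer instead of leaving it to IP.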

So I'm going to close this issue since the original problem and its root cause are outside NFD's control.

Actions #37

Updated by Davide Pesavento over 4 years ago

  • Related to Feature #4005: NDNLPv2 fragmentation instead of IP fragmentation on UDP tunnels added
Actions #38

Updated by John DeHart over 4 years ago

Here are the reassembly stats from testbed nodes today.
