Bug #2317
NDNLP totalLength exceeds MTU causing IP fragmentation
Status: Closed · 100% done
Description
I have a 1 MB Data object. I want to fragment the Data into multiple segments of 9000 bytes and send them over a tunnel. On the sending side, I slice the Data into multiple fragments in daemon/face/datagram-face.hpp:
static const size_t MTU = 9000;
unique_ptr<ndnlp::Slicer> m_slicer;
m_slicer.reset(new ndnlp::Slicer(MTU));
ndnlp::PacketArray pa = m_slicer->slice(payload);
for (const auto& packet : *pa) {
  m_socket->async_send(boost::asio::buffer(packet.wire(), packet.size()),
                       bind(&DatagramFace::handleSend, this, _1, _2, packet));
}
I expect the slicer to take the payload and create fragments of 9000 bytes, including all headers. Instead, it takes MTU-sized payloads and appends the link-layer header to them. This makes each fragment larger than 9000 bytes and causes IP fragmentation. In daemon/face/ndnlp-slicer.cpp, in the function Slicer::encodeFragment, I added
NFD_LOG_WARN("***total fragment size " << totalLength);
Output: ***total fragment size 9024
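From the log output, the 24-octet gap between the fragment and the MTU is the NDNLP header. A minimal sketch of the arithmetic, using the sizes from the report:

```cpp
#include <cstddef>

// The Slicer slices the payload into MTU-sized pieces and then prepends the
// NDNLP header, so each fragment overshoots the MTU by the header size.
constexpr std::size_t kMtu = 9000;              // MTU passed to the Slicer
constexpr std::size_t kObservedFragment = 9024; // from the NFD_LOG_WARN output
constexpr std::size_t kNdnlpHeader = kObservedFragment - kMtu; // 24 octets of NDNLP header
```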
Updated by Junxiao Shi almost 10 years ago
- Assignee set to Junxiao Shi
- Priority changed from Normal to Low
- Target version set to v0.3
The mtu passed to the Slicer constructor shall cover the NDNLP header, but does not cover the IP header or the UDP header.
If you are running NDNLP over a UDP tunnel, the overhead of the IP and UDP headers must be deducted before constructing the Slicer.
I'll update the Doxygen to reflect this.
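A minimal sketch of that deduction, assuming an IPv4/UDP tunnel with no IP options (the header sizes are standard protocol values, not from this tracker):

```cpp
#include <cstddef>

// Deduct lower-layer overhead from the link MTU before constructing the
// Slicer, so each NDNLP fragment (header + payload) fits in one datagram.
constexpr std::size_t kLinkMtu = 9000;   // tunnel MTU from the report
constexpr std::size_t kIpv4Header = 20;  // IPv4 header without options
constexpr std::size_t kUdpHeader = 8;    // UDP header
constexpr std::size_t kSlicerMtu = kLinkMtu - kIpv4Header - kUdpHeader; // 8972
```

With such a constant, the reporter's code would construct the slicer as `m_slicer.reset(new ndnlp::Slicer(kSlicerMtu));`, keeping each UDP datagram within the 9000-byte tunnel MTU.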
Another problem is that Slicer::encodeFragment shouldn't create a fragment (NDNLP header plus payload) that exceeds mtu.
That problem is caused by an incorrect assumption in Slicer::estimateOverhead: the overhead is computed assuming that FragIndex and FragCount are encoded as fixed-length fields, but they have since been changed to variable-length encoding. This can cause a difference of up to 14 octets.
However, the Bug reported here shouldn't be caused by that problem, because line 121 has an assertion that the fragment size (with the NDNLP header but without the IP or UDP headers) does not exceed the MTU.
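The 14-octet figure follows from NDN-TLV's nonNegativeInteger rules: the value is encoded in 1, 2, 4, or 8 octets depending on its magnitude, while the old estimate charged a fixed 8 octets per field. A sketch of the arithmetic (the helper is illustrative, not NFD code):

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative helper: length of an NDN-TLV nonNegativeInteger VALUE,
// which is 1, 2, 4, or 8 octets depending on the value's magnitude.
std::size_t sizeOfNonNegativeInteger(uint64_t value)
{
  if (value <= 0xFF) return 1;
  if (value <= 0xFFFF) return 2;
  if (value <= 0xFFFFFFFF) return 4;
  return 8;
}

// The fixed-length assumption charges 8 octets per field; a small FragIndex
// or FragCount needs only 1 octet, so each field can be overestimated by
// 7 octets, and the two fields together by up to 14 octets.
std::size_t maxOverheadDifference()
{
  const std::size_t kFixedLen = 8;
  const std::size_t kMinVarLen = sizeOfNonNegativeInteger(0); // 1 octet
  return 2 * (kFixedLen - kMinVarLen); // 14
}
```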
Updated by Davide Pesavento almost 10 years ago
Agreed, this is not an NDNLP bug. You should adjust the MTU passed to the Slicer constructor according to the lower-layer protocols you're using. Maybe the constructor argument should be renamed to maxFragmentSize to avoid confusion.
The overhead estimation bug should be fixed regardless.
Updated by Junxiao Shi almost 10 years ago
- Status changed from New to In Progress
Updated by Junxiao Shi almost 10 years ago
- Status changed from In Progress to Code review
- % Done changed from 0 to 100
Updated by Junxiao Shi almost 10 years ago
- Status changed from Code review to Closed