Feature #3217
Avoid memory copy when creating lp::Packet from Fragment

Added by Alex Afanasyev over 8 years ago. Updated about 6 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
Base
Target version:
-
Start date:
Due date:
% Done:

0%

Estimated time:

Description

Adding a fragment into an NDNLP packet copies the memory block.
This is a performance bottleneck, as identified in the #3567-42 benchmark report.

We should avoid this copy operation and use copy-on-write buffer semantics where possible.


Related issues (1 open, 0 closed)

Blocked by ndn-cxx - Feature #3216: Copy-on-write buffer abstraction and refactor Block (New)

Actions #1

Updated by Alex Afanasyev over 8 years ago

  • Blocked by Feature #3216: Copy-on-write buffer abstraction and refactor Block added
Actions #2

Updated by Junxiao Shi over 8 years ago

Is there a benchmark to support the claim of this copying of memory block being a bottleneck?

Actions #3

Updated by Anonymous over 8 years ago

I think @susmit shannigrahi shared a study he did at one point. However, it was related to our scientific networking work with huge packets (> 8800 bytes).

Actions #4

Updated by Junxiao Shi almost 8 years ago

  • Subject changed from Reduce memory copy operations in NDNLP to Avoid memory copy when creating lp::Packet from Fragment
  • Description updated (diff)
  • Assignee set to Junxiao Shi

Is there a benchmark to support the claim of this copying of memory block being a bottleneck?

#3567-42 has the benchmark.

I'll investigate a solution.
My solution would target the common case where the Fragment is a complete network layer packet (the Block API requires consecutive memory, so there's no easy solution when the packet is sliced), and it would most likely not be backwards compatible (because lp::Packet would need a reference to the Buffer, not just iterators).

Actions #5

Updated by Junxiao Shi almost 8 years ago

  • Assignee deleted (Junxiao Shi)

After some reading, I realize that there's no easy solution for this issue.

My idea was: when a network layer packet is being sent as one fragment, other NDNLPv2 headers are prepended to the underlying Buffer of this network layer packet.

However, there's no safe way to achieve this goal:

  • Block::getBuffer returns a constant Buffer. We cannot mess with this Buffer, because if a network layer packet is being sent out of multiple faces with different NDNLPv2 headers, the modifications would conflict.
  • A mutable Buffer can be obtained by invoking WireEncodable::wireEncode(Encoder&). However, in the common case that the network layer packet has not been modified since received (this is true for most Data), this operation already incurs a memory copy.

A simple copy-on-write buffer that duplicates the entire buffer upon write is also ineffective.

When NFD receives an LpPacket, the headroom before the network layer packet contains the incoming NDNLPv2 headers.
When NFD wants to prepend some other NDNLPv2 headers before sending the network layer packet, the buffer would be duplicated, even if the headroom of the original buffer is no longer needed.

To solve this problem, the buffer must keep track of which portions are in use.
When the incoming LpPacket is destructed but the network layer packet is retained, the buffer knows the headroom is no longer used, so an outgoing LpPacket can safely reuse the headroom.

An improvement is to allow non-consecutive memory in the buffer, so that when NFD sends the network layer packet to multiple faces, each outgoing LpPacket only needs allocation of its NDNLPv2 headers, but the network layer packet isn't copied.
However, this would break Buffer::get (or require a full copy when this API is used).

Actions #6

Updated by Davide Pesavento almost 8 years ago

Junxiao Shi wrote:

An improvement is to allow non-consecutive memory in the buffer, so that when NFD sends the network layer packet to multiple faces, each outgoing LpPacket only needs allocation of its NDNLPv2 headers, but the network layer packet isn't copied.

FWIW, I would support going in this direction. I already suggested the use of scatter/gather I/O (aka vectored I/O) throughout the buffer management and packet handling layers in the past. Boost.Asio natively supports vectored reads and writes from/to sockets.

However, this would break Buffer::get (or require a full copy when this API is used).

Who uses this API? Who really needs to have it?

Ideally this API shouldn't exist at all. Consumers should use iterators (or other vectored APIs), so that they don't have to make assumptions on the buffer memory layout. Otherwise, copying into a linear buffer is acceptable as long as get() is not used on performance-critical code paths (e.g. tests are fine).

Actions #7

Updated by Davide Pesavento about 7 years ago

  • Target version changed from v0.5 to v0.6
Actions #8

Updated by Junxiao Shi over 6 years ago

  • Tracker changed from Task to Feature

I have a new solution.

class LpPacket
{
public:
  /** \brief Encode NDNLP headers and outer Type-Length
   *  \param encoder Encoder or Estimator to prepend the wire encoding
   *  \param fragmentLength TLV-LENGTH of Fragment element
   *  \pre !has<FragmentField>()
   *
   *  This function allows encoding an LpPacket without copying the payload into Fragment field.
   *  The output wire encoding contains the outer Type-Length for LpPacket, headers, and Type-Length for the Fragment element,
   *  but excludes TLV-VALUE of the Fragment element.
   *  This encoding is intended to be transmitted together with an encoded Interest or Data packet via scatter/gather I/O.
   *
   *  If this LpPacket has no header fields, the generated encoding would be empty.
   *
   *  This function cannot be used to encode an IDLE packet. Passing zero as fragmentLength creates a zero-length Fragment element.
   */
  template<encoding::Tag TAG>
  size_t
  encodeHeaders(EncodingImpl<TAG>& encoder, size_t fragmentLength) const;
};

This solution requires modifying Face::Impl and GenericLinkService (and all the way down to Transport) to realize the benefits. It does not depend on #3216.

Actions #9

Updated by Davide Pesavento about 6 years ago

  • Target version deleted (v0.6)