Feature #1672: UnixSeqPacketTransport
Status: open
Added by Davide Pesavento over 10 years ago. Updated about 5 years ago.
Description
Implement UnixSeqPacketTransport for use with LinkService.
The UnixSeqPacketTransport is a subclass of Transport that communicates with a local application over a UNIX SOCK_SEQPACKET socket.
The main advantage of using a SEQPACKET socket is that it preserves message boundaries, which allows us to get rid of the buffering inside the face and all the copying it involves.
UnixSeqPacketTransport is always local.
UnixSeqPacketTransport is always on-demand, because it is the application that connects to NFD. There is no persistent or permanent UnixSeqPacketTransport.
The same kind of face must be provided by the ndn-cxx library to applications.
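For illustration, a minimal sketch of what a client-side exchange over such a socket could look like. This is a blocking POSIX client, not NFD's actual Transport code; the socket path and the placeholder packet bytes are assumptions, and the real transport would use NFD's asynchronous I/O machinery.

// Sketch only: a blocking SOCK_SEQPACKET client.
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

int main()
{
  // Hypothetical socket path; NFD's actual Unix socket path is configurable.
  const char* path = "/run/nfd-seqpacket.sock";

  int fd = socket(AF_UNIX, SOCK_SEQPACKET, 0);
  if (fd < 0) { perror("socket"); return 1; }

  sockaddr_un addr{};
  addr.sun_family = AF_UNIX;
  std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
  if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
    perror("connect"); close(fd); return 1;
  }

  // One send() carries one whole TLV packet; the kernel preserves its boundary.
  std::vector<uint8_t> pkt = {0x05, 0x01, 0x00}; // placeholder bytes, not a real Interest
  send(fd, pkt.data(), pkt.size(), 0);

  // One recv() returns exactly one packet: no framing/buffering layer is needed.
  uint8_t buf[8800]; // MAX_NDN_PACKET_SIZE
  ssize_t n = recv(fd, buf, sizeof(buf), 0);
  if (n > 0)
    std::printf("received one whole packet of %zd octets\n", n);

  close(fd);
  return 0;
}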
Does it still have a concept of connection?
Alex Afanasyev wrote:
Does it still have a concept of connection?
Yes, SOCK_SEQPACKET is connection-based, just like SOCK_STREAM.
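A minimal sketch of the NFD-side listener under the same assumptions: because SOCK_SEQPACKET is connection-oriented, the bind/listen/accept sequence is exactly the one used for SOCK_STREAM, and only the socket type flag differs. makeListener() is a hypothetical helper; error handling is omitted.

// Sketch only: connection-oriented setup, identical to SOCK_STREAM
// except for the socket type flag.
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstring>

int makeListener(const char* path)
{
  int fd = socket(AF_UNIX, SOCK_SEQPACKET, 0); // the only difference from SOCK_STREAM
  sockaddr_un addr{};
  addr.sun_family = AF_UNIX;
  std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
  unlink(path); // remove a stale socket file, if any
  bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
  listen(fd, SOMAXCONN);
  return fd; // each accept() on this fd yields one local, on-demand connection
}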
- Start date deleted (06/17/2014)
- Estimated time set to 6.00 h
SOCK_SEQPACKET is similar to SOCK_DGRAM.
I heard CCNx chooses SOCK_STREAM over SOCK_DGRAM because SOCK_STREAM is actually faster, even though the application needs to copy data.
I don't know whether "fast" means "higher throughput" (when there are many messages) or "lower delay" (when there are only a few messages).
Junxiao Shi wrote:
SOCK_SEQPACKET is similar to SOCK_DGRAM.
Not really. See http://pubs.opengroup.org/onlinepubs/009695399/functions/xsh_chap02_10.html :
"The SOCK_SEQPACKET socket type is similar to the SOCK_STREAM type, and is also connection-oriented. The only difference between these types is that record boundaries are maintained using the SOCK_SEQPACKET type."
So it's more similar to SOCK_STREAM than to SOCK_DGRAM.
I don't know whether "fast" means "higher throughput" (when there are many messages) or "lower delay" (when there are only a few messages).
I guess the former. Under heavy traffic composed mostly of Interests or small Data packets (i.e. when every packet is much smaller than 8800 octets), a stream socket might perform better because a single read() call can fill the internal buffer with several packets, which can then be parsed one after the other with just memory copies, while a seqpacket socket requires a read() call for each packet (but no additional copies). Since the system call overhead is higher than the cost of a memcpy(), this scenario is unfavorable to seqpacket sockets. In other scenarios I believe seqpacket will perform better.
In any case, a performance comparison should be done before taking a decision.
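To make the trade-off concrete, here is a hedged sketch of the two receive loops being compared. parseOnePacket() is a hypothetical stand-in for the face's TLV parser (returning the octets consumed by one complete packet, or 0 if the buffer holds only a partial one); it is only declared here, not implemented.

// Sketch only: the two receive patterns under comparison.
#include <sys/socket.h>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Hypothetical TLV parser supplied elsewhere.
size_t parseOnePacket(const uint8_t* data, size_t len);

void streamLoop(int fd)
{
  uint8_t buf[65536];
  size_t have = 0;
  for (;;) {
    // One syscall may deliver many small packets at once...
    ssize_t n = recv(fd, buf + have, sizeof(buf) - have, 0);
    if (n <= 0) break;
    have += static_cast<size_t>(n);
    size_t off = 0, used;
    while (off < have && (used = parseOnePacket(buf + off, have - off)) > 0)
      off += used;
    // ...but separating them in user space needs the extra copying.
    std::memmove(buf, buf + off, have - off);
    have -= off;
  }
}

void seqpacketLoop(int fd)
{
  uint8_t buf[8800]; // every packet fits; boundaries come from the kernel
  for (;;) {
    // One syscall per packet: no buffering or copying, but more syscalls
    // when the traffic consists of many small packets.
    ssize_t n = recv(fd, buf, sizeof(buf), 0);
    if (n <= 0) break;
    parseOnePacket(buf, static_cast<size_t>(n));
  }
}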
The 20140619 conference call decided that the benefit of SOCK_SEQPACKET should be proven by a benchmark.
This feature is unnecessary unless performance profiling shows that socket operations are a bottleneck.
Junxiao Shi wrote:
The 20140619 conference call decided that the benefit of SOCK_SEQPACKET should be proven by a benchmark.
I totally agree.
This feature is unnecessary unless performance profiling shows that socket operations are a bottleneck.
Probably it will never be a bottleneck. But SEQPACKET could result in lower CPU utilization.
This feature is unnecessary unless performance profiling shows that socket operations are a bottleneck.
Probably it will never be a bottleneck. But SEQPACKET could result in lower CPU utilization.
CPU time is a scarce resource on small devices such as the Raspberry Pi, but most of it is spent on signing and table operations.
After those are optimized, socket operations could become a bottleneck.
- Priority changed from Normal to Low
- Tracker changed from Task to Feature
- Subject changed from Implement a Unix face using SOCK_SEQPACKET to UnixSeqPacketTransport
- Description updated (diff)
- Target version set to Unsupported
I'm updating the description to fit the Face=LinkService+Transport structure (#3088). note-6 is still applicable.
- Target version deleted (Unsupported)