
Task #1278

EthernetFace without promiscuous mode

Added by Junxiao Shi over 5 years ago. Updated almost 5 years ago.

Status:
Closed
Priority:
Low
Category:
Faces
Target version:
Start date:
Due date:
% Done:

100%

Estimated time:
6.00 h

Description

Optimize EthernetFace so that it does not put NIC into promiscuous mode when possible.

This can be achieved with the PACKET_ADD_MEMBERSHIP socket option on Linux and the SIOCADDMULTI ioctl on OSX.
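As an illustrative sketch (not NFD code): on Linux, joining the face's Ethernet multicast group means filling a struct packet_mreq and passing it to setsockopt with PACKET_ADD_MEMBERSHIP. The numeric constants below come from the kernel headers (Python's socket module does not export them), and the multicast address is NFD's default group as discussed later in this thread; both should be double-checked:

```python
import socket
import struct

# Constants from <linux/if_packet.h>; not exported by Python's socket
# module, so they are hard-coded here (verify against your headers).
SOL_PACKET = 263
PACKET_ADD_MEMBERSHIP = 1
PACKET_MR_MULTICAST = 0

# NFD's default Ethernet multicast group (assumed; check nfd.conf).
NDN_MCAST_GROUP = bytes.fromhex("01005e0017aa")

def build_packet_mreq(ifindex: int, mac: bytes) -> bytes:
    # struct packet_mreq { int mr_ifindex; unsigned short mr_type;
    #                      unsigned short mr_alen;
    #                      unsigned char mr_address[8]; };
    return struct.pack("iHH8s", ifindex, PACKET_MR_MULTICAST, len(mac), mac)

def join_ether_multicast(ifname: str, mac: bytes = NDN_MCAST_GROUP) -> None:
    # AF_PACKET sockets require CAP_NET_RAW, so this part needs privileges.
    with socket.socket(socket.AF_PACKET, socket.SOCK_RAW) as s:
        mreq = build_packet_mreq(socket.if_nametoindex(ifname), mac)
        s.setsockopt(SOL_PACKET, PACKET_ADD_MEMBERSHIP, mreq)
```

After this call the NIC's hardware filter accepts frames for that group, so promiscuous mode is unnecessary.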


Related issues

Follows NFD - Task #1191: EthernetFace implementation (Closed)

History

#1 Updated by Davide Pesavento over 5 years ago

Doesn't this procedure require the local network interface to have an IPv4 address?

#2 Updated by Junxiao Shi over 5 years ago

  • Description updated (diff)

This optimization applies only if the Ethernet multicast group is in the IPv4 multicast-mapped range and the NIC has IPv4 enabled. Otherwise, the libpcap session should still use promiscuous mode.
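For reference, the IPv4-mapped range mentioned here is 01:00:5E:00:00:00 - 01:00:5E:7F:FF:FF: RFC 1112 maps the low 23 bits of an IPv4 group address under the 01:00:5E prefix. A small sketch of that range check, plus recovery of a representative IPv4 group for a mapped MAC (picking the 239/8 representative is my own illustrative choice, not anything prescribed in this thread):

```python
import ipaddress

def is_ipv4_mapped(mac: bytes) -> bool:
    """True if mac is in the IPv4-mapped Ethernet multicast block
    01:00:5e:00:00:00 - 01:00:5e:7f:ff:ff (RFC 1112)."""
    return len(mac) == 6 and mac[:3] == b"\x01\x00\x5e" and mac[3] < 0x80

def ipv4_group_for(mac: bytes) -> ipaddress.IPv4Address:
    """Return one IPv4 group address that maps onto this MAC.  The mapping
    discards the top 9 bits of the group address, so 32 IPv4 groups share
    each MAC; this picks the administratively-scoped 239.x.y.z one."""
    assert is_ipv4_mapped(mac)
    return ipaddress.IPv4Address(
        (239 << 24) | (mac[3] << 16) | (mac[4] << 8) | mac[5])
```

Joining such a recovered IPv4 group via normal IP multicast APIs is what programs the NIC filter without promiscuous mode.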

#3 Updated by Davide Pesavento over 5 years ago

The current implementation does NOT use promiscuous mode. Instead it generates a BPF program and installs it on the capturing interface. This is much more efficient since incoming packets that do not match the filter are immediately discarded, without having to copy them from kernel to userspace. Only NDN packets (i.e. with ethertype == 0x8624) destined to the multicast address for which the face was configured are allowed to pass the filter and are delivered to NFD. I believe this is the most efficient way of doing it in any case.

The only potential issue we have is that a switch doing IGMP snooping might not forward a multicast packet if we don't explicitly join the multicast group. This has nothing to do with promiscuous mode though.
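The filter described above corresponds roughly to the pcap expression `ether dst 01:00:5e:00:17:aa and ether proto 0x8624` (address and ethertype as given in this thread). A Python sketch of the same predicate applied to a raw Ethernet frame, to make the matching explicit:

```python
import struct

NDN_ETHERTYPE = 0x8624  # ethertype for NDN frames, per this thread

def frame_matches(frame: bytes, group: bytes) -> bool:
    """Mimic the kernel BPF filter: accept only frames addressed to the
    face's multicast group and carrying the NDN ethertype."""
    if len(frame) < 14:          # shorter than an Ethernet header
        return False
    dst, _src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return dst == group and ethertype == NDN_ETHERTYPE
```

Frames rejected by the in-kernel equivalent of this check are dropped before any kernel-to-userspace copy, which is the efficiency argument made here.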

#4 Updated by Davide Pesavento over 5 years ago

  • Status changed from New to Feedback

#5 Updated by Alex Afanasyev over 5 years ago

I think these are orthogonal issues. My understanding is that promiscuous mode forces the driver to receive and process packets that are not destined to the node, whereas the BPF filter only filters packets that the node has already received. If the current approach works (e.g., the driver is smart enough to process packets destined to the multicast address), then it is perfect.

After a brief reading of http://tools.ietf.org/html/rfc4541 I think we don't even need to do anything to fix IGMP snooping (see section 2.1.2), as it specifically applies to IPv4 (not even sure if it applies to IPv6 multicast). So, I believe we don't really need to do anything.

#6 Updated by Junxiao Shi over 5 years ago

  • Status changed from Feedback to Rejected

Davide is right. The code does not call pcap_set_promisc, so the NIC is not in promiscuous mode.

If multicast works without tcpdump or similar app running on the side, this is fine.

#7 Updated by Davide Pesavento over 5 years ago

Alex Afanasyev wrote:

After a brief reading of http://tools.ietf.org/html/rfc4541 I think we don't even need to do anything to fix IGMP snooping (see section 2.1.2), as it specifically applies to IPv4 (not even sure if it applies to IPv6 multicast). So, I believe we don't really need to do anything.

I believe you're correct. The RFC says (§2.1.2):

4) All non-IPv4 multicast packets should continue to be flooded out
to all remaining ports in the forwarding state as per normal IEEE
bridging operations.

Therefore even snooping switches will flood multicast Ethernet frames on all ports, since NDN packets have their own ethertype.

#8 Updated by Davide Pesavento over 5 years ago

Junxiao Shi wrote:

If multicast works without tcpdump or similar app running on the side, this is fine.

I didn't say that multicast works. In fact, the EthernetFace is currently unable to receive multicast frames, because the kernel/driver don't know we are interested in them.

What I'm saying is that promiscuous mode might not be such a big performance hit, thanks to the BPF filter installed in the kernel, which greatly reduces the number of packets copied to userspace (on the other hand, the kernel still has to run all packets through the BPF virtual machine, which is not free).
I also think that abusing multicast MAC addresses in this way is a hack and shouldn't be done, unless in conjunction with real IP-level multicast, which is not the case here.

Given that NDN multicast frames are flooded by switches anyway (i.e. the bandwidth saving on the network is zero), and that receiving multicast frames requires either promisc mode (which can be costly) or UDP/IP-level hacks (which in my opinion defeat the whole purpose of a purely L2 face), I propose to get rid of multicast on EthernetFace and always use broadcast frames (ff:ff:ff:ff:ff:ff), which do not require promisc mode.

#9 Updated by Junxiao Shi over 5 years ago

  • Status changed from Rejected to New

Reopen this task because multicast does not work.

BPF is in the kernel. Its cost is lower than a userspace filter, but it still consumes CPU.

"Abusing" IP multicast offloads the filtering to the NIC.

I disagree with using broadcast frames, because not all hosts on a broadcast domain are NDN hosts.

#10 Updated by Davide Pesavento over 5 years ago

Junxiao Shi wrote:

BPF is in the kernel. Its cost is lower than a userspace filter, but it still consumes CPU.

"Abusing" IP multicast offloads the filtering to the NIC.

Correct.
Using broadcast frames, however, doesn't put the interface in promisc mode, so the number of received packets will be significantly smaller.

I disagree with using broadcast frames, because not all hosts on a broadcast domain are NDN hosts.

Our "multicast" frames will be flooded by switches anyway, i.e. they're effectively treated as broadcast, and will reach all attached hosts. So this point is moot.

#11 Updated by Davide Pesavento over 5 years ago

Another (untested) solution could be: call ioctl(SIOCSIFFLAGS) to set the IFF_ALLMULTI flag on the network device. This tells the NIC to accept all multicast packets and hand them to the kernel, without fiddling with IP-level stuff.

#12 Updated by Junxiao Shi over 5 years ago

Our "multicast" frames will be flooded by switches anyway, i.e. they're effectively treated as broadcast, and will reach all attached hosts. So this point is moot.

The difference is: broadcast frames will reach the kernel of all hosts, wasting CPU on non-NDN hosts. Multicast frames will reach the NIC of all hosts, but go to the kernel on NDN hosts only.

Another (untested) solution could be: call ioctl(SIOCSIFFLAGS) to set the IFF_ALLMULTI flag on the network device. This tells the NIC to accept all multicast packets and hand them to the kernel, without fiddling with IP-level stuff.

This receives all multicast frames, including those used by IP multicast in groups not joined by the host. NDNLP only needs one specific multicast group.

#13 Updated by Davide Pesavento over 5 years ago

Junxiao Shi wrote:

This receives all multicast frames, including those used by IP multicast in groups not joined by the host. NDNLP only needs one specific multicast group.

Yes, I know, and it looks like a very reasonable and acceptable compromise to me... Extraneous (non-NDN) multicast frames will continue to be filtered by BPF.

#14 Updated by Davide Pesavento over 5 years ago

Yet another way: we can use ioctl(... SIOCADDMULTI ...) or, even better, setsockopt(... SOL_PACKET, PACKET_ADD_MEMBERSHIP ...) on Linux.

#15 Updated by Junxiao Shi over 5 years ago

SIOCADDMULTI looks nice. Please write a small program and give it a try.

#16 Updated by Davide Pesavento over 5 years ago

  • Status changed from New to In Progress
  • % Done changed from 0 to 50

#17 Updated by Junxiao Shi over 5 years ago

@Davide, which solution are you using?

If you are using SIOCADDMULTI (and confirm it works), please update "Task Description" to reflect that.

#18 Updated by Davide Pesavento over 5 years ago

  • Target version changed from v0.1 to v0.2

#19 Updated by Junxiao Shi over 5 years ago

  • Target version changed from v0.2 to v0.1

Task cannot be moved to another version without being discussed in a conference call or in the nfd-dev mailing list.

#20 Updated by Junxiao Shi over 5 years ago

  • Status changed from In Progress to New
  • Assignee deleted (Davide Pesavento)
  • Target version changed from v0.1 to v0.2
  • % Done changed from 50 to 0

20140326 conference call agrees to defer this task to Version 2.

#21 Updated by Davide Pesavento over 5 years ago

  • Assignee set to Davide Pesavento

#22 Updated by Junxiao Shi over 5 years ago

  • Target version changed from v0.2 to v0.3

20140609 conference call decides to defer this Task.

#23 Updated by Junxiao Shi almost 5 years ago

  • Priority changed from Normal to Low

I asked @Davide about the plan for this Task. The reply is:

I think I already tried it months ago when we were discussing a solution. And I don't remember having problems so I guess it works.
The best thing in my opinion is using the PACKET_ADD_MEMBERSHIP socket option on Linux and the SIOCADDMULTI ioctl on OSX.

Regarding timeline:

I might be able to take another look at it next month, but I cannot guarantee anything. We're all quite busy until early December.

I think this Task should be retained in v0.3 for now, but as Low priority.

#24 Updated by Davide Pesavento almost 5 years ago

  • Status changed from New to In Progress
  • % Done changed from 0 to 40

#25 Updated by Junxiao Shi almost 5 years ago

  • Description updated (diff)

#26 Updated by Davide Pesavento almost 5 years ago

  • Status changed from In Progress to Code review
  • % Done changed from 40 to 60

#27 Updated by Davide Pesavento almost 5 years ago

  • Status changed from Code review to Closed
  • % Done changed from 60 to 100
