Bug #1475
Closed
UDP multicast sends on wrong NIC
Added by Junxiao Shi over 10 years ago.
Updated over 10 years ago.
Description
A UDP multicast face for a NIC should send packets on the chosen NIC only, instead of on all NICs.
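For context, a minimal sketch (not NFD code) of how a sender's outgoing multicast interface is normally pinned with Boost.Asio, which NFD uses; 192.0.2.1 stands in for the chosen NIC's local address, and 224.0.23.170:56363 is NFD's default UDP multicast group:

#include <boost/asio.hpp>

namespace ip = boost::asio::ip;

int main()
{
  boost::asio::io_service io;
  ip::udp::socket sock(io, ip::udp::v4());

  // Without this option the kernel picks the egress NIC from the routing
  // table, so packets may leave on a different NIC than the face intended.
  sock.set_option(ip::multicast::outbound_interface(
      ip::address_v4::from_string("192.0.2.1")));

  sock.send_to(boost::asio::buffer("hello", 5),
               ip::udp::endpoint(ip::address_v4::from_string("224.0.23.170"), 56363));
  return 0;
}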
Topology
 /----\
A      B
 \----/
- Each host has two NICs.
- A/eth0 and B/eth0 are on one VLAN.
- A/eth1 and B/eth1 are on another VLAN.
Steps to reproduce:
- Start NFD on A and B.
- Execute nfd-status -f on A, and find out the FaceId of the UDP multicast faces.
- Invoke nfdc add-nexthop /Z $faceid on A, where $faceid is the FaceId for the UDP multicast face on eth1.
- Start ndnpingserver /Z on B.
- Execute ndnping -c 30 /Z on A.
- Invoke nfd-status -f on A and B.
Expected:
- On host A, UDP multicast face on eth0 has 0 incoming Interests, 0 incoming Datas, 0 outgoing Interests, 0 outgoing Datas.
- On host A, UDP multicast face on eth1 has 0 incoming Interests, 30 incoming Datas, 30 outgoing Interests, 0 outgoing Datas.
- On host B, UDP multicast face on eth0 has 0 incoming Interests, 0 incoming Datas, 0 outgoing Interests, 0 outgoing Datas.
- On host B, UDP multicast face on eth1 has 30 incoming Interests, 0 incoming Datas, 0 outgoing Interests, 30 outgoing Datas.
Actual:
- On host A, UDP multicast face on eth0 has 0 incoming Interests, 30 incoming Datas, 0 outgoing Interests, 0 outgoing Datas.
- On host A, UDP multicast face on eth1 has 0 incoming Interests, 30 incoming Datas, 30 outgoing Interests, 0 outgoing Datas.
- On host B, UDP multicast face on eth0 has 30 incoming Interests, 0 incoming Datas, 0 outgoing Interests, 0 outgoing Datas.
- On host B, UDP multicast face on eth1 has 30 incoming Interests, 0 incoming Datas, 0 outgoing Interests, 30 outgoing Datas.
From what I can see, it seems to be a problem on the receiver side (A) when receiving Data packets. What do the Interest counters show? Does B receive Interests from both NICs?
- Description updated (diff)
I executed the steps again, and I can confirm the problem exists for both Interests and Datas.
I don't know whether the problem is on the sender side or the receiver side.
Both runs were executed on ONL; eth1 refers to the experiment NIC, eth0 to the control NIC.
It seems that the join_group option doesn't work as I expected: even if the local endpoint (the IP address of the network interface) is specified, the socket receives all incoming packets for that multicast group, no matter which interface they arrived on. The join seems merely to enable a specific device to receive multicast packets; the kernel then dispatches each packet to every socket that joined the group (this is my guess; I haven't found any documentation that completely clarifies this point).
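To illustrate the behavior described above, a minimal receiver sketch, assuming Boost.Asio (which NFD uses); 192.0.2.1 stands in for eth1's local address:

#include <boost/asio.hpp>

namespace ip = boost::asio::ip;

int main()
{
  boost::asio::io_service io;
  ip::udp::socket sock(io);
  sock.open(ip::udp::v4());
  sock.set_option(ip::udp::socket::reuse_address(true));
  sock.bind(ip::udp::endpoint(ip::udp::v4(), 56363));

  // The second argument only selects which NIC performs the group join;
  // it does not filter received datagrams by arrival interface.
  sock.set_option(ip::multicast::join_group(
      ip::address_v4::from_string("224.0.23.170"),
      ip::address_v4::from_string("192.0.2.1")));

  // On Linux this socket also receives group traffic that arrived on
  // other NICs, as long as some socket on the host joined the group there.
  char buf[8800];
  ip::udp::endpoint sender;
  sock.receive_from(boost::asio::buffer(buf), sender);
  return 0;
}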
So far, the only solution I've found is to use the SO_BINDTODEVICE option. This solves the issue, but it works only on Linux and it requires the name of the interface.
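A sketch of that workaround, applied to the socket's raw descriptor (the function name is illustrative); note that setting this option may require elevated privileges:

#include <sys/socket.h>
#include <cstring>

// Restrict 'fd' to the NIC named 'ifname' (e.g. "eth1"). Linux-only.
bool
bindToDevice(int fd, const char* ifname)
{
  return ::setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE,
                      ifname, std::strlen(ifname)) == 0;
}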
I'll keep looking for a cross-platform solution.
I didn't find any better solution than the SO_BINDTODEVICE option. But I found out that this problem seems to occur only on Linux systems; I tried on macOS and the multicast face works properly. Therefore I will simply add the SO_BINDTODEVICE option inside an #if defined(__linux__) guard. But to do that I need the interface name. Since the face manager should already have this information, it could pass the name of the interface to the udp-factory when it creates the multicast face. A sketch of how that could look follows.
- Tracker changed from Task to Bug
- Status changed from New to Code review
- % Done changed from 0 to 100
- Status changed from Code review to Closed