Feature #4279

Self-learning strategy

Added by Junxiao Shi over 6 years ago. Updated about 5 years ago.

Status: Closed
Priority: Normal
Assignee:
Category: Forwarding
Target version:
Start date: 09/27/2017
Due date:
% Done: 100%
Estimated time: 18.00 h (Total: 24.00 h)

Description

Implement NDN self-learning as a forwarding strategy.


Subtasks 4 (0 open, 4 closed)

Feature #4280: Prefix announcement for self-learning (Closed, Teng Liang)
Task #4305: Self-learning forwarding strategy: issues and design choices (Closed, Teng Liang, 09/27/2017)
Feature #4355: NDNLPv2: Discovery/NonDiscovery Interest (Closed, Teng Liang, 10/23/2017)
Task #4401: Unit tests for Self-learning Forwarding Strategy (Abandoned)

Related issues 3 (1 open, 2 closed)

Blocked by NFD - Feature #4290: Give strategy authority over Data (Closed, Teng Liang)
Blocked by NFD - Feature #4683: add RIB entry update with prefix announcement in self-learning (Closed, Junxiao Shi, 07/24/2018)
Blocks NFD - Feature #4281: Develop self-learning for broadcast and ad hoc wireless faces (New)

Actions #1

Updated by Junxiao Shi over 6 years ago

  • Tracker changed from Task to Feature
  • Estimated time changed from 6.00 h to 18.00 h

My publication On Broadcast-based Self-Learning in Named Data Networking is the foundation of this implementation.
The experiment code used in this publication is available in the https://bitbucket.org/yoursunny/sl-exp repository. The repository is provided for reproducibility purposes, and the code may not represent best engineering practices.

Actions #2

Updated by Junxiao Shi over 6 years ago

  • Blocked by Feature #4280: Prefix announcement for self-learning added
Actions #3

Updated by Junxiao Shi over 6 years ago

  • Blocked by Feature #4281: Develop self-learning for broadcast and ad hoc wireless faces added
Actions #4

Updated by Junxiao Shi over 6 years ago

Actions #5

Updated by Junxiao Shi over 6 years ago

  • Blocked by Feature #4283: Refactor Ethernet unicast communication added
Actions #6

Updated by Junxiao Shi over 6 years ago

  • Blocked by Feature #4355: NDNLPv2: Discovery/NonDiscovery Interest added
Actions #7

Updated by Davide Pesavento over 6 years ago

  • Blocked by Feature #4290: Give strategy authority over Data added
Actions #8

Updated by Davide Pesavento over 6 years ago

  • Related to Task #4305: Self-learning forwarding strategy: issues and design choices added
Actions #9

Updated by Teng Liang over 6 years ago

  • Assignee changed from Yanbiao Li to Teng Liang
Actions #10

Updated by Teng Liang over 6 years ago

Actions #11

Updated by Teng Liang over 6 years ago

  • Blocked by deleted (Feature #4283: Refactor Ethernet unicast communication)
Actions #12

Updated by Teng Liang almost 6 years ago

One big part of self-learning is to verify the Prefix Announcement (PA) attached to Data and to update the FIB through NFD-RIB. The design is to have a separate thread in the forwarding strategy, say "SL-RIB", to which the strategy dispatches the work with parameters, so that SL-RIB can verify the PA, create a command Interest, and send it to NFD-RIB to update the FIB.

Any comments on the design? The part I am uncertain about is how to create a separate thread and how to communicate with NFD-RIB in detail.

Actions #13

Updated by Davide Pesavento almost 6 years ago

Remember that strategies are supposed to be stateless. Moreover, there can be multiple instances of the same strategy at any given time. Are you going to create a thread for each instance?

Remind me why the thread doing this work cannot be the RIB thread itself..?

Actions #14

Updated by Teng Liang almost 6 years ago

Davide Pesavento wrote:

Remember that strategies are supposed to be stateless. Moreover, there can be multiple instances of the same strategy at any given time. Are you going to create a thread for each instance?

The thread will verify the PA and communicate with the RIB to update the FIB, and each instance needs one thread to handle this, right? Logically, when the self-learning strategy wants to deal with a PA, it invokes a thread to handle it. Whether the thread updates the FIB successfully or fails, the thread can manage itself correctly, such as terminating cleanly. I am not sure of the details yet. Does it sound feasible?

Remind me why the thread doing this work cannot be the RIB thread itself..?

One argument is to keep the RIB thread clean and simple; its job is only to manage the FIB. Other parties like NLSR and self-learning should use the RIB only to update the FIB. For example, NLSR does announcement verification and route calculation itself.

Actions #15

Updated by Davide Pesavento almost 6 years ago

Teng Liang wrote:

The thread will verify the PA and communicate with the RIB to update the FIB, and each instance needs one thread to handle this, right? Logically, when the self-learning strategy wants to deal with a PA, it invokes a thread to handle it. Whether the thread updates the FIB successfully or fails, the thread can manage itself correctly, such as terminating cleanly.

https://twitter.com/davidlohr/status/288786300067270656

I am not sure of the details yet. Does it sound feasible?

Of course it's feasible, but you're assuming that using threads is simpler than not using them. And you're also ignoring the performance cost.

Actions #16

Updated by Teng Liang almost 6 years ago

Davide Pesavento wrote:

Teng Liang wrote:

The thread will verify the PA and communicate with the RIB to update the FIB, and each instance needs one thread to handle this, right? Logically, when the self-learning strategy wants to deal with a PA, it invokes a thread to handle it. Whether the thread updates the FIB successfully or fails, the thread can manage itself correctly, such as terminating cleanly.

https://twitter.com/davidlohr/status/288786300067270656

I am not sure of the details yet. Does it sound feasible?

Of course it's feasible, but you're assuming that using threads is simpler than not using them. And you're also ignoring the performance cost.

I am assuming we have to use threads; it would be great if we could avoid them. We don't want PA verification and RIB updates to block forwarding, so we want to put this work on a different thread. I don't know whether using a thread would have a big performance cost. The more interesting question is whether we can avoid using threads and still achieve the goal (verifying the PA and updating the RIB without blocking forwarding). For the first version, we can ignore PA verification.

Actions #17

Updated by Teng Liang almost 6 years ago

As discussed in today's NFD call, the async work can run on the RIB thread using its io_service. Based on some reading, here is what I have in mind: in the strategy, self-learning posts to the RIB io_service, like this: ribIo->post(boost::bind(&SLRib::PAhandler, &SLRib_instance, PA)). In SLRib::PAhandler, it verifies the PA and invokes rib::beginApplyUpdate to update the FIB. ribIo is initialized in NFD/daemon/main.cpp; how can self-learning access it, by making it global? SLRib is a class implemented in NFD/daemon/fw/Self-learning-rib.cpp. Any comments?
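
For illustration only, a minimal sketch of that dispatch, using a lambda in place of boost::bind; SLRib, PAhandler, and the ribIo reference are hypothetical names taken from this note, not from any merged code:

// Sketch only: SLRib and PAhandler are hypothetical names from this note.
#include <memory>

#include <boost/asio/io_service.hpp>
#include <ndn-cxx/data.hpp>

class SLRib
{
public:
  void
  PAhandler(const ndn::Data& data)
  {
    // runs on the RIB thread: verify the prefix announcement carried in 'data',
    // then apply the corresponding RIB/FIB update (e.g. via rib::beginApplyUpdate)
  }
};

// called on the forwarding thread, from inside the self-learning strategy
void
dispatchPrefixAnnouncement(boost::asio::io_service& ribIo, SLRib& slRib, const ndn::Data& data)
{
  // copy the Data so it stays alive until the handler runs on the RIB thread
  auto dataCopy = std::make_shared<ndn::Data>(data);
  ribIo.post([&slRib, dataCopy] { slRib.PAhandler(*dataCopy); });
}

Whether ribIo is made global (as asked above) or handed to the strategy at construction is the open question; either way, nothing here blocks the forwarding thread.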

Actions #18

Updated by Davide Pesavento almost 6 years ago

  • Status changed from New to In Progress
Actions #19

Updated by Junxiao Shi over 5 years ago

There are complaints about a library bug in 4695,21, but it does not seem to be one. I have a minimal example that shows Boost.Asio works as expected:

// g++ -std=c++14 -o x x.cpp $(pkg-config --cflags --libs libndn-cxx)

#include <condition_variable>
#include <cstdio>
#include <mutex>

#include <boost/asio.hpp>
#include <boost/thread.hpp>

// per-thread io_service: getGlobalIo() returns the calling thread's instance
static boost::thread_specific_ptr<boost::asio::io_service> g_io;
// pointer to the io_service created on the RIB thread
static boost::asio::io_service* g_ribIo;

boost::asio::io_service&
getGlobalIo()
{
  if (g_io.get() == nullptr) {
    g_io.reset(new boost::asio::io_service());
  }
  return *g_io;
}

int
main()
{
  std::mutex m;
  std::condition_variable cv;

  // "RIB" thread: create its io_service, publish the pointer, then run the loop
  boost::thread ribThread([&] {
    {
      std::lock_guard<std::mutex> lock(m);
      g_ribIo = &getGlobalIo();
    }
    cv.notify_all();

    boost::asio::io_service::work work(*g_ribIo);
    g_ribIo->run();
  });

  {
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [&] { return g_ribIo != nullptr; });
  }

  // post a handler onto the RIB thread; there, getGlobalIo() returns that thread's io_service
  g_ribIo->post([&] {
    printf("inside global io = %p\n", &getGlobalIo());
    printf("inside rib io = %p\n", g_ribIo);
  });

  printf("outside global io = %p\n", &getGlobalIo());
  printf("outside rib io = %p\n", g_ribIo);

  return 0;
}
$ ./x
outside global io = 0x1690090
outside rib io = 0x7f6fb80008c0
inside global io = 0x7f6fb80008c0
inside rib io = 0x7f6fb80008c0
Actions #20

Updated by Teng Liang over 5 years ago

  • Blocked by Feature #4683: add RIB entry update with prefix announcement in self-learning added
Actions #21

Updated by Junxiao Shi over 5 years ago

  • Tags set to SelfLearning
Actions #22

Updated by Teng Liang over 5 years ago

The current strategy renews routes on each Data reception. An alternative design is to renew a route after every thousand Data packets received (or every 5 minutes, provided a Data packet has been received in that interval). Either way requires keeping a counter or timer on each FIB entry, and that information belongs in StrategyInfo. Therefore, this design would make the FIB entry derive from StrategyInfoHost.

Actions #23

Updated by Junxiao Shi over 5 years ago

this design would make the FIB entry derive from StrategyInfoHost.

Unnecessary. Just add a Measurements entry with the same name as the FIB entry; the ASF strategy is already doing that.
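
For illustration only, a rough sketch of that approach, written as a fragment that would sit inside a strategy implementation in the NFD source tree (the usual strategy headers are assumed to be included, and exact signatures may differ between NFD versions); RouteRenewalInfo and recordData are made-up names:

// Sketch only: keep the per-prefix renewal state from note #22 in a Measurements
// entry, ASF-style, instead of deriving the FIB entry from StrategyInfoHost.
class RouteRenewalInfo : public nfd::fw::StrategyInfo
{
public:
  static constexpr int
  getTypeId()
  {
    return 9001; // arbitrary; must not collide with other StrategyInfo type ids
  }

public:
  int nDataSinceRenewal = 0;
  ndn::time::steady_clock::TimePoint lastRenewal = ndn::time::steady_clock::now();
};

// called from the strategy as: recordData(this->getMeasurements(), fibEntry)
void
recordData(nfd::MeasurementsAccessor& measurements, const nfd::fib::Entry& fibEntry)
{
  // Measurements entry with the same name as the FIB entry
  nfd::measurements::Entry* me = measurements.get(fibEntry);
  if (me == nullptr) {
    return; // entry is outside this strategy's authority
  }
  RouteRenewalInfo* info = me->insertStrategyInfo<RouteRenewalInfo>().first;
  ++info->nDataSinceRenewal;
  // renew the route once the counter or the elapsed time crosses the chosen threshold
}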

Actions #24

Updated by Teng Liang over 5 years ago

The self-learning code has been updated with patch sets 23-27, according to the design doc. The code has been tested on NFD instances using ndn-tools.

Regarding unit tests: the unit tests for forwarding strategies use the topology tester, which creates forwarder instances and links as needed. However, self-learning requires the RIB io_service to exist on a RIB thread alongside the forwarder, which is not supported and would be complex to implement. Therefore, Alex and I agreed to merge the code without unit tests for now. Nevertheless, the forwarding strategy has been tested through integration tests.

Actions #25

Updated by Alex Afanasyev about 5 years ago

A point of reference: if I have a multicast face and a unicast face, the strategy will prefer the multicast face because its ID is smaller, and the second Interest is NACKed.

Example from logs:

Interest=/sl/test/ping/14835690534392738936?ndn.MaxSuffixComponents=1&ndn.MustBeFresh=true&ndn.Nonce=2728691870 from=265 to=259
1547158826.951094 DEBUG: [nfd.SelfLearningStrategy] broadcast discovery Interest=/sl/test/ping/14835690534392738936?ndn.MaxSuffixComponents=1&ndn.MustBeFresh=true&ndn.Nonce=2728691870 from=265 to=263
1547158826.953448 DEBUG: [nfd.SelfLearningStrategy] Nack for /sl/test/ping/14835690534392738936?ndn.MaxSuffixComponents=1&ndn.MustBeFresh=true&ndn.Nonce=2728691870 from=263: Duplicate

Maybe we can "broadcast" in reverse order of faces? This would prioritize faces that were created recently. Of course, there are no guarantees and it is not a long-term solution, but it would tend to prefer unicast over multicast.
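
For illustration only, a sketch of that tweak as a fragment inside the strategy (standard NFD strategy API names; the helper name is made up), walking the nexthop list back to front so more recently created faces, typically unicast ones, are tried first:

// Sketch only: send the discovery Interest over nexthops in reverse order.
void
forEachNexthopNewestFirst(const nfd::fib::Entry& fibEntry, const nfd::Face& inFace)
{
  const auto& nexthops = fibEntry.getNextHops();
  for (auto it = nexthops.rbegin(); it != nexthops.rend(); ++it) {
    nfd::Face& outFace = it->getFace();
    if (outFace.getId() == inFace.getId()) {
      continue; // never send the discovery Interest back out the incoming face
    }
    // send the discovery Interest via outFace here (e.g. Strategy::sendInterest)
  }
}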

Actions #26

Updated by Teng Liang about 5 years ago

Since version 1 of the self-learning forwarding strategy has been merged, should this issue and the related ones be closed?

Actions #27

Updated by Junxiao Shi about 5 years ago

  • Blocks Feature #4281: Develop self-learning for broadcast and ad hoc wireless faces added
Actions #28

Updated by Junxiao Shi about 5 years ago

  • Status changed from In Progress to Closed