Feature #3571
FIB+PIT benchmark (Closed)
Description
Benchmark the performance of FIB and PIT.
Write a program that instantiates a NameTree in isolation (no forwarding or other modules), and creates a FIB and a PIT on top of this NameTree.
In this program, execute a loop that invokes a series of operations on FIB and PIT, and measure the total execution duration.
The series of operations should capture the reality of forwarding.
Purpose:
- This benchmark program can be executed on different hardware to understand the current performance of FIB and PIT.
- This benchmark program can be executed under a profiling system to find the bottlenecks of the FIB and PIT implementation. Note: profiling itself is not part of this issue.
- A proposed performance improvement of FIB and PIT can be quantitatively measured by executing this benchmark program with and without the proposed improvement.
Updated by Junxiao Shi over 8 years ago
- Assignee set to Yumin Xia
ContentStore has a benchmark program similar to the FIB+PIT benchmark to be created in this issue.
It's recommended to follow the same structure, and place the program in the NFD/tests/other directory.
There are several different series of FIB and PIT operations used in forwarding.
We should start with the most common case: a consumer expresses an Interest that is neither aggregated nor satisfied by ContentStore; the Interest is forwarded to a producer, and brings back a Data.
This translates to the following series of operations (a rough code sketch follows the list):
- (incoming Interest pipeline) insert a new PIT entry, using Interest as argument
- (ContentStore miss pipeline) FIB longest prefix lookup, using PIT entry as argument, returning an entry with one nexthop
- (incoming Data pipeline) query PIT, using Data as argument, returning a PIT entry
- (Interest finalize pipeline) delete the PIT entry, using PIT entry as argument (no lookup)
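For orientation, the four steps might map onto NFD's table API roughly as sketched below, ignoring for the moment the asynchronous scheduling described next. Header paths and method names follow NFD's tables, but exact signatures differ between NFD versions, so treat this only as a sketch:

#include "table/fib.hpp"
#include "table/pit.hpp"
#include <ndn-cxx/data.hpp>
#include <ndn-cxx/interest.hpp>

void
exchangeOnce(nfd::Fib& fib, nfd::Pit& pit,
             const ndn::Interest& interest, const ndn::Data& data)
{
  // step 1 (incoming Interest pipeline): insert a PIT entry for the Interest
  auto pitEntry = pit.insert(interest).first;

  // step 2 (ContentStore miss pipeline): FIB longest prefix match on the PIT entry,
  // expected to return an entry with exactly one nexthop
  const auto& fibEntry = fib.findLongestPrefixMatch(*pitEntry);
  (void)fibEntry;

  // step 3 (incoming Data pipeline): find the PIT entries matching the Data
  auto matches = pit.findAllDataMatches(data);

  // step 4 (Interest finalize pipeline): erase the matched PIT entry, no extra lookup
  for (const auto& entry : matches) {
    pit.erase(entry.get()); // Pit::erase signature varies across NFD versions
  }
}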
It's important to capture the asynchronous nature of forwarding.
The operations above should not be executed within the same loop body.
Instead, steps 1 and 2 are executed together, step 3 is executed gap3 loops later, and step 4 is executed gap4 loops after that.
In other words, if steps 1 and 2 are executed when i==1000, step 3 should be executed when i==1000+gap3, and step 4 should be executed when i==1000+gap3+gap4.
The two parameters, gap3 and gap4, determine the size of the PIT.
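A minimal sketch of that loop structure is shown below. The helper functions doSteps1And2/doStep3/doStep4 are hypothetical wrappers around the table calls sketched earlier, and the queues merely remember which exchange each delayed step belongs to; this is not the structure of the merged code, only an illustration of the gap3/gap4 scheduling:

#include <cstddef>
#include <deque>

// Hypothetical helpers wrapping the table operations sketched earlier.
void doSteps1And2(std::size_t i); // insert PIT entry, FIB longest prefix match
void doStep3(std::size_t i);      // Data arrives: find matching PIT entry
void doStep4(std::size_t i);      // erase the PIT entry

void
runBenchmark(std::size_t nExchanges, std::size_t gap3, std::size_t gap4)
{
  std::deque<std::size_t> awaitingStep3; // exchanges whose Data has not "arrived" yet
  std::deque<std::size_t> awaitingStep4; // exchanges whose PIT entry is not yet erased

  for (std::size_t i = 0; i < nExchanges + gap3 + gap4; ++i) {
    if (i < nExchanges) {
      doSteps1And2(i);               // steps 1+2 at iteration i
      awaitingStep3.push_back(i);
    }
    if (i >= gap3 && !awaitingStep3.empty()) {
      std::size_t j = awaitingStep3.front();
      awaitingStep3.pop_front();
      doStep3(j);                    // step 3 at iteration j + gap3
      awaitingStep4.push_back(j);
    }
    if (i >= gap3 + gap4 && !awaitingStep4.empty()) {
      std::size_t j = awaitingStep4.front();
      awaitingStep4.pop_front();
      doStep4(j);                    // step 4 at iteration j + gap3 + gap4
    }
  }
  // At steady state roughly gap3 + gap4 PIT entries are alive at any time.
}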
The length of names, measured in number of components, is also significant, because it affects the number of iterations within the NameTree longest prefix match procedure.
There are three parameters here: the length of FIB prefixes, the length of Interest names, and the length of Data names.
In the series of operations above, these three lengths should be in increasing order.
These and other necessary parameters can be left as compile-time constants in the code.
The user of the benchmark program is expected to modify the code to set the desired parameters.
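As an illustration only (these constant names and the number-based uniqueness scheme are hypothetical, not taken from the merged code), the name-length parameters and a name builder could look like this:

#include <ndn-cxx/name.hpp>
#include <cstddef>

// Hypothetical compile-time parameters; the user edits these and recompiles.
constexpr std::size_t FIB_PREFIX_LENGTH    = 2; // components in each FIB prefix
constexpr std::size_t INTEREST_NAME_LENGTH = 3; // components in each Interest name
constexpr std::size_t DATA_NAME_LENGTH     = 4; // components in each Data name
static_assert(FIB_PREFIX_LENGTH < INTEREST_NAME_LENGTH &&
              INTEREST_NAME_LENGTH < DATA_NAME_LENGTH,
              "the three lengths must be in increasing order");

// Build a name of the requested length for exchange i. The first component
// encodes i, so every exchange uses unique names, and shorter names are
// strict prefixes of longer ones (FIB prefix < Interest name < Data name).
ndn::Name
makeName(std::size_t nComponents, std::size_t i)
{
  ndn::Name name;
  name.appendNumber(i);
  while (name.size() < nComponents) {
    name.appendNumber(name.size()); // deterministic padding components
  }
  return name;
}

Under this hypothetical scheme, the FIB would be pre-populated with makeName(FIB_PREFIX_LENGTH, i) prefixes, step 1 would use makeName(INTEREST_NAME_LENGTH, i) for the Interest, and step 3 would use makeName(DATA_NAME_LENGTH, i) for the Data, so the Data satisfies exactly one pending Interest.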
Updated by Yumin Xia over 8 years ago
Junxiao Shi wrote:
We should start with the most common case: a consumer expresses an Interest that is neither aggregated nor satisfied by ContentStore; the Interest is forwarded to a producer, and brings back a Data.
Does 'Interest is not aggregated' mean that no Interest with the same name has been sent out to the producer earlier (and is still pending, i.e. not yet timed out)?
Junxiao Shi wrote:
step 4 is executed gap4 loops later
Does gap4 mean the length of the PIT straggler timer? Why not delete the PIT entry as soon as the Interest is satisfied?
Junxiao Shi wrote:
There are three parameters here: the length of FIB prefixes, the length of Interest names, and the length of Data names.
In the series of operations above, these three lengths should be in increasing order.
Does 'increasing order' mean that the names should be generated like this?
- FIB prefix: '/ndn/test'
- Interest Name: '/ndn/test/namefoo'
- Data name: '/ndn/test/namefoo/123123'
Junxiao Shi wrote:
This benchmark program can be executed under a profiling system.
Is profiling possible under Boost.Test?
Updated by Junxiao Shi over 8 years ago
Answer to note-2:
- Yes. Simply speaking, every name is unique.
- Yes.
- No.
gprof does profiling. It can profile any C++ program, including a program linked with the Boost.Test library.
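For example, one common gprof workflow (not prescribed by this issue) is to compile and link with the -pg flag, run build/pit-fib-benchmark once so that it writes gmon.out on exit, and then run gprof on the binary together with gmon.out to obtain the flat profile and call graph.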
Updated by Yumin Xia over 8 years ago
I just pushed a draft patchset; it's awaiting review.
Updated by Junxiao Shi over 8 years ago
- Status changed from In Progress to Resolved
- % Done changed from 0 to 100
Code is merged in http://gerrit.named-data.net/2799.
I'll run the benchmark on a few machines before closing this issue.
Updated by Junxiao Shi over 8 years ago
I have executed the benchmark on a Ubuntu 14.04 virtual machine with Intel Xeon CPU E5-2680 0 @ 2.70GHz.
vagrant@m0213:~/NFD$ time build/pit-fib-benchmark
Running 1 test case...
*** No errors detected
real 0m25.139s
user 0m21.752s
sys 0m2.898s
The benchmark simulates FIB and PIT operations on one million Interest-Data exchanges.
This means the FIB and PIT alone can sustain about 39779 Interest-Data exchanges per second (1,000,000 exchanges / 25.139 s of wall-clock time).
This is below the performance goal of 100Kpps, so profiling is needed to find the bottleneck.
Updated by Junxiao Shi over 8 years ago
- Blocks Task #3610: FIB+PIT profiling added
Updated by Junxiao Shi over 8 years ago
- Status changed from Closed to Feedback
I noticed a mistake in the benchmark code: it would trigger an assertion failure when executed in debug mode, and otherwise causes an inaccurate FIB prefix length.
https://gerrit.named-data.net/3021 corrects this mistake.