Task #2524
Initialize repository
Status: Closed (100% done)
Description
Initialize ndn-tools git repository.
- add README
- create directory structure
- add build scripts
Updated by Junxiao Shi almost 10 years ago
- Blocks Task #2525: Import ndnping and peek/poke added
Updated by Alex Afanasyev almost 10 years ago
Create the initial commit with just a .gitignore file. Gerrit has problems with rewriting of the initial commit.
Updated by Junxiao Shi almost 10 years ago
- Status changed from New to In Progress
Updated by Junxiao Shi almost 10 years ago
- % Done changed from 0 to 10
http://gerrit.named-data.net/1775 initial commit, "the one you'll always remember".
.gitignore is copied from ndn-cxx.
Updated by Junxiao Shi almost 10 years ago
I can think of a few different directory structures. Please comment on which one is better.
one directory per tool with its own wscript
common.hpp
core/ (shared)
  algorithm.hpp
  algorithm.cpp
ping/
  ndnping.cpp
  ndnpingserver.cpp
  wscript
  README.md
  manpages/
    ndnping.rst
    ndnpingserver.rst
peek/
  ndnpeek.cpp
  ndnpoke.cpp
  wscript
  README.md
  manpages/
    ndnpeek.rst
    ndnpoke.rst
nfdc/
  main.cpp
  face.hpp
  face.cpp
  rib.hpp
  rib.cpp
  status.hpp
  status.cpp
  wscript
  README.md
  manpages/
    nfdc.rst
wscript
README.md
Note: 'nfdc' will continue to be in the NFD repository. Here it illustrates how a program requiring multiple translation units is arranged.
Benefits:
- Programs belonging to the same tool are kept together.
- Each tool has its own README to be displayed on GitHub.
- There's more flexibility to tweak wscript for each tool.
- Top-level wscript can be made to selectively enable/disable a tool, such as --disable-ping.
Drawbacks:
- Defining what is "closely related" would become tricky.
- Each tool is forced to have its own wscript.
one directory for all tools
common.hpp
core/ (shared)
  algorithm.hpp
  algorithm.cpp
tools/
  ndnping.cpp
  ndnpingserver.cpp
  ndnpeek.cpp
  ndnpoke.cpp
  nfdc/
    main.cpp
    face.hpp
    face.cpp
    rib.hpp
    rib.cpp
    status.hpp
    status.cpp
manpages/
  ndnping.rst
  ndnpingserver.rst
  ndnpeek.rst
  ndnpoke.rst
  nfdc.rst
wscript
README.md
Benefits:
- wscript is uniform.
- manpages are kept in one place.
Drawbacks:
- Relationship between tools is unclear.
- Top-level wscript can't easily disable a tool.
- There's no per-tool GitHub README.
Updated by Alex Afanasyev almost 10 years ago
I like the benefits of the first approach, but I'm kind of leaning toward the organization of the second :). What about a combination of both? Like:
common.hpp
core/ (shared)
  algorithm.hpp
  algorithm.cpp
tools/
  ndnping/
    README.md
    ndnping.cpp
  ndnpingserver/
    README.md
    ndnpingserver.cpp
  ndnpeek/
    README.md
    ndnpeek.cpp
  ndnpoke/
    README.md
    ndnpoke.cpp
  nfdc/
    wscript (optional)
    README.md
    main.cpp
    face.hpp
    face.cpp
    rib.hpp
    rib.cpp
    status.hpp
    status.cpp
manpages/
  ndnping.rst
  ndnpingserver.rst
  ndnpeek.rst
  ndnpoke.rst
  nfdc.rst
wscript
README.md
We can make sure that a wscript can optionally be put into a tool's directory and completely override the common functions defined in the top-level wscript.
Updated by Junxiao Shi almost 10 years ago
A problem in note-6 design is: it cannot preserve the relationship between multiple programs of the same tool.
For example, the ndnping and ndnpingserver programs belong to the same tool. They should live in the same directory for the following benefits:
- Users know that they are closely related, without referring to additional documents or guessing from directory names.
- One README.md covers both programs, which can talk about the protocol.
- Some code can be shared between multiple programs belonging to the same tool. It's inappropriate to place such code into core/ because it could be completely useless for other tools. One example of code shared between ndnping and ndnpingserver is the constant that defines the name component appended to the prefix ('ping').
Is there any difficulty in building manpages if we take the first design in note-5?
Updated by Alex Afanasyev almost 10 years ago
Junxiao Shi wrote:
A problem in note-5 design is: it cannot preserve the relationship between multiple programs of the same tool.
I guess you mean note-6? As I mentioned, we can make use of optional wscript in specific tool(s) folder. This way, if there are apps that are closely related, they can be put there without any problems. This would almost match first note-5 design minus one of the listed drawbacks.
Is there any difficulty in building manpages if we take the first design in note-5?
There is no difficulty. It's just that, from the users' perspective, it is better to put all documentation in one confined place (the same folder), instead of scattering it around the directory structure. Probably, the documentation structure should be extended to match the one in NFD/ndn-cxx:
docs/
  conf.py
  index.rst
  manpages.rst
  manpages/
    xxx.rst
    yyy.rst
Updated by Junxiao Shi over 9 years ago
So, the structure would be:
common.hpp
core/ (shared)
  algorithm.hpp
  algorithm.cpp
tools/
  ping/
    README.md
    ndnping.cpp
    ndnpingserver.cpp
    wscript
  peek/
    README.md
    ndnpeek.cpp
    ndnpoke.cpp
    wscript
  nfdc/
    README.md
    main.cpp
    face.hpp
    face.cpp
    rib.hpp
    rib.cpp
    status.hpp
    status.cpp
manpages/
  ndnping.rst
  ndnpingserver.rst
  ndnpeek.rst
  ndnpoke.rst
  nfdc.rst
wscript
README.md
What should the top-level wscript do when a subdirectory does not have a wscript?
- build all of dir/*.cpp into one program named dir (this allows nfdc/ to omit wscript)
- or, build each dir/file.cpp into a program named file (this allows ping/ and peek/ to omit wscript)
Updated by Alex Afanasyev over 9 years ago
What should the top-level wscript do when a subdirectory does not have a wscript?
- build all of dir/*.cpp into one program named dir (this allows nfdc/ to omit wscript)

I'll go with this approach.

- or, build each dir/file.cpp into a program named file (this allows ping/ and peek/ to omit wscript)
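The chosen default could be sketched in a top-level wscript roughly like this (a hypothetical waf sketch, not the actual ndn-tools build code; the directory scan and the 'core-objects' use-target are assumptions):

```python
# Hypothetical top-level wscript default rule: a tool directory with its
# own wscript is recursed into; otherwise all of its *.cpp files are
# built into one program named after the directory.
import os

def build(bld):
    for subdir in sorted(os.listdir('tools')):
        path = os.path.join('tools', subdir)
        if not os.path.isdir(path):
            continue
        if os.path.exists(os.path.join(path, 'wscript')):
            # the tool provides its own build rules
            bld.recurse(path)
        else:
            # default: build all of dir/*.cpp into one program named dir
            bld.program(target=subdir,
                        source=bld.path.ant_glob(path + '/*.cpp'),
                        use='core-objects')
```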
Updated by Junxiao Shi over 9 years ago
Another question is about the namespace of each tool, and the class name of the main class. My proposal is:
Each tool has its own namespace under ndn. For example:
  ndn::ping
  ndn::peek
  ndn::nfdc
The prefix ndn:: allows tools to use ndn-cxx types such as ndn::Interest without writing using or using namespace directives.
The main class of each program could be either Main, or match its binary name. For example:
ndn::ping::Ndnping
ndn::ping::Ndnpingserver
ndn::peek::Ndnpeek
ndn::peek::Ndnpoke
ndn::nfdc::Main
The main class must declare a static method that has the same signature as the main function.
The main function should call this static method, for example:
int
main(int argc, char** argv)
{
  return ndn::ping::Ndnping::main(argc, argv);
}
Updated by Alex Afanasyev over 9 years ago
I don't have objections. Only small clarifications: we're still going to use camel case for class names (NdnPing, NdnPingServer, NdnPeek, NdnPoke). File names (= actual application names) should generally follow our convention too (ndn-ping, ndn-ping-server, etc.), but are not required to do so if command names have historically been defined otherwise.
Updated by Junxiao Shi over 9 years ago
I also prefer NdnPing, NdnPingServer, NdnPeek, NdnPoke.
File names would be ndn-ping.cpp, ndn-ping-server.cpp, ndn-peek.cpp, ndn-poke.cpp according to rule 2.1.
UNIX convention is to not use dashes or other symbols in binary names, so compiled programs should be named ndnping, ndnpingserver, ndnpeek, ndnpoke.
This difference isn't a concern, because the correlation between ndn-ping.cpp and ndnping is expected to be easily understandable by an average developer.
Yes, ndn-tlv-peek and ndn-tlv-poke will be renamed to ndnpeek and ndnpoke.
This conflicts with the ndnx package, but it isn't deployed anymore.
Updated by Alex Afanasyev over 9 years ago
I don't mind renaming ndn-tlv-peek/poke, as the current names are confusing to me. We should simply keep aliases for a version or two, to ensure a smoother transition for users.
Updated by Junxiao Shi over 9 years ago
The next question is dependencies.
Most tools only depend on Boost and ndn-cxx.
Suppose I want to import ndndump, which has an additional dependency on libpcap-dev. There would be two problems:
- wscript needs to detect libpcap-dev. And, since each tool can be individually turned off, such detection shall be skipped when --without-ndndump is specified.
- The -lpcap linker flag is needed for ndndump, but is unnecessary for other tools.
Any ideas on how to achieve these?
Also, suppose --without-ndndump is not specified, but libpcap-dev is not found: shall the configure step continue with a warning (like ns-3), or abort with an error?
Updated by Alex Afanasyev over 9 years ago
The simplest way would be to just add a global dependency for everything. The linker will (should) take care of linking only the libraries that are actually needed. We could also make these dependencies optional and not compile ndndump when the dependency is not there, though I don't like this way and I suspect you also wouldn't like it.
A little more complex but more flexible option is to add custom wscripts into tools that require additional dependencies. With --without-&lt;tool&gt;, it would simply ignore the whole tool.
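The per-tool detection Alex describes could look roughly like this (a hypothetical waf sketch; the option name, the PCAP uselib store, and the 'core-objects' use-target are assumptions, not code from the repository):

```python
# Hypothetical per-tool wscript for a tool with an extra dependency
# (e.g. ndndump needing libpcap).

def options(opt):
    opt.add_option('--without-ndndump', action='store_true', default=False,
                   dest='without_ndndump', help='Do not build ndndump')

def configure(conf):
    if conf.options.without_ndndump:
        return  # skip libpcap detection entirely when the tool is disabled
    # detect libpcap; with mandatory=True, configure aborts if it is missing
    conf.check(lib='pcap', uselib_store='PCAP', mandatory=True)

def build(bld):
    if bld.env.LIB_PCAP:
        # use='... PCAP' applies -lpcap only to this tool,
        # leaving other tools' link lines untouched
        bld.program(target='ndndump',
                    source=bld.path.ant_glob('*.cpp'),
                    use='core-objects PCAP')
```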
Updated by Junxiao Shi over 9 years ago
- % Done changed from 10 to 40
http://gerrit.named-data.net/1848
I drafted the wscript for a few dummy programs, including:
- "nfdc": no custom wscript, single executable
- "ping": custom wscript
- "dump": custom wscript, extra dependency
--without-X isn't implemented yet.
Please review.
Updated by Junxiao Shi over 9 years ago
As discussed on Gerrit, wscript is now mandatory for each tool.
Next commits in this issue (no particular order):
- add documentation
  - README.md introduces what ndn-tools is
  - INSTALL.md describes how to build and install ndn-tools
  - README-dev.md explains the structure of the codebase, and how to develop a new tool
- implement --without-X option in wscript
- add Travis and Jenkins scripts
Some of these need to come after at least one actual tool is imported.
Updated by Junxiao Shi over 9 years ago
- % Done changed from 40 to 60
http://gerrit.named-data.net/1892 enables TravisCI and Jenkins.
Updated by Junxiao Shi over 9 years ago
Due to disputes about how dependency installation should work on TravisCI, http://gerrit.named-data.net/1892 now enables Jenkins only.
Once this is merged, code importing tasks can progress.
TravisCI will be enabled in a separate Task.
Updated by Junxiao Shi over 9 years ago
- Status changed from In Progress to Closed
- % Done changed from 60 to 100
Updated by Junxiao Shi over 9 years ago
The dependency installation script, .jenkins.d/00-deps.sh, is an improvement over 00-deps.sh in ChronoChat.
- Installation steps execute not only on Jenkins, but also on TravisCI.
  - benefit: the dependency list is kept in one place, instead of being duplicated in both .travis.yml and 00-deps.sh
  - drawback: dependency installation happens in the Travis 'script' step, instead of the 'install' step
"set -x" is moved to just before "brew" or "apt-get" invocation.
Installation commands are still printed, but the details in 'has' function are hidden.'has' function execution can produce a log like:
+ has OSX Boost1.48 Linux Ubuntu Ubuntu-12.04 Ubuntu-12.04-64bit Ubuntu-12.04-64bit-ua-maestro-31805 code-coverage + local p=OSX + shift + local x + for x in '"$@"' + [[ Boost1.48 == \O\S\X ]] + for x in '"$@"' + [[ Linux == \O\S\X ]] + for x in '"$@"' + [[ Ubuntu == \O\S\X ]] + for x in '"$@"' + [[ Ubuntu-12.04 == \O\S\X ]] + for x in '"$@"' + [[ Ubuntu-12.04-64bit == \O\S\X ]] + for x in '"$@"' + [[ Ubuntu-12.04-64bit-ua-maestro-31805 == \O\S\X ]] + for x in '"$@"' + [[ code-coverage == \O\S\X ]] + return 1
This kind of detail is unhelpful and creates noise.
It's sufficient to show only the apt-get or brew command lines to understand what's happening.
All packages are installed in the same "apt-get install" invocation.
This makes execution slightly faster.
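From the trace above, the 'has' function appears to check whether its first argument occurs among the remaining ones. A minimal reconstruction inferred from the trace (an assumption, not the actual .jenkins.d code) could be:

```shell
# Reconstruction of the 'has' helper inferred from the trace above
# (hypothetical; requires bash for 'local' and [[ ]]).
# Usage: has NEEDLE WORD... ; returns 0 if NEEDLE is among WORDs.
has() {
    local p=$1
    shift
    local x
    for x in "$@"; do
        [[ ${x} == ${p} ]] && return 0
    done
    return 1
}
```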
Updated by Davide Pesavento over 9 years ago
Junxiao Shi wrote:
This kind of detail is unhelpful and creates noise.
This is not noise. set -x is intended to be used for debugging, therefore seeing exactly how the shell is interpreting the commands that it's executing is extremely helpful. If you don't want this, then you're misusing set -x.
It's sufficient to show `apt-get` or `brew` command lines only for understanding what's happening.
Why do you need to see the command lines? They're practically static, there's no logic involved in executing those commands. So seeing the command line doesn't help in understanding what went wrong if something goes wrong. Tracing the execution during the previous if/then constructs is much more helpful.
If you just want some "pretty" output, set -x is not the right tool. Write a function echo_and_run() that echoes $@ just before running it.
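One possible shape for such a helper (a sketch of Davide's suggestion, not code from the repository):

```shell
# Print the command line, then execute it. The printed line mimics an
# interactive prompt; the exit status of the command is preserved.
echo_and_run() {
    echo "\$ $*"
    "$@"
}
```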
Updated by Junxiao Shi over 9 years ago
Reply to note-23:
I dislike set -x in general and prefer to explicitly echo important commands, but Alex insists on using it.