How to Run Shadow-Tor on RIPPLE

As of this writing, Shadow/Shadow-Tor doesn't quite work on RIPPLE if one uses the default instructions. Just in case someone else wants to use Shadow for some experiments, here's a quick guide to save you some work in figuring these things out.

The first thing I would recommend is following the official guides for Shadow and Shadow-Tor on your lab machine, to get everything working on a local setup first. If it doesn't work there, it almost certainly won't work on RIPPLE either, so you should figure that out first. The documentation can fall out of date at times, so you may want to look through recent commits to see if the problem is addressed in there somewhere (note that the docs tell you to use the release branch instead of the main branch, but the main branch sometimes has fixes for known bugs that release doesn't). If you can't figure out what's wrong, file a bug report; Rob will typically respond relatively quickly.

Once you have that working, or if you just feel like skipping it, log into the RIPPLE machine you'll be using. See the main page on Computing Resources for more information on that.

Before we get started installing, just so we don't forget, there are a few things we should check: the maximum number of files you as a user can have open, the maximum number of files the system can have open, and the maximum number of memory maps a process can have. First, run


ulimit -n

to see how many files you can have open in a process. If the number is less than, say, 10000, you almost certainly want to contact Lori and ask them to significantly increase it (i.e., add the corresponding hard and soft nofile limits in /etc/security/limits.conf for your username). I have yet to run into issues with a limit of 1000000 (though with a sufficiently large network, you could).
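For reference, the limits.conf entries you would be asking for look something like the following (the username and the value here are placeholders, not specific recommendations):

```
# /etc/security/limits.conf
# <username>  <type>  <item>   <value>
yourusername  soft    nofile   1000000
yourusername  hard    nofile   1000000
```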

In addition, you should check the maximum number of files the system can have open by running

cat /proc/sys/fs/file-max

It is extremely important that this number be significantly larger (say, ~2x) than the per-process user limit: if this system-wide limit is reached, most commands simply won't run (it turns out almost everything on Unix requires a file descriptor, go figure). This means nobody (not even root) will be able to ssh in, and if you are lucky enough to have an open session, you won't be able to run ls or ps. Thankfully, the shell's kill builtin still works, so as a last-ditch effort, either kill the offending process if you have its pid, or kill all of your processes by running "kill -- -1" (pid -1 means every process you are allowed to signal). But really, just avoid the problem entirely by ensuring the limit is sufficiently large. If it's not, ask Lori to run

sysctl -w fs.file-max=<number much larger than anything in limits.conf>
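To see how close the system currently is to that limit, you can compare the first and last fields of the kernel's file-nr counters:

```shell
# /proc/sys/fs/file-nr holds three numbers: allocated file handles,
# allocated-but-unused handles (always 0 on modern kernels), and the
# system-wide maximum (fs.file-max).
cat /proc/sys/fs/file-nr
```

If the first number is anywhere near the third while an experiment is running, you're headed for the lockout described above.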

Finally, it's also likely with large experiments that you'll hit the max number of memory mappings per-process. Run

sysctl vm.max_map_count

to see what the current limit is. Keep in mind that there's going to be at least one map per open file descriptor for this process, probably more, so it should be fairly large. As usual, root can set it via

sysctl -w vm.max_map_count=<number>
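For a rough baseline, you can count how many mappings a process currently has by counting the lines in its maps file (here, the process doing the reading; a large Shadow run will have orders of magnitude more):

```shell
# Each line in /proc/<pid>/maps describes one memory mapping;
# /proc/self refers to whichever process opens it.
wc -l < /proc/self/maps
```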

Now that that's out of the way, let's install Shadow.


Installing Shadow

Everything is easier to set up if it's all in your home directory, so that's where I'll assume things are going.

Clone the release branch from the Shadow repo by running:

git clone -b release https://github.com/shadow/shadow.git

Or, use the master branch if you need that.

Before we build Shadow, we have to locally build glib, because of this bug. That bug is basically a "wontfix" for a broken glib, so unless "lsb_release -a" says something more recent than Ubuntu 14.04, run

mkdir ~/.shadow
cd ~/.shadow
wget https://download.gnome.org/sources/glib/2.42/glib-2.42.1.tar.xz
tar xaf glib-2.42.1.tar.xz
cd glib-2.42.1
./configure --prefix=/home/${USER}/.shadow
make -j 8
make install

Similarly, the version of igraph on Ubuntu 14.04 is too old now (see this bug), so we'll have to build that as well if the machine you're running on is still on that.

git clone https://github.com/igraph/igraph.git
cd igraph
git checkout release-0.7
./configure --prefix=/home/${USER}/.shadow
make install

Finally, it's likely the machine you're on doesn't have pyelftools installed, but rather than installing it globally with apt, you can just install the Python library to your account with pip.

pip install --user pyelftools

Now we can actually build Shadow.

cd ~/shadow
./setup build -ct
./setup install
./setup test

If you get any compile errors because of missing libraries, you'll have to contact Lori and ask them to install what is missing (the most up-to-date list can be found on the Shadow wiki).

Then, either manually add the ~/.shadow/bin directory to your PATH in .bashrc, or just run

echo 'export PATH=$PATH:$HOME/.shadow/bin' >> ~/.bashrc && source ~/.bashrc

(note the single quotes, so that $PATH is expanded each time .bashrc is sourced, rather than your current PATH being baked into the file).

At this point, you should run

shadow --version

to check that it successfully installed and the version is what you expect.


Installing TGen

TGen is a program used to generate application-level traffic. It was originally designed for Shadow and shipped as part of it, but it's now used in other projects as well, and so has to be built separately. You can find the main build instructions on the GitHub repo, though once again, they won't quite work for us as written. Thankfully, TGen's dependencies are a subset of Shadow's, but those dependencies include glib and igraph, so make sure you successfully built and installed Shadow before moving on to TGen.

git clone https://github.com/shadow/tgen.git
mkdir tgen/build && cd tgen/build
cmake .. -DCMAKE_INSTALL_PREFIX=/home/$USER/.shadow
make && make install

If you get an error about

too many arguments to function 'igraph_write_graph_graphml'

then the igraph you built only takes two arguments to that function, and you need to drop the third from that line:

sed -i 's/igraph_write_graph_graphml(mmodel->graph, graphStream, FALSE);/igraph_write_graph_graphml(mmodel->graph, graphStream);/' ~/tgen/src/tgen-markovmodel.c


Installing the Shadow-Tor plugin

Now you can install the Shadow-Tor plugin. For this, you can just follow the directions on Shadow-Tor's wiki. In a nutshell, the commands are

git clone -b release https://github.com/shadow/shadow-plugin-tor.git
cd ~/shadow-plugin-tor
./setup dependencies -y
./setup build
./setup install

Once it's installed, you need to set up a network to run it on. This is also covered at the end of the official documentation.

Running the experiment

To run the experiment, just go to the directory that it was set up in and run

shadow -w 16 shadow.config.xml > shadow.log

where 16 is the thread count. Shadow doesn't scale linearly with thread count, and will actually perform slightly worse if the thread count is too high. The best thread count varies greatly depending on the workload, with larger workloads scaling to more threads. Obviously, never run more threads than the CPU supports (run "lscpu | grep ^CPU\(s\):" to check). Scaling beyond the number of physical cores (e.g., to take advantage of hyperthreading) is unlikely to be useful either, so you'll likely want to divide that number by 2. Otherwise, the performance hit of too many threads is nowhere near the hit of too few, so bias towards a larger thread count. For an idea of how long an experiment should take and how Shadow scales, see this paper (though it is increasingly out of date, with Shadow development pulled in opposite directions by performance enhancements that make experiments take less time and accuracy improvements that make them take more).
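The rule of thumb above can be scripted as a starting point for the -w flag. This is just a sketch that assumes lscpu's default English output format; physical cores = sockets × cores per socket:

```shell
# Estimate a reasonable Shadow thread count: the number of physical
# cores, i.e., sockets times cores per socket (ignoring hyperthreads).
sockets=$(lscpu | awk '/^Socket\(s\):/ {print $2}')
cores_per_socket=$(lscpu | awk '/^Core\(s\) per socket:/ {print $4}')
echo "$((sockets * cores_per_socket))"
```

Treat the result as an upper bound to start from; small experiments may run best with fewer threads.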

Once the experiment is complete, you'll need to parse and plot the results. Again, the most up-to-date source for how to do this is on the respective wiki pages for Shadow (network statistics), TGen (application statistics), and Shadow-Tor (Tor statistics).

Topic revision: r8 - 2019-12-20 - JustinTracey