This is a script that extends Shadow experiments generated by tornettools for experimenting on pluggable transports.

The `ptnettools.py` script is meant to be run on the `shadow.config.yaml` files generated by tornettools. It should be run on each generated experiment directory, between the `tornettools generate` and `tornettools simulate` steps.
At the moment, this script supports the following transports: obfs4, snowflake, and webtunnel.
Follow all of the setup instructions for tornettools before proceeding with these scripts. Make sure to read about system configuration and limits for Shadow before starting large experiments.
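For example, you can inspect a few of the limits that large Shadow runs commonly exhaust before launching. Which values need raising, and to what, depends on the experiment scale; Shadow's documentation covers the specifics:

```shell
# Print kernel/user limits that large Shadow experiments often hit.
# See Shadow's system configuration docs for recommended values.
cat /proc/sys/vm/max_map_count   # memory mappings available to a process
cat /proc/sys/kernel/pid_max     # total processes/threads on the system
ulimit -n                        # open file descriptors per process
```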
See "Once is Never Enough" for a full analysis of, and guidelines for, running scientifically sound Tor experiments in Shadow. For these purposes, a 10% network sampled 10 times for each test case is a good starting point.
You can just call `tornettools generate` 10 times and then reuse those 10 samples for each of the test cases being compared:
```shell
for i in `seq 0 9`; do
    tornettools generate \
        relayinfo_staging_2023-04-01--2023-04-30.json \
        userinfo_staging_2023-04-01--2023-04-30.json \
        networkinfo_staging.gml \
        tmodel-ccs2018.github.io \
        --network_scale 0.1 \
        --prefix tornet-0.1-$i
done
```
```shell
mkdir case1
mkdir case2
for i in `seq 0 9`; do
    cp -r tornet-0.1-$i case1/
    cp -r tornet-0.1-$i case2/
done
```
The resulting directory structure will look something like this:
```
experiments/
|__ case1/
    |__ tornet-0.1-0/
    |__ tornet-0.1-1/
    :
    |__ tornet-0.1-9/
|__ case2/
    |__ tornet-0.1-0/
    |__ tornet-0.1-1/
    :
    |__ tornet-0.1-9/
```
Then, run ptnettools (with the relevant transport options) on each of the experiment directories.
For an obfs4 experiment in directory `tornet-0.1`, with the obfs4 binary at `/usr/local/bin/lyrebird`, run:

```shell
./ptnettools.py --path tornet-0.1 --transport obfs4 --transport-bin-path /usr/local/bin/lyrebird
```
Snowflake has multiple binaries, all of which should be compiled and installed into a single directory that is passed to ptnettools. This directory should contain each of:
- client
- broker
- proxy
- server
- probetest
The server binary will need this patch to run in Shadow. See Issue #3278 for more details.
In addition, you will need a STUN server binary installed in the same directory:

```shell
GOBIN=[BINARY_DIR_PATH] go install github.com/gortc/stund@latest
```
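Before running ptnettools, it can be worth checking that the binary directory is complete. A minimal sketch (the `check_bins` helper here is hypothetical, not part of ptnettools):

```shell
#!/bin/sh
# Verify that a transport binary directory contains every expected binary.
# check_bins DIR NAME... prints any missing names and fails if one is absent.
check_bins() {
  dir="$1"; shift
  missing=0
  for bin in "$@"; do
    if [ ! -x "$dir/$bin" ]; then
      echo "missing: $dir/$bin"
      missing=1
    fi
  done
  return $missing
}

# Demo against a scratch directory standing in for ~/.local/bin/snowflake.
demo=$(mktemp -d)
for bin in client broker proxy server probetest stund; do
  touch "$demo/$bin" && chmod +x "$demo/$bin"
done
check_bins "$demo" client broker proxy server probetest stund \
  && echo "all snowflake binaries present"
rm -r "$demo"
```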
For a snowflake experiment in directory `tornet-0.1`, with the above binaries installed in `~/.local/bin/snowflake/`, run:

```shell
./ptnettools.py --path tornet-0.1 --transport snowflake --transport-bin-path ~/.local/bin/snowflake
```
Webtunnel has a client and a server binary, both of which should be compiled and installed into a single directory that is passed to ptnettools. This directory should contain each of:
- client
- server
For a webtunnel experiment in directory `tornet-0.1`, with the above binaries installed in `~/.local/bin/webtunnel/`, run:

```shell
./ptnettools.py --path tornet-0.1 --transport webtunnel --transport-bin-path ~/.local/bin/webtunnel
```
Note: these Shadow experiments do not include the full nginx reverse proxy setup; clients instead make a direct connection to the Tor bridge. This shouldn't introduce measurable network effects, but it should be modified if you want to model a webtunnel bridge that also receives a significant amount of non-circumvention traffic.
There are some facilities to enable bottleneck experiments:

- You can use the `update-model.py` script to write a new network graph with higher packet loss rates on the network nodes that correspond to CN cities.
- You can use the `ptnettools.py --china-perf-frac=F` option to reassign a fraction F of the perfclients to run inside the CN network nodes.
These steps would be run prior to simulation.
After the experiments have been generated and modified with ptnettools, continue with the `tornettools simulate` utility. You'll need to pass in Shadow arguments directly, since Go binaries require the `--model-unblocked-syscall-latency=true` option.
If you have more than one NUMA node, you can run multiple experiments in parallel, but you will need to set the `--parallelism` argument to the number of cores per socket. For example, on a machine with 2 sockets and 8 cores per socket, run on each node:
```shell
for i in `seq 0 9`; do
    numactl --cpunodebind=$NODE_ID --membind=$NODE_ID tornettools simulate -a '--parallelism=8 --seed=666 --template-directory=shadow.data.template --model-unblocked-syscall-latency=true' case$NODE_ID/tornet-0.1-$i
done
```
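Once the runs finish, a quick sanity check that each experiment actually produced Shadow output might look like the following. This is a sketch assuming tornettools' default layout of a `shadow.data/` directory inside each experiment directory; the `check_runs` helper is hypothetical:

```shell
#!/bin/sh
# Report which experiment directories contain Shadow output.
check_runs() {
  for d in "$@"; do
    if [ -d "$d/shadow.data" ]; then
      echo "done: $d"
    else
      echo "incomplete: $d"
    fi
  done
}

# Demo against scratch directories; on a real run you would pass
# case1/tornet-0.1-* and case2/tornet-0.1-* instead.
demo=$(mktemp -d)
mkdir -p "$demo/tornet-0.1-0/shadow.data" "$demo/tornet-0.1-1"
check_runs "$demo"/tornet-0.1-*
rm -r "$demo"
```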
- Right now we're pointing all perf clients through the bridge, but this may create an unwanted bottleneck for some experiments.