Figure 4 (legend): Namespace and Veth; Namespace and SR-IOV; VM and Veth; VM and SR-IOV; Optimized VM and Veth; Optimized VM and SR-IOV.
Steps to reproduce the measurements
OS images
These images were used for the experiments:
DuT
Linux machine 6.1.0-17-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.69-1 (2023-12-30) x86_64 GNU/Linux
LoadGen
Linux machine 4.19.0-17-amd64 #1 SMP Debian 4.19.194-3 (2021-07-18) x86_64 GNU/Linux
Timestamper
Linux machine 5.10.0-8-amd64 #1 SMP Debian 5.10.46-4 (2021-08-03) x86_64 GNU/Linux
Evaluator
Linux machine 5.10.0-8-amd64 #1 SMP Debian 5.10.46-4 (2021-08-03) x86_64 GNU/Linux
VM
Linux machine 6.1.0-17-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.69-1 (2023-12-30) x86_64 GNU/Linux
Setup
VM kernel boot parameters
nosmt idle=poll intel_idle.max_cstate=0 intel_pstate=disable amd_pstate=disable tsc=reliable mce=ignore_ce audit=0 nmi_watchdog=0 skew_tick=1 nosoftlockup intel_iommu=on iommu=pt
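These parameters must end up on the kernel command line of the booted system; how they get there depends on the boot path of the image. A minimal sketch, assuming either a GRUB-booted image or a direct kernel boot via QEMU (the paths, the root= argument, and the boot mechanism itself are assumptions, not taken from the repository):
#!/bin/bash
# Sketch only: place the tuning parameters on the kernel command line.
TUNING="nosmt idle=poll intel_idle.max_cstate=0 intel_pstate=disable amd_pstate=disable tsc=reliable mce=ignore_ce audit=0 nmi_watchdog=0 skew_tick=1 nosoftlockup intel_iommu=on iommu=pt"
# Option A (GRUB-booted image, assumption): prepend the parameters and regenerate the GRUB config.
sed -i "s|^GRUB_CMDLINE_LINUX=\"|GRUB_CMDLINE_LINUX=\"$TUNING |" /etc/default/grub
update-grub
# Option B (direct kernel boot of the VM, as done with vmlinuz/initrd.img further below; root= and paths are placeholders):
# qemu-system-x86_64 -enable-kvm -kernel vmlinuz -initrd initrd.img \
#     -append "root=/dev/vda $TUNING" ...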
After all nodes are booted up, execute the following, assuming the code repository https://github.com/tumi8/mininet-vm-sriov/tree/main has been cloned and the user is in its top-level directory:
Generating a Mininet Bundle
cd scripts/mininet_scripts
rm -f ../mininet/mininet.bundle
git -C ../mininet bundle create mininet.bundle HEAD
Now the following files need to be copied to the DuT: the generated mininet.bundle, mininet_experiment.py, prepare_squashfs.sh, and the VM image files.
Furthermore, the timer and the evaluator require an SSH key pair, stored as ssh_key and ssh_key.pub, to be able to copy data between them.
Lastly, the timer-loadgen.lua script is needed on the LoadGen.
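A possible way to distribute these files, assuming all nodes are reachable via SSH as root under the hostnames dut, loadgen, timer, and evaluator (the hostnames, the VM image path, and the local location of timer-loadgen.lua are placeholders):
#!/bin/bash
# Sketch only: copy the required artifacts to the nodes (hostnames and paths are assumptions).
scp ../mininet/mininet.bundle mininet_experiment.py prepare_squashfs.sh root@dut:/root/
# the DUT_IMAGE pos variable used in the DuT setup below must point to this directory
scp /path/to/vm-image/image.squashfs /path/to/vm-image/vmlinuz /path/to/vm-image/initrd.img root@dut:/root/vm-image/
scp timer-loadgen.lua root@loadgen:/root/        # location in the repository is an assumption
# key pair shared by timer and evaluator for copying measurement data
ssh-keygen -t ed25519 -N "" -f ssh_key
scp ssh_key ssh_key.pub root@timer:/root/
scp ssh_key ssh_key.pub root@evaluator:/root/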
After this, the dependencies can be installed on the respective nodes:
Common part for DuT and LoadGen
Common part
#!/bin/bash
set -xe
apt-get -y update
apt-get -y install linux-cpupower
cpupower frequency-set -g performance
Setup (DuT)
#!/bin/bash
set -xe
DUT_IMAGE=$(pos_get_variable -g DUT_IMAGE)   # path to the VM image files, provided as a pos variable
DEBIAN_FRONTEND=noninteractive apt update
DEBIAN_FRONTEND=noninteractive apt-get -y install net-tools iperf telnet systemd-container squashfs-tools sudo bridge-utils qemu-system-x86
cd ~
# clone the Mininet fork from the bundle copied to the DuT earlier
git clone mininet.bundle mininet
# prepare the root filesystem, kernel, and initrd for the VM
./prepare_squashfs.sh $DUT_IMAGE/image.squashfs image.squashfs
ln $DUT_IMAGE/vmlinuz vmlinuz
ln $DUT_IMAGE/initrd.img initrd.img
# allow pip to install system-wide despite the externally managed environment
echo -e "[global]\nbreak-system-packages = true" >/etc/pip.conf
cd ~/mininet
# install Mininet from the local checkout
util/install.sh -fnv
# enable forwarding and strict ARP handling on the DuT
sysctl net.ipv4.ip_forward=1
sysctl net.ipv4.conf.all.arp_ignore=1
sysctl net.ipv4.conf.all.arp_announce=1
Setup (LoadGen)
#!/bin/bash
# install moongen dependencies for newer moongen version
apt-get update
apt-get install meson ninja-build pkg-config python3-pyelftools libssl-dev zstd -y
set -xe
git clone --recursive "https://github.com/WiednerF/MoonGen.git" moongen
cd moongen
./build.sh
./setup-hugetlbfs.sh
Setup (Timestamper)
#!/bin/bash
# log every command
set -x
MOONGEN=moongen
cd /root # Make sure that we are in the correct folder
# install moongen dependencies for newer moongen version
apt-get update
apt-get install meson ninja-build pkg-config python3-pyelftools libssl-dev zstd libsystemd-dev -y
git clone --recurse-submodules https://github.com/WiednerF/moongen.git moongen
# Bind interfaces to DPDK
modprobe vfio-pci
for id in $(python3 /root/moongen/libmoon/deps/dpdk/usertools/dpdk-devbind.py --status | grep -v Active | grep -v ConnectX | grep unused=vfio-pci | cut -f 1 -d " ")
do
    echo "Binding interface $id to DPDK"
    python3 /root/moongen/libmoon/deps/dpdk/usertools/dpdk-devbind.py --bind=vfio-pci $id
    i=$(($i+1))
done
cd moongen
./build.sh
./setup-hugetlbfs.sh
cd /root
mkdir -p ~/.ssh
cp ssh_key ~/.ssh/ssh_key
cp ssh_key.pub ~/.ssh/ssh_key.pub
chmod 600 ~/.ssh/ssh_key
chmod 600 ~/.ssh/ssh_key.pub
echo "finished setup, waiting for DUT"
Setup (Evaluator)
#!/bin/bash
# exit on error
set -e
# log every command
set -x
LOOP="0"
NC="$(pos_get_variable nc || true)"
if [ "$BUCKET_SIZE" = '' ]; then
BUCKET_SIZE=$DEFAULT_BUCKET_SIZE
fi
# Makes sure that a no setup mode works
if [ -d "$LOOP" ]; then rm -rf $LOOP; fi
if [ $LOOP -eq 0 ]; then
apt update
DEBIAN_FRONTEND=noninteractive apt install -y postgresql
DEBIAN_FRONTEND=noninteractive apt install -y postgresql-client
DEBIAN_FRONTEND=noninteractive apt install -y parallel
DEBIAN_FRONTEND=noninteractive apt install -y python3-pip
DEBIAN_FRONTEND=noninteractive apt install -y texlive-full
DEBIAN_FRONTEND=noninteractive apt install -y lbzip2
DEBIAN_FRONTEND=noninteractive apt install -y rename
DEBIAN_FRONTEND=noninteractive apt install -y zstd
python3 -m pip install pypacker
python3 -m pip install netifaces
python3 -m pip install pylatex
python3 -m pip install matplotlib
python3 -m pip install pandas
python3 -m pip install pyyaml
# required for pandas; default version 2.x no longer compatible with pandas
python3 -m pip install Jinja2==3.1.2
fi
mkdir $LOOP
mkdir $LOOP/results
Experiment
The following global variables are used for the experiment execution:
Global Variables
MEASUREMENT_TIME: 90
SIZE: 84
WARM_UP_TIME: 30
The following loop variables are used; POS executes the experiment script once for each combination in their cross product:
Loop Variables
rate: [50000, 300000, 500000, 700000, 900000, 1000000]
node: ['namespace', 'vm', 'vm_opt']
link: ["veth", "hwpair"]
burst: [1]
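For orientation, this cross product amounts to 6 × 3 × 2 × 1 = 36 runs. Without POS, the same sweep could be driven by a plain shell loop along these lines (a sketch only; run_one is a hypothetical helper standing in for steps 1 to 5 below):
#!/bin/bash
# Sketch only: iterate over the cross product of the loop variables (36 combinations).
export MEASUREMENT_TIME=90 SIZE=84 WARM_UP_TIME=30
for rate in 50000 300000 500000 700000 900000 1000000; do
    for node in namespace vm vm_opt; do
        for link in veth hwpair; do
            for burst in 1; do
                run_one "$rate" "$node" "$link" "$burst"   # hypothetical helper: executes steps 1-5 once
            done
        done
    done
done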
1. Start Mininet on the DuT:
link=LINK # hwpair or veth
node=NODE # vm, vm_opt or namespace
ps -ef | grep "qemu-system-x86_64" | awk '{print $2}' | xargs kill -15 || echo "failed" # clean from previous run
cd ~
ip l set dev [LOADGEN_IN] down
ip l set dev [LOADGEN_OUT] down
echo 1 >/sys/class/net/[LOADGEN_IN]/device/sriov_numvfs
echo 1 >/sys/class/net/[LOADGEN_OUT]/device/sriov_numvfs
sleep 2
if [[ "$node" == "namespace" ]]; then
ip l set dev [LOADGEN_IN] up
ip l set dev [LOADGEN_OUT] up
ip l set dev [LOADGEN_IN] promisc on
ip l set dev [LOADGEN_OUT] promisc on
ip l set dev [LOADGEN_IN] up
ip l set dev [LOADGEN_OUT] up
fi
ip l set dev [LOOP1] down
ip l set dev [LOOP2] down
echo 14 >/sys/class/net/[LOOP1]/device/sriov_numvfs
echo 14 >/sys/class/net/[LOOP2]/device/sriov_numvfs
sleep 1
for i in {0..13}; do
    # the same VLAN on both looped ports pairs VF i of [LOOP1] with VF i of [LOOP2]
    ip l set [LOOP1] vf "$i" spoofchk off trust on vlan $((i + 1)) state enable
    ip l set [LOOP2] vf "$i" spoofchk off trust on vlan $((i + 1)) state enable
    sleep 1
done
python3 mininet_experiment.py "$node" "$link"
2. Wait for the forwarder to finish starting up, then start MoonGen on the LoadGen:
LOADGEN=moongen
/root/$LOADGEN/build/MoonGen /root/$LOADGEN/examples/moonsniff/timer-loadgen.lua -x 64 --fix-packetrate [PACKET_RATE] \
    --packets [PACKET_RATE*1500] --warm-up 30 --flows 10 --burst [BURST] [PORT_TX] [PORT_RX]
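For example, the lowest rate of the loop variables (rate=50000, burst=1) with the 30 s warm-up expands to the following call, assuming the LoadGen's transmit and receive ports are DPDK ports 0 and 1 (the port numbers are an assumption):
/root/moongen/build/MoonGen /root/moongen/examples/moonsniff/timer-loadgen.lua -x 64 --fix-packetrate 50000 \
    --packets 75000000 --warm-up 30 --flows 10 --burst 1 0 1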
3. After MoonGen has been started on the LoadGen, a few packets are sent for warm-up. After these packets, there is a 30-second break in the execution, which should be used to start the packet sniffer on the Timestamper to record the measurements:
Timer: Capture Packets using MoonGen
MOONGEN=moongen
TIMEOUT_AMOUNT=90
/root/$MOONGEN/build/MoonGen /root/$MOONGEN/examples/moonsniff/sniffer.lua [PRE_PORT] [POST_PORT] --capture --time $TIMEOUT_AMOUNT --snaplen 84
4. The Timestamper stops automatically after 150 seconds and creates two PCAPs, latencies-pre.pcap and latencies-post.pcap, one for the pre-DuT and one for the post-DuT side of the measurement.
5. Repeat steps 2 to 4 for every combination in the cross product of the loop variables, and make sure to save the PCAPs in another place after each run, because they will be overwritten otherwise.
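One way to do this, assuming the evaluator is reachable as evaluator with the ssh_key from the setup; the expected file naming (latencies-pre-*.pcap.zst in /root/0/results/) is inferred from the evaluation script below and may need to be adapted:
#!/bin/bash
# Sketch only, run on the Timestamper after each run: archive the PCAPs under a unique
# name and ship them to the evaluator (hostname and paths are assumptions).
# rate/node/link/burst describe the current loop-variable combination and must be set accordingly.
RUN="${rate}-${node}-${link}-${burst}"
zstd --rm latencies-pre.pcap -o "latencies-pre-${RUN}.pcap.zst"
zstd --rm latencies-post.pcap -o "latencies-post-${RUN}.pcap.zst"
scp -i ~/.ssh/ssh_key "latencies-pre-${RUN}.pcap.zst" "latencies-post-${RUN}.pcap.zst" root@evaluator:/root/0/results/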
Reproduce Figures and Data
Raw PCAP Data
All raw PCAP data sampled during the experiments are available at https://doi.org/10.14459/2025mp1773238.
Evaluation
Copy the content of scripts/evaluator to the evaluator node and the PCAPs into ~/0/results/ (a staging sketch follows below), then execute the evaluation script:
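A minimal staging sketch, assuming the evaluator is reachable via SSH as evaluator (a placeholder hostname); the PCAP transfer itself is covered by the sketch in step 5 above:
#!/bin/bash
# Sketch only: stage the evaluation scripts where the script below expects them.
scp -r scripts/evaluator root@evaluator:/root/evaluator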
Evaluate PCAPs and create Figures
#!/bin/bash
LOOP=0
EXECUTE_TYPE="flow-based"
recreate_postgresql_cluster() {
    # delete and re-create the first cluster that pg_lsclusters outputs
    read -ra CLUSTER_DATA <<< "$(pg_lsclusters --no-header | head -n1)" # array variable
    pg_dropcluster --stop "${CLUSTER_DATA[0]}" "${CLUSTER_DATA[1]}"
    pg_createcluster --start "${CLUSTER_DATA[0]}" "${CLUSTER_DATA[1]}"
}
process_pcap() {
    PCAP=$1
    i=$2
    REPEAT_NAME=$3
    EXECUTE_TYPE=$4
    DB_NAME="root$i-$REPEAT_NAME"
    dropdb --if-exists "$DB_NAME"
    createdb "$DB_NAME"
    export PGDATABASE="$DB_NAME"
    ~/evaluator/dbscripts/"$EXECUTE_TYPE"/import.sh "$PCAP"
    ~/evaluator/dbscripts/"$EXECUTE_TYPE"/analysis.sh "$PCAP"
    ~/evaluator/dbscripts/"$EXECUTE_TYPE"/cleanup.sh
}
execution() {
    i=$1
    NUM_CORES=8 # More is not possible without having problems
    RESULT_DIR="$HOME/$i/results"
    # Create and enter results/results directory (required by other scripts)
    mkdir --mode=0777 "$RESULT_DIR/results"
    pushd "$RESULT_DIR/results"
    # Process different pcaps in parallel
    export -f process_pcap
    parallel -j $NUM_CORES "process_pcap {} $i \{\%\} $EXECUTE_TYPE" ::: ../latencies-pre-*.pcap.zst
    popd; pushd "$RESULT_DIR"
    cp -r ~/evaluator/plotter/"$EXECUTE_TYPE"/* .
    mkdir figures
    python3 plotcreator.py figures results .
    python3 irqprocessor.py ../irq ./figures
    if [ "$EXECUTE_TYPE" = 'flow-based' ]; then
        # Generate all necessary data for flow-based analysis
        python3 generate_flow_graphs.py "$i"
    fi
    make -i
    pushd results
    for k in *.csv; do
        zstdmt -13 --rm --no-progress "$k";
    done
    cd ../
    # All Results can be collected at ~/0/results/results and ~/0/results/figures
}
# Delete all PostgreSQL data when the user root already exists (this avoids problems when not resetting the evaluator)
ROOT_EXISTS=$(psql postgres -tXAc "SELECT 1 FROM pg_roles WHERE rolname='root'" || true)
if [ "$ROOT_EXISTS" = "1" ]; then
recreate_postgresql_cluster
fi
# Create user root as the script is running as root
env --chdir /var/lib/postgresql setpriv --init-groups --reuid postgres -- createuser -s root
env --chdir /var/lib/postgresql setpriv --init-groups --reuid postgres -- createdb root
for i in $( seq 0 "$LOOP" )
do
    execution "$i" &
done
wait
Precompiled CSV Data and Figures
The precompiled figures for this measurement are available in the repository, and the precompiled CSVs are available at https://doi.org/10.14459/2025mp1773238.