Figure 4
Handshake latency
Key Agreements (top) and Signature Algorithms (bottom) ranked by the logarithmic handshake latency we measured. The top and bottom rows are ranked separately, with the fastest algorithms on the left.
Data transmission volume
Key Agreements (top) and Signature Algorithms (bottom) ranked by the data transmission volume observed for the respective algorithm. The top and bottom rows are ranked separately, with the algorithms causing the least traffic volume on the left.
Setup
The setup is described in detail on the hardware setup and software setup pages. Using the Docker execution option performs this setup automatically once Docker's prerequisites are installed.
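For a Debian-based host, installing the Docker prerequisites might look like the following sketch (the package names are an assumption; the software setup page is authoritative):
# Assumed packages for a Debian-based host; see the software setup page for the authoritative steps
sudo apt-get update
sudo apt-get install -y docker.io docker-compose python3
sudo systemctl enable --now docker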
Experiment
The following loop variable files define the runs of the experiment: one covering all KEMs with a single SIG, and one covering all SIGs with a single KEM:
Loop variables for all KEMs
kem_alg: ["X25519", "kyber90s512", "kyber512", "kyber90s768", "kyber768", "kyber90s1024", "kyber1024", "bikel1", "bikel3", "hqc128", "hqc192", "hqc256", "p256_kyber512", "p384_kyber768", "p521_kyber1024", "p256_bikel1", "p384_bikel3", "p256_hqc128", "p384_hqc192", "p521_hqc256", "prime256v1", "secp384r1", "secp521r1"]
sig_alg: ["rsa:2048"]
tc: [""]
Loop variables for all SAs
kem_alg: ["X25519"]
sig_alg: ["dilithium2", "dilithium2_aes", "dilithium3", "dilithium3_aes", "dilithium5", "dilithium5_aes", "falcon512", "falcon1024", "sphincsharaka128fsimple", "sphincsharaka192fsimple", "sphincsharaka256fsimple", "p256_dilithium2", "rsa3072_dilithium2", "p384_dilithium3", "p521_dilithium5", "p256_falcon512", "p521_falcon1024", "p256_sphincsharaka128fsimple", "p384_sphincsharaka192fsimple", "p521_sphincsharaka256fsimple","rsa:1024", "rsa:2048", "rsa:3072", "rsa:4096"]
tc: [""]
Next, we start the OpenSSL server on the server node:
Server
#!/bin/bash
set -e
set -x
# Set KEM to one defined in https://github.com/open-quantum-safe/openssl#key-exchange
[[ -z "$KEM_ALG" ]] && echo "Need to set KEM_ALG" && exit 1;
[[ -z "$SIG_ALG" ]] && echo "Need to set SIG_ALG" && exit 1;
if [[ ! -z "$NETEM_TC" ]]; then
PORT=eth0
eval "tc qdisc add dev $PORT root netem $NETEM_TC"
fi
SERVER_IP="$(dig +short server)"
DATETIME=$(date +"%F_%T")
CA_DIR="/opt/shared/"
cd "${OPENSSL_PATH}"/bin
# generate CA key and cert
${OPENSSL} req -x509 -new -newkey "${SIG_ALG}" -keyout CA.key -out CA.crt -nodes -subj "/CN=oqstest CA" -days 365 -config "${OPENSSL_CNF}"
cp CA.crt $CA_DIR
cp CA.crt /out/CA_run${RUN}.crt
cp CA.key /out/CA_run${RUN}.key
SERVER_CRT=/out/server_run${RUN}
# Optionally set server certificate alg to one defined in https://github.com/open-quantum-safe/openssl#authentication
# The root CA's signature alg remains as set when building the image
# generate new server CSR using pre-set CA.key & cert
${OPENSSL} req -new -newkey "${SIG_ALG}" -keyout $SERVER_CRT.key -out $SERVER_CRT.csr -nodes -subj "/CN=$SERVER_IP"
# generate server cert
${OPENSSL} x509 -req -in $SERVER_CRT.csr -out $SERVER_CRT.crt -CA CA.crt -CAkey CA.key -CAcreateserial -days 365
echo "starting experiment: $(date)"
echo "Server has IP $SERVER_IP"
echo "{\"tc\": \"$NETEM_TC\", \"kem_alg\": \"$KEM_ALG\", \"sig_alg\": \"$SIG_ALG\"}" > "/out/${DATETIME}_server_run${RUN}.loop"
bash -c "tcpdump -w /out/latencies-pre_run${RUN}.pcap dst host $SERVER_IP and dst port 4433" &
TCPDUMP_PID=$!
if [ "$FLAME_GRAPH" = "True" ]
then
echo "will save flame graphs"
FG_FREQUENCY=96
bash -c "perf record -o /out/perf-dut.data -F ${FG_FREQUENCY} -C 1 -g" &
FLAME_GRAPH_PID=$!
fi
# Start a TLS1.3 test server based on OpenSSL accepting only the specified KEM_ALG
bash -c "taskset -c 1 ${OPENSSL} s_server -cert $SERVER_CRT.crt -key $SERVER_CRT.key -curves $KEM_ALG -www -tls1_3 -accept $CLIENT_IP:4433"
sleep 30
kill -2 $TCPDUMP_PID
if [ "$FLAME_GRAPH" = "True" ]
then
kill $FLAME_GRAPH_PID
sleep 30
perf archive /out/perf-dut.data
echo "Flame graph data can be found at /out/perf-dut.data"
fi
In the last step, we start the OpenSSL client on the client node:
Client
#!/bin/bash
set -e
set -x
# define variables
# Set KEM to one defined in https://github.com/open-quantum-safe/openssl#key-exchange
# ENV variables
[[ -z "$KEM_ALG" ]] && echo "Need to set KEM_ALG" && exit 1;
[[ -z "$SIG_ALG" ]] && echo "Need to set SIG_ALG" && exit 1;
[[ -z "$MEASUREMENT_TIME" ]] && echo "Need to set MEASUREMENT_TIME" && exit 1;
if [[ ! -z "$NETEM_TC" ]]; then
PORT=eth0
eval "tc qdisc add dev $PORT root netem $NETEM_TC"
fi
SERVER_IP="$(dig +short server)"
DATETIME=$(date +"%F_%T")
CA_DIR="/opt/shared"
cd "$OPENSSL_PATH" || exit
echo "Running $0 with SIG_ALG=$SIG_ALG and KEM_ALG=$KEM_ALG"
if [ "$FLAME_GRAPH" = "True" ]; then
echo "Will export flame graphs"
FG_FREQUENCY=96
bash -c "perf record -o /out/perf-client.data -F ${FG_FREQUENCY} -C 1 -g" &
FLAME_GRAPH_PID=$!
fi
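# Capture the packets returned by the server on the TLS port for later latency analysis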
bash -c "tcpdump -w /out/latencies-post_run${RUN}.pcap src host $SERVER_IP and src port 4433" &
TCPDUMP_PID=$!
echo "{\"tc\": \"$NETEM_TC\", \"kem_alg\": \"$KEM_ALG\", \"sig_alg\": \"$SIG_ALG\"}" > "/out/${DATETIME}_client_run${RUN}.loop"
sleep 5
# Run handshakes for $MEASUREMENT_TIME seconds
bash -c "taskset -c 1 ${OPENSSL} s_time -curves $KEM_ALG -connect $SERVER_IP:4433 -new -time $MEASUREMENT_TIME -verify 1 -www '/' -CAfile $CA_DIR/CA.crt > /out/opensslclient_run${RUN}.stdout 2> /out/opensslclient_run${RUN}.stderr"
if [ "$FLAME_GRAPH" = "True" ]
then
kill $FLAME_GRAPH_PID
fi
sleep 30 # Make sure it is finished and written out
kill -2 $TCPDUMP_PID
if [ "$FLAME_GRAPH" = "True" ]
then
perf archive /out/perf-client.data
echo "Flame graph data can be found at /out/perf-client.data"
fi
echo "client finished sending $(date), results can be found at /out/results-openssl-$KEM_ALG-$SIG_ALG.txt"
The client automatically stops after the predefined measurement time, while the timer and the server need to receive a SIGINT signal to terminate.
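If the remaining processes have to be stopped manually, sending SIGINT could look like the following sketch (host and process names are assumptions about the concrete setup):
# Hypothetical manual termination from the management node
ssh timestamper 'pkill -INT tcpdump'
ssh server 'pkill -INT openssl'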
Experiment with Docker
First clone the repository, then change your current working directory to the code folder within the cloned repository.
Execute the following command:
./experiment.py --output-dir /opt/experiments all-kem all-sig
Evaluation
After the experiment, the results are available in the results folders for client, server, and timestamper. In the Docker case, there are only two folders because of the two-node setup, so all PCAPs need to be copied to the timestamper folder. The evaluation can be performed either on our prepared data from the hardware timestamping or on the results from the Docker-based experiment.
Client
All files from the client need to be placed under client_results on the evaluator.
Timestamper
All PCAPs need to be placed under timer_results on the evaluator.
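For a Docker-based run, placing the files could, for example, look like this (the source paths under /opt/experiments are an assumption based on the --output-dir used above; adjust them to the actual result location):
# Hypothetical paths; adjust to where the Docker experiment actually wrote its results
mkdir -p /root/client_results /root/timer_results
cp /opt/experiments/*/client/* /root/client_results/
cp /opt/experiments/*/*/latencies-*.pcap* /root/timer_results/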
The evaluation script loads all PCAPs into a PostgreSQL database and analyzes the data. Finally, the script generates CSVs with the corresponding data:
Evaluator
#!/bin/bash
set -x
set -e
# First extract all data from the openssl_client results
mkdir /root/client_results/data_client
chmod 0777 /root/client_results
chmod 0777 /root/client_results/data_client
cd /root/client_results
cp /root/experiment-script/code/plotter/csvgenerator_client.py /root/client_results
python3 csvgenerator_client.py data_client .
pos_upload -r /root/client_results/data_client/
# Extracting data from PCAPs
NUM_CORES=$(pos_get_variable num_cores)
mkdir /root/timer_results/data
chmod 0777 /root/timer_results
chmod 0777 /root/timer_results/data
cd /root/timer_results/data
echo "Process PCAPs using ${NUM_CORES} cores"
env --chdir /var/lib/postgresql setpriv --init-groups --reuid postgres -- createuser -s root || true
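# One database per parallel job slot (root1, root2, ...): import each compressed PCAP and run the analysis on it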
parallel -j $NUM_CORES "dropdb --if-exists root{%}; createdb root{%}; export PGDATABASE=root{%}; ~/experiment-script/dbscripts/import.sh {}; ~/experiment-script/dbscripts/analysis.sh {}" ::: ../latencies-pre.pcap*.zst
cp -r /root/experiment-script/code/plotter/* /root/timer_results
cd /root/timer_results
mkdir /root/timer_results/figures
python3 plotcreator.py figures data .
make -i
python3 ~/experiment-script/code/dbscripts/run_tls_analyse.py . data
pos_upload -r /root/timer_results/figures/
pos_upload -r /root/timer_results/data/
FLAME_GRAPH=$(pos_get_variable -g flame_graph/execute_server)
if [ "$FLAME_GRAPH" = "True" ]
then
RESULT_PATH="/root/server_results"
mkdir $RESULT_PATH/flame_graph/
FG_REPO_PATH=$(pos_get_variable -g flame_graph/install/repo_path)
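# For each perf data file: unpack the matching perf archive into ~/.debug, collapse the stacks, and render an SVG flame graph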
for filename in $RESULT_PATH/perf*.data; do
mkdir /root/.debug/
tar xf "$RESULT_PATH/perf-server.data.tar_$(basename ${filename##*_} .data).bz2" -C ~/.debug
perf script -i $filename | $FG_REPO_PATH/stackcollapse-perf.pl > $filename.txt
$FG_REPO_PATH/flamegraph.pl $filename.txt > $filename.svg
mv $filename.txt $RESULT_PATH/flame_graph/
mv $filename.svg $RESULT_PATH/flame_graph/
rm -r /root/.debug/
done
pos_upload -r -o flame_graph_server $RESULT_PATH/flame_graph/
fi
FLAME_GRAPH=$(pos_get_variable -g flame_graph/execute_client)
if [ "$FLAME_GRAPH" = "True" ]
then
RESULT_PATH="/root/client_results"
mkdir $RESULT_PATH/flame_graph/
FG_REPO_PATH=$(pos_get_variable -g flame_graph/install/repo_path)
for filename in $RESULT_PATH/perf*.data; do
mkdir /root/.debug/
tar xf "$RESULT_PATH/perf-client.data.tar_$(basename ${filename##*_} .data).bz2" -C ~/.debug
perf script -i $filename | $FG_REPO_PATH/stackcollapse-perf.pl > $filename.txt
$FG_REPO_PATH/flamegraph.pl $filename.txt > $filename.svg
mv $filename.txt $RESULT_PATH/flame_graph/
mv $filename.svg $RESULT_PATH/flame_graph/
rm -r /root/.debug/
done
pos_upload -r -o flame_graph_client $RESULT_PATH/flame_graph/
fi
The results files are now located in the selected results folder. The following files were used to create the table:
- *.crt: The used certificate
- *client_results.csv: Contains the number of connections and allows linking the run number to the KEM and SIG.
- *dump-tcp-segments.csv: The number of TCP segments between server and client during the handshake.
- *median.csv: The median latency of different parts of the handshake; the file name describes which part.
These files are used together with the blackbox_analysis script to generate the table shown. This step is included as final_analyzer in the Docker scripts.
Blackbox analysis
#!/bin/bash
cd /root
git clone https://github.com/WiednerF/pqs-tls-measurements experiments
cd /root/experiments/code/blackbox_analysis
pip3 install -r requirements.txt
# Link the corresponding entries to this folder: kem as kem, sig as sig, level1 as level1, level3 as level3, and level5 as level5
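# Hypothetical example (the actual source paths depend on where the data was placed):
#   ln -s /opt/all-kem kem
#   ln -s /opt/all-sig sig
#   ln -s /path/to/level1 level1   # and analogously for level3 and level5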
python3 create_analysis.py
python3 derivation_analysis.py
Docker Evaluation Execution
To execute the evaluation using Docker, Docker and its dependencies need to be installed on the system. The current working directory must be the code folder.
Based on previous Docker Experiment
./evaluate.py --output-dir /opt/pqc-analysis /opt/experiments/*
Based on Data from our MediaTUM repository:
rsync -rP rsync://m1725057@dataserv.ub.tum.de/m1725057/all-kem/ /opt/all-kem/
rsync -rP rsync://m1725057@dataserv.ub.tum.de/m1725057/all-sig/ /opt/all-sig/
./evaluate.py --output-dir /opt/pqc-analysis /opt/all*
This executes all evaluation steps.
Scripts and Data
The following repository contains all files and results including raw PCAPs:
- code: The code necessary to execute the experiments.
- Client results under all-kem/client: Contains the outputs of the scripts executed on the client.
- Server results under all-kem/server: Contains the outputs of the scripts executed on the server.
- Timestamper results under all-kem/timestamper: Contains the PCAPs of the experiment.
- Client results under all-sig/client: Contains the outputs of the scripts executed on the client.
- Server results under all-sig/server: Contains the outputs of the scripts executed on the server.
- Timestamper results under all-sig/timestamper: Contains the PCAPs of the experiment.