Figure 3
Deviation between predicted and measured latency for KEM-SIG combinations on the same level. A positive number means faster than predicted.
Full Figures for absolute deviation (Figures 3a and 3b)
Full Figure for latency improvements (Figure 3c)
Full Tables per Level
Setup
As described in detail on the hardware setup and software setup pages. Using the Docker execution option will do this automatically for you after installing Docker's prerequisites.
Experiment
The following loop variable files contain all key-agreement (KEM) and signature algorithms for the experiment. However, each level needs to be executed separately:
Loop variables for levels 1 and 2
kem_alg: ["X25519", "kyber90s512", "kyber512", "bikel1", "hqc128", "p256_kyber512", "p256_bikel1", "p256_hqc128", "prime256v1"]
sig_alg: ["dilithium2", "dilithium2_aes", "falcon512", "sphincsharaka128fsimple", "p256_dilithium2", "rsa3072_dilithium2", "p256_falcon512", "p256_sphincsharaka128fsimple", "rsa:1024", "rsa:2048", "rsa:3072", "rsa:4096"]
tc: [""]
Loop variables for level 3
kem_alg: ["X25519", "kyber90s768", "kyber768", "bikel3", "hqc192", "p384_kyber768", "p384_bikel3", "p384_hqc192", "secp384r1"]
sig_alg: ["dilithium3", "dilithium3_aes", "sphincsharaka192fsimple", "p384_dilithium3", "p384_sphincsharaka192fsimple", "rsa:4096", "rsa:2048"]
tc: [""]
Loop variables for level 5
kem_alg: ["X25519", "kyber90s1024", "kyber1024", "hqc256", "p521_kyber1024", "p521_hqc256", "secp521r1"]
sig_alg: ["dilithium5", "dilithium5_aes", "falcon1024", "sphincsharaka256fsimple", "p521_dilithium5", "p521_falcon1024","p521_sphincsharaka256fsimple", "rsa:2048", "rsa:4096"]
tc: [""]
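Each loop file spans the full cross product of KEM and signature algorithms for its level, i.e. one experiment run per (KEM, SIG) pair. A minimal sketch of the expansion (lists abbreviated for illustration):

```python
from itertools import product

# Abbreviated level-1 loop variables (see the full lists above)
kem_alg = ["X25519", "kyber512", "bikel1"]
sig_alg = ["dilithium2", "falcon512", "rsa:2048"]

# Each experiment run is one (KEM, SIG) combination
runs = list(product(kem_alg, sig_alg))
print(len(runs))  # 3 KEMs x 3 SIGs = 9 runs
```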
Next, we start the OpenSSL server on the server node:
Server
#!/bin/bash
set -e
set -x
# Set KEM to one defined in https://github.com/open-quantum-safe/openssl#key-exchange
[[ -z "$KEM_ALG" ]] && echo "Need to set KEM_ALG" && exit 1;
[[ -z "$SIG_ALG" ]] && echo "Need to set SIG_ALG" && exit 1;
if [[ ! -z "$NETEM_TC" ]]; then
PORT=eth0
eval "tc qdisc add dev $PORT root netem $NETEM_TC"
fi
SERVER_IP="$(dig +short server)"
DATETIME=$(date +"%F_%T")
CA_DIR="/opt/shared/"
cd "${OPENSSL_PATH}"/bin
# generate CA key and cert
${OPENSSL} req -x509 -new -newkey "${SIG_ALG}" -keyout CA.key -out CA.crt -nodes -subj "/CN=oqstest CA" -days 365 -config "${OPENSSL_CNF}"
cp CA.crt $CA_DIR
cp CA.crt /out/CA_run${RUN}.crt
cp CA.key /out/CA_run${RUN}.key
SERVER_CRT=/out/server_run${RUN}
# Optionally set server certificate alg to one defined in https://github.com/open-quantum-safe/openssl#authentication
# The root CA's signature alg remains as set when building the image
# generate new server CSR using pre-set CA.key & cert
${OPENSSL} req -new -newkey "${SIG_ALG}" -keyout $SERVER_CRT.key -out $SERVER_CRT.csr -nodes -subj "/CN=$IP"
# generate server cert
${OPENSSL} x509 -req -in $SERVER_CRT.csr -out $SERVER_CRT.crt -CA CA.crt -CAkey CA.key -CAcreateserial -days 365
echo "starting experiment: $(date)"
echo "Server has IP $SERVER_IP"
echo "{\"tc\": \"$NETEM_TC\", \"kem_alg\": \"$KEM_ALG\", \"sig_alg\": \"$SIG_ALG\"}" > "/out/${DATETIME}_server_run${RUN}.loop"
bash -c "tcpdump -w /out/latencies-pre_run${RUN}.pcap dst host $SERVER_IP and dst port 4433" &
TCPDUMP_PID=$!
if [ "$FLAME_GRAPH" = "True" ]
then
echo "will save flame graphs"
FG_FREQUENCY=96
bash -c "perf record -o /out/perf-dut.data -F ${FG_FREQUENCY} -C 1 -g" &
FLAME_GRAPH_PID=$!
fi
# Start a TLS1.3 test server based on OpenSSL accepting only the specified KEM_ALG
bash -c "taskset -c 1 ${OPENSSL} s_server -cert $SERVER_CRT.crt -key $SERVER_CRT.key -curves $KEM_ALG -www -tls1_3 -accept $CLIENT_IP:4433"
sleep 30
kill -2 $TCPDUMP_PID
if [ "$FLAME_GRAPH" = "True" ]
then
kill $FLAME_GRAPH_PID
sleep 30
perf archive /out/perf-dut.data
echo "Flame graph data can be found at /out/perf-dut.data"
fi
In the last step, we start the OpenSSL client on the client node:
Client
#!/bin/bash
set -e
set -x
# define variables
# Set KEM to one defined in https://github.com/open-quantum-safe/openssl#key-exchange
# ENV variables
[[ -z "$KEM_ALG" ]] && echo "Need to set KEM_ALG" && exit 1;
[[ -z "$SIG_ALG" ]] && echo "Need to set SIG_ALG" && exit 1;
[[ -z "$MEASUREMENT_TIME" ]] && echo "Need to set MEASUREMENT_TIME" && exit 1;
if [[ ! -z "$NETEM_TC" ]]; then
PORT=eth0
eval "tc qdisc add dev $PORT root netem $NETEM_TC"
fi
SERVER_IP="$(dig +short server)"
DATETIME=$(date +"%F_%T")
CA_DIR="/opt/shared"
cd "$OPENSSL_PATH" || exit
echo "Running $0 with SIG_ALG=$SIG_ALG and KEM_ALG=$KEM_ALG"
if [ "$FLAME_GRAPH" = "True" ]; then
echo "Will export flame graphs"
FG_FREQUENCY=96
bash -c "perf record -o /out/perf-client.data -F ${FG_FREQUENCY} -C 1 -g" &
FLAME_GRAPH_PID=$!
fi
bash -c "tcpdump -w /out/latencies-post_run${RUN}.pcap src host $SERVER_IP and src port 4433" &
TCPDUMP_PID=$!
echo "{\"tc\": \"$NETEM_TC\", \"kem_alg\": \"$KEM_ALG\", \"sig_alg\": \"$SIG_ALG\"}" > "/out/${DATETIME}_client_run${RUN}.loop"
sleep 5
# Run handshakes for $TEST_TIME seconds
bash -c "taskset -c 1 ${OPENSSL} s_time -curves $KEM_ALG -connect $SERVER_IP:4433 -new -time $MEASUREMENT_TIME -verify 1 -www '/' -CAfile $CA_DIR/CA.crt > /out/opensslclient_run${RUN}.stdout 2> /out/opensslclient_run${RUN}.stderr"
if [ "$FLAME_GRAPH" = "True" ]
then
kill $FLAME_GRAPH_PID
fi
sleep 30 # Make sure it is finished and written out
kill -2 $TCPDUMP_PID
if [ "$FLAME_GRAPH" = "True" ]
then
perf archive /out/perf-client.data
echo "Flame graph data can be found at /out/perf-client.data"
fi
echo "client finished sending $(date), results can be found at /out/results-openssl-$KEM_ALG-$SIG_ALG.txt"
The client stops automatically after the predefined measurement time, while the timer and the server need to receive a SIGINT signal to terminate.
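The client stores the raw `s_time` output in `/out/opensslclient_run${RUN}.stdout`; the connection count can be recovered from it with a small parser. This is a sketch assuming the usual `s_time` summary line format (`N connections in T.TTs; ...`):

```python
import re

def parse_connections(s_time_output: str) -> int:
    """Extract the connection count from an OpenSSL s_time summary line."""
    m = re.search(r"(\d+) connections in ([\d.]+)s", s_time_output)
    if m is None:
        raise ValueError("no s_time summary line found")
    return int(m.group(1))

# Example summary line as printed by s_time
sample = "2843 connections in 30.02s; 94.70 connections/user sec, bytes read 0"
print(parse_connections(sample))  # 2843
```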
Experiment with Docker
First clone the repository, then change into the code folder within the cloned repository.
Execute the following command:
./experiment.py --output-dir /opt/experiments level1 level3 level5
Evaluation
After the experiment, the results are available in the results folders for client, server, and timestamper. With Docker, there are only two folders because of the two-node setup, so all PCAPs need to be copied into the timestamper folder. This step can be done using our prepared data from the hardware timestamping or the results from the Docker-based experiment.
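Gathering the PCAPs from the node folders into the timestamper folder can be scripted; a small sketch with assumed paths (the example paths in the comment are illustrative, not prescribed by the pipeline):

```python
import shutil
from pathlib import Path

def collect_pcaps(node_dirs, timestamper_dir):
    """Copy every .pcap (and .pcap.zst) file from the node result
    folders into the timestamper folder used by the evaluation."""
    dest = Path(timestamper_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for node_dir in node_dirs:
        for pcap in sorted(Path(node_dir).glob("*.pcap*")):
            shutil.copy2(pcap, dest / pcap.name)
            copied.append(pcap.name)
    return copied

# e.g. collect_pcaps(["/opt/experiments/client", "/opt/experiments/server"],
#                    "/opt/experiments/timestamper")
```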
Client
All files from the client need to be added under client_results on the evaluator.
Timestamper
All PCAP files need to be added under timer_results on the evaluator.
The evaluation script loads all PCAPs into a PostgreSQL database and analyzes the data. Finally, the script generates CSVs with the corresponding data:
Evaluator
#!/bin/bash
set -x
set -e
# First extract all data from the openssl_client results
mkdir /root/client_results/data_client
chmod 0777 /root/client_results
chmod 0777 /root/client_results/data_client
cd /root/client_results
cp /root/experiment-script/code/plotter/csvgenerator_client.py /root/client_results
python3 csvgenerator_client.py data_client .
pos_upload -r /root/client_results/data_client/
# Extracting data from PCAPs
NUM_CORES=$(pos_get_variable num_cores)
mkdir /root/timer_results/data
chmod 0777 /root/timer_results
chmod 0777 /root/timer_results/data
cd /root/timer_results/data
echo "Process PCAPs using ${NUM_CORES} cores"
env --chdir /var/lib/postgresql setpriv --init-groups --reuid postgres -- createuser -s root || true
parallel -j $NUM_CORES "dropdb --if-exists root{%}; createdb root{%}; export PGDATABASE=root{%}; ~/experiment-script/dbscripts/import.sh {}; ~/experiment-script/dbscripts/analysis.sh {}" ::: ../latencies-pre.pcap*.zst
cp -r /root/experiment-script/code/plotter/* /root/timer_results
cd /root/timer_results
mkdir /root/timer_results/figures
python3 plotcreator.py figures data .
make -i
python3 ~/experiment-script/code/dbscripts/run_tls_analyse.py . data
pos_upload -r /root/timer_results/figures/
pos_upload -r /root/timer_results/data/
FLAME_GRAPH=$(pos_get_variable -g flame_graph/execute_server)
if [ "$FLAME_GRAPH" = "True" ]
then
RESULT_PATH="/root/server_results"
mkdir $RESULT_PATH/flame_graph/
FG_REPO_PATH=$(pos_get_variable -g flame_graph/install/repo_path)
for filename in $RESULT_PATH/perf*.data; do
mkdir /root/.debug/
tar xf "$RESULT_PATH/perf-server.data.tar_$(basename ${filename##*_} .data).bz2" -C ~/.debug
perf script -i $filename | $FG_REPO_PATH/stackcollapse-perf.pl > $filename.txt
$FG_REPO_PATH/flamegraph.pl $filename.txt > $filename.svg
mv $filename.txt $RESULT_PATH/flame_graph/
mv $filename.svg $RESULT_PATH/flame_graph/
rm -r /root/.debug/
done
pos_upload -r -o flame_graph_server $RESULT_PATH/flame_graph/
fi
FLAME_GRAPH=$(pos_get_variable -g flame_graph/execute_client)
if [ "$FLAME_GRAPH" = "True" ]
then
RESULT_PATH="/root/client_results"
mkdir $RESULT_PATH/flame_graph/
FG_REPO_PATH=$(pos_get_variable -g flame_graph/install/repo_path)
for filename in $RESULT_PATH/perf*.data; do
mkdir /root/.debug/
tar xf "$RESULT_PATH/perf-client.data.tar_$(basename ${filename##*_} .data).bz2" -C ~/.debug
perf script -i $filename | $FG_REPO_PATH/stackcollapse-perf.pl > $filename.txt
$FG_REPO_PATH/flamegraph.pl $filename.txt > $filename.svg
mv $filename.txt $RESULT_PATH/flame_graph/
mv $filename.svg $RESULT_PATH/flame_graph/
rm -r /root/.debug/
done
pos_upload -r -o flame_graph_client $RESULT_PATH/flame_graph/
fi
The result files are now located in the selected results folder. The following files were used to create the table:
- *.crt: The used certificate
- *client_results.csv: Contains the number of connections and links each run number to its KEM and SIG.
- *dump-tcp-segments.csv: The number of TCP segments between server and client during the handshake.
- *median.csv: The median latency in different parts of the handshake according to the further description in the file name.
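The `*median.csv` values are medians over per-handshake latencies extracted from the PCAP timestamps. Conceptually (a toy sketch with made-up timestamps, not the actual database pipeline):

```python
from statistics import median

def handshake_latencies(starts, ends):
    """Pair each handshake's first and last packet timestamp (in seconds)
    and return the per-handshake latencies."""
    return [end - start for start, end in zip(starts, ends)]

# Toy timestamps for three handshakes
starts = [0.000, 1.000, 2.000]
ends   = [0.012, 1.015, 2.011]
lat = handshake_latencies(starts, ends)
print(round(median(lat) * 1000, 1))  # median latency in ms
```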
These files are used together with the blackbox_analysis script to generate the table shown. This step is included as final_analyzer in the Docker scripts.
Blackbox analysis
#!/bin/bash
cd /root
git clone https://github.com/WiednerF/pqs-tls-measurements experiments
cd /root/experiments/code/blackbox_analysis
pip3 install -r requirements.txt
# Link the corresponding entries to this folder: kem as kem, sig as sig, level1 as level1, level3 as level3, and level5 as level5
python3 create_analysis.py
python3 derivation_analysis.py
After the blackbox analysis, the following script calculates the deviation results. It expects the level fields in the corresponding CSVs generated by the blackbox analysis:
Deviation analysis
#!/usr/bin/env python3
import pathlib

import click
import pandas as pd
import numpy as np


def retrieve_sig_data(sig_algorithm_file, default_sig):
    d_sig = pd.read_csv(sig_algorithm_file, usecols=['level', 'kem', 'sig', 'partAllMedian'])
    d_sig = d_sig[d_sig['kem'] == 'X25519']
    sig_data = d_sig[d_sig['sig'] == default_sig]['partAllMedian']
    return d_sig, sig_data.iloc[0]


def retrieve_kem_data(kem_algorithm_file, default_kem):
    d_kem = pd.read_csv(kem_algorithm_file, usecols=['level', 'kem', 'sig', 'partAllMedian'])
    d_kem = d_kem[d_kem['sig'] == 'rsa:2048']
    kem_data = d_kem[d_kem['kem'] == default_kem]['partAllMedian']
    return d_kem, kem_data.iloc[0]


def retrieve_cross_data(cross_algorithm_file, default_sig, default_kem):
    d_level = pd.read_csv(cross_algorithm_file, usecols=['level', 'kem', 'sig', 'partAllMedian'])
    unique_kem = d_level['kem'].unique()
    unique_sig = d_level['sig'].unique()

    def add_unique_kem(row):
        row['num_sig'] = np.where(unique_sig == row['sig'])[0][0] + 1
        row['num_kem'] = np.where(unique_kem == row['kem'])[0][0] + 1
        return row

    baseline = float(d_level[(d_level['kem'] == default_kem)
                             & (d_level['sig'] == default_sig)]['partAllMedian'].iloc[0])
    return d_level.apply(add_unique_kem, axis=1), baseline


def add_expectation(data, sig, kem, baseline):
    def add_expected_column(row):
        sig_part = sig[sig['sig'] == row['sig']]['partAllMedian']
        kem_part = kem[kem['kem'] == row['kem']]['partAllMedian']
        row['expected'] = float(sig_part.iloc[0]) + float(kem_part.iloc[0]) - baseline
        row['variance'] = round(row['expected'] - row['partAllMedian'])
        row['percent'] = round(row['variance'] / row['expected'] * 100)
        return row

    data['expected'] = 0
    return data.apply(add_expected_column, axis=1)


@click.command()
@click.argument('from-file', required=True, type=click.Path(exists=True, dir_okay=False, file_okay=True, path_type=pathlib.Path))
def main(from_file: pathlib.Path):
    sig_algorithm = from_file
    cross = from_file
    kem_algorithm = from_file
    default_sig = "rsa:2048"
    default_kem = "X25519"
    d_sig, baseline1 = retrieve_sig_data(sig_algorithm, default_sig)
    d_kem, baseline2 = retrieve_kem_data(kem_algorithm, default_kem)
    d_level, baseline3 = retrieve_cross_data(cross, default_sig, default_kem)
    d_result = add_expectation(d_level, d_sig, d_kem, baseline1)
    print(d_result.to_csv(index_label="index"))


if __name__ == '__main__':
    main()
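The additive model behind `add_expectation` can be checked by hand: the expected latency of a (KEM, SIG) pair is the median measured with that SIG (and the default KEM) plus the median measured with that KEM (and the default SIG), minus the default/default baseline, which would otherwise be counted twice. A toy example with made-up numbers:

```python
# Made-up medians (ms), for illustration only
baseline = 10.0   # X25519 + rsa:2048 (default/default)
sig_part = 14.0   # X25519 + dilithium2
kem_part = 12.0   # kyber512 + rsa:2048
measured = 15.0   # kyber512 + dilithium2

expected = sig_part + kem_part - baseline    # 16.0
variance = round(expected - measured)        # 1: positive = faster than predicted
percent  = round(variance / expected * 100)  # 6
print(expected, variance, percent)
```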
Docker Evaluation Execution
To execute the evaluation using Docker, Docker and its dependencies need to be installed on the system. The current working directory must be the code folder.
Based on a previous Docker experiment
./evaluate.py --output-dir /opt/pqc-analysis /opt/experiments/*
Based on data from our MediaTUM repository:
rsync -rP rsync://m1725057@dataserv.ub.tum.de/m1725057/level1/ /opt/level1/
rsync -rP rsync://m1725057@dataserv.ub.tum.de/m1725057/level3/ /opt/level3/
rsync -rP rsync://m1725057@dataserv.ub.tum.de/m1725057/level5/ /opt/level5/
rsync -rP rsync://m1725057@dataserv.ub.tum.de/m1725057/level1-nopush/ /opt/level1-nopush/
rsync -rP rsync://m1725057@dataserv.ub.tum.de/m1725057/level3-nopush/ /opt/level3-nopush/
rsync -rP rsync://m1725057@dataserv.ub.tum.de/m1725057/level5-nopush/ /opt/level5-nopush/
./evaluate.py --deviation-analysis True --output-dir /opt/pqc-analysis /opt/level*
This executes all evaluation steps.
Scripts and Data
The following repository contains all files and results including raw PCAPs:
- code: The code necessary to execute the experiments.
- Client results under level1/client: Contains the outputs of the scripts executed on the client for level 1-2.
- Server results under level1/server: Contains the outputs of the scripts executed on the server for level 1-2.
- Timestamper results under level1/timestamper: Contains the PCAPs of the experiment from level 1-2.
- Client results under level3/client: Contains the outputs of the scripts executed on the client for level 3.
- Server results under level3/server: Contains the outputs of the scripts executed on the server for level 3.
- Timestamper results under level3/timestamper: Contains the PCAPs of the experiment from level 3.
- Client results under level5/client: Contains the outputs of the scripts executed on the client for level 5.
- Server results under level5/server: Contains the outputs of the scripts executed on the server for level 5.
- Timestamper results under level5/timestamper: Contains the PCAPs of the experiment from level 5.
The following folders contain the results with the original OpenSSL version from the Open Quantum Safe project, without modification of the push behavior:
- Client results under level1-nopush/client: Contains the outputs of the scripts executed on the client for level 1-2.
- Server results under level1-nopush/server: Contains the outputs of the scripts executed on the server for level 1-2.
- Timestamper results under level1-nopush/timestamper: Contains the PCAPs of the experiment from level 1-2.
- Client results under level3-nopush/client: Contains the outputs of the scripts executed on the client for level 3.
- Server results under level3-nopush/server: Contains the outputs of the scripts executed on the server for level 3.
- Timestamper results under level3-nopush/timestamper: Contains the PCAPs of the experiment from level 3.
- Client results under level5-nopush/client: Contains the outputs of the scripts executed on the client for level 5.
- Server results under level5-nopush/server: Contains the outputs of the scripts executed on the server for level 5.
- Timestamper results under level5-nopush/timestamper: Contains the PCAPs of the experiment from level 5.