
General benchmarking framework #86

Merged 9 commits on Dec 29, 2023
1 change: 1 addition & 0 deletions .gitignore
@@ -3,6 +3,7 @@
*.gif
*.mp4
*.dat
*.json
*.pdf
*.vti
*.pvd
6 changes: 6 additions & 0 deletions benchmark/Project.toml
@@ -0,0 +1,6 @@
[deps]
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
KernelAbstractions = "63c18a36-062a-441e-b654-da1e3ab1ce7c"
PrettyTables = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d"
WaterLily = "ed894a53-35f9-47f1-b17f-85db9237eebd"
162 changes: 162 additions & 0 deletions benchmark/benchmark.sh
@@ -0,0 +1,162 @@
#!/bin/bash
# ---- Automatic benchmark generation script
# Generates benchmarks across different Julia versions, backends, cases, and case sizes.
# juliaup is required: https://github.com/JuliaLang/juliaup
#
# Accepted arguments are (parenthesis for short version):
# - Backend arguments: --versions(-v), --backends(-b), --threads(-t) [Julia versions, backend types, number of threads (for Array backend)]
# These arguments accept a list of different parameters, for example:
# -v "1.8.5 1.9.4" -b "Array CuArray" -t "1 6"
# which would generate benchmarks for all combinations of these parameters.
# - Case arguments: --cases(-c), --log2p(-p), --max_steps(-s), --ftype(-ft) [Benchmark case file, case sizes, number of time steps, float data type]
# The following arguments would generate benchmarks for the "tgv.jl" case:
# -c "tgv.jl" -p "5,6,7" -s 100 -ft "Float32"
# which, combined with the backend arguments above, can be used to launch this script as:
# sh benchmark.sh -v "1.8.5 1.9.4" -b "Array CuArray" -t "1 6" -c "tgv.jl" -p "5,6,7" -s 100 -ft "Float32"
# Case arguments accept a list of parameters for each case, and the list index is shared across these arguments (hence lists must have equal length):
# -c "tgv.jl donut.jl" -p "5,6,7 7,8" -s "100 500" -ft "Float32 Float64"
# which would run the same benchmarks for the TGV as before, plus benchmarks for the donut case, resulting in
# 2 Julia versions x (2 Array + 1 CuArray) backends x (3 TGV sizes + 2 donut sizes) = 30 benchmarks
#
# Benchmarks are saved in JSON format with the following nomenclature:
# casename_sizes_maxsteps_ftype_backend_waterlilyHEADhash_juliaversion.json
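#   e.g. (hypothetical hash and values): tgv_5,6,7_100_Float32_CuArray_a1b2c3d_1.9.4.json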
# Benchmarks can then be compared using compare.jl as follows:
# julia --project compare.jl benchmark_1.json benchmark_2.json benchmark_3.json ...
# Note that benchmarks for each case should be compared separately.
# If a single case is benchmarked, and all the JSON files in the current directory belong to it, one can simply run:
# julia --project compare.jl $(find . -name "*.json" -printf "%T@ %Tc %p\n" | sort -n | awk '{print $8}')
# which would take all the JSON files, sort them by modification time, and pass them as arguments to the compare.jl program.
# Finally, note that the first benchmark passed as argument is taken as reference to compute speedups of other benchmarks:
# speedup_x = time(benchmark_1) / time(benchmark_x).
#
# TL;DR: Usage example
# sh benchmark.sh -v "1.9.4 1.10.0-rc1" -t "1 3 6" -b "Array CuArray" -c "tgv.jl" -p "5,6,7"
# The default launch is equivalent to:
# sh benchmark.sh -v JULIA_DEFAULT -t "1 6" -b "Array CuArray" -c "tgv.jl" -p "5,6,7" -s 100 -ft Float32
# ----


# Get the current Julia version
julia_version () {
julia_v=($(julia -v))
echo "${julia_v[2]}"
}

# Update project environment with new Julia version
update_environment () {
echo "Updating environment to Julia v$version"
juliaup default $version
# Mark WaterLily as a development package. Then update dependencies and precompile.
julia --project -e "using Pkg; Pkg.develop(PackageSpec(path=join(split(pwd(), '/')[1:end-1], '/'))); Pkg.update();"
}

run_benchmark () {
echo "Running: julia --project $args"
julia --project $args
}

# Print benchmark info
display_info () {
echo "--------------------------------------"
echo "Running benchmark tests for:
- Julia: ${VERSIONS[@]}
- Backends: ${BACKENDS[@]}"
if [[ " ${BACKENDS[*]} " =~ [[:space:]]'Array'[[:space:]] ]]; then
echo " - CPU threads: ${THREADS[@]}"
fi
echo " - Cases: ${CASES[@]}
- Size: ${LOG2P[@]:0:$NCASES}
- Sim. steps: ${MAXSTEPS[@]:0:$NCASES}
- Data type: ${FTYPE[@]:0:$NCASES}"
echo "--------------------------------------"; echo
}

# Default Julia version, backends, and threads
DEFAULT_JULIA_VERSION=$(julia_version)
VERSIONS=($DEFAULT_JULIA_VERSION)
BACKENDS=('Array' 'CuArray')
THREADS=('1' '6')
# Default cases. The arrays below must all have the same length (one entry per case)
CASES=('tgv.jl')
LOG2P=('5,6,7')
MAXSTEPS=('100')
FTYPE=('Float32')

# Parse arguments
while [ $# -gt 0 ]; do
case "$1" in
--versions|-v)
VERSIONS=($2)
shift
;;
--backends|-b)
BACKENDS=($2)
shift
;;
--threads|-t)
THREADS=($2)
shift
;;
--cases|-c)
CASES=($2)
shift
;;
--log2p|-p)
LOG2P=($2)
shift
;;
--max_steps|-s)
MAXSTEPS=($2)
shift
;;
--float_type|-ft)
FTYPE=($2)
shift
;;
*)
printf "ERROR: Invalid argument\n"
exit 1
esac
shift
done

NCASES=${#CASES[@]}

# Assert "--threads" argument is not empty if "Array" backend is present
if [[ " ${BACKENDS[*]} " =~ [[:space:]]'Array'[[:space:]] ]]; then
if [ "${#THREADS[@]}" == 0 ]; then
echo "ERROR: Backend 'Array' is present, but '--threads' argument is empty."
exit 1
fi
fi

# Display information
display_info

# Benchmarks
for version in "${VERSIONS[@]}" ; do
echo "Julia v$version benchmaks"
update_environment
for i in "${!CASES[@]}"; do
args_case="${CASES[$i]} --log2p=${LOG2P[$i]} --max_steps=${MAXSTEPS[$i]} --ftype=${FTYPE[$i]}"
for backend in "${BACKENDS[@]}" ; do
if [ "${backend}" == "Array" ]; then
for thread in "${THREADS[@]}" ; do
args="-t $thread "$args_case" --backend=$backend"
run_benchmark
done
else
args="$args_case --backend=$backend"
run_benchmark
fi
done
done
done
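# For reference, each inner iteration above expands into a plain command such as
# (illustrative, assuming the default case with the Array backend on 1 thread):
#   julia --project -t 1 tgv.jl --log2p=5,6,7 --max_steps=100 --ftype=Float32 --backend=Array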

# To compare all the benchmarks in this directory, run
# julia --project compare.jl $(find . -name "*.json" -printf "%T@ %Tc %p\n" | sort -n | awk '{print $8}')

# Restore julia system version to default one and exit
juliaup default $DEFAULT_JULIA_VERSION
echo "All done!"
exit 0
25 changes: 25 additions & 0 deletions benchmark/compare.jl
@@ -0,0 +1,25 @@
using BenchmarkTools, PrettyTables

# Load benchmarks
benchmarks = [BenchmarkTools.load(f)[1] for f in ARGS]
# Get backends string vector and assert same case sizes for the different backends
backends_str = [String.(k)[1] for k in keys.(benchmarks)]
log2p_str = [String.(keys(benchmarks[i][backend_str])) for (i, backend_str) in enumerate(backends_str)]
@assert length(unique(log2p_str)) == 1
# Assuming the case and tested function are the same in all benchmarks, grab their names
case, f_test = benchmarks[1].tags[1:2]
# Get data for PrettyTables
header = ["Backend", "WaterLily", "Julia", "Precision", "Allocations", "GC [%]", "Time [s]", "Speed-up"]
data = Matrix{Any}(undef, length(benchmarks), length(header))
printstyled("Benchmark environment: $case $f_test (max_steps=$(benchmarks[1].tags[4]))\n", bold=true)
for n in log2p_str[1]
printstyled("▶ log2p = $n\n", bold=true)
for (i, benchmark) in enumerate(benchmarks)
datap = benchmark[backends_str[i]][n][f_test]
speedup = i == 1 ? 1.0 : benchmarks[1][backends_str[1]][n][f_test].times[1] / datap.times[1]
data[i, :] .= [backends_str[i], benchmark.tags[end-1], benchmark.tags[end], benchmark.tags[end-3],
datap.allocs, (datap.gctimes[1] / datap.times[1]) * 100.0, datap.times[1] / 1e9, speedup]
end
pretty_table(data; header=header, header_alignment=:c, formatters=ft_printf("%.2f", [6,7,8]))
end
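For reference, a saved benchmark file can also be inspected interactively with BenchmarkTools; a minimal sketch, assuming a JSON file produced by benchmark.sh (the filename and the "sim_step!" function tag below are hypothetical):

using BenchmarkTools

# Load the first (and only) BenchmarkGroup stored in the JSON file
suite = BenchmarkTools.load("tgv_5,6,7_100_Float32_Array_a1b2c3d_1.9.4.json")[1]

# Results are nested as backend -> log2p -> tested function (see compare.jl above)
trial = suite["Array"]["5"]["sim_step!"]
minimum(trial)  # best observed time for that configuration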

107 changes: 0 additions & 107 deletions benchmark/donut/donut.jl

This file was deleted.

52 changes: 0 additions & 52 deletions benchmark/donut/donut_serial.jl

This file was deleted.

4 changes: 0 additions & 4 deletions benchmark/mom_step/Project.toml

This file was deleted.
