Port network metrics #381
Conversation
Move the remaining network metrics scripts from the wip directory in preparation for porting them. Signed-off-by: Brett T. Warden <[email protected]>
Implement results reporting to network-metrics.sh by calling save_results(). Also reworking data gathering to eliminate multiple subshells. Signed-off-by: Brett T. Warden <[email protected]>
Add reporting via send_results() to network-metrics-cpu-consumption.sh. Optimize results gathering. Signed-off-by: Brett T. Warden <[email protected]>
Enable reporting via save_results() for network-metrics-iperf3.sh. Also rework parsing of results log files to reduce subshells. Signed-off-by: Brett T. Warden <[email protected]>
Add results reporting for network-metrics-nuttcp.sh via save_results(). Rework data extraction from results files to reduce subshells. Signed-off-by: Brett T. Warden <[email protected]>
Add results reporting to network-nginx-ab-benchmark.sh via save_results(). Rework results gathering to use fewer subshells. Signed-off-by: Brett T. Warden <[email protected]>
Add results reporting via save_results() for: network-metrics-memory-pss-1g.sh, network-metrics-memory-pss.sh, network-metrics-memory-rss-1g.sh. Remove unnecessary tail from output parsing. Unify smem command generation. Add --no-header to the smem command line to prevent the header line from being counted in the average denominator; this was typically causing reported results to be 1/2 of the correct value. All 3 scripts now report into a single results file (network-metrics-memory.csv), using the parameters field for differentiation. Signed-off-by: Brett T. Warden <[email protected]>
Add the now-ported network metrics tests to the run_all_metrics.sh script. Fixes clearcontainers#149. Signed-off-by: Brett T. Warden <[email protected]>
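One commit above fixes an averaging bug where smem's header line was counted in the denominator. As a hedged illustration (the sample value `100` is made up, not real benchmark data), the usual awk averaging idiom shows why a stray header line can halve a one-sample result: the non-numeric header adds 0 to the total but still increments `NR`.

```shell
#!/bin/sh
# Illustration only: averaging smem-style output with the common awk idiom.
# The header line "PSS" contributes 0 to total but still counts toward NR,
# so with one data row the reported average is exactly half the true value.
with_header=$(printf 'PSS\n100\n' | awk '{ total += $1 } END { print total/NR }')
no_header=$(printf '100\n' | awk '{ total += $1 } END { print total/NR }')
echo "with header: $with_header, without: $no_header"
```

Passing `--no-header` to smem (as the commit does) removes that extra line at the source, so the denominator matches the number of samples.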
b8a9672 to b49d7e9
A few notes, but nothing that I'm going to block the merge for.
Some things we probably need to improve over time though.
@MarioCarrilloA - can you review, particularly note my comment about multiple results in a file and checkmetrics.
```diff
- ps --no-headers -o %cpu -p "$qemu_pids" > "$result"
- sed -i 's/ //g' "$result"
- local total_cpu_consumption=$(awk '{ total += $1 } END { print total/NR }' "$result")
+ local total_cpu_consumption=$(ps --no-headers -o %cpu \
```
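The new version collapses the temp file and the `sed` pass into a single pipeline. A minimal sketch of the pattern (using the current shell's own PID as a stand-in, since `$qemu_pids` only exists inside the test):

```shell
#!/bin/sh
# Sketch of the temp-file-free pattern: feed ps output straight to awk,
# which both sums the %cpu column and divides by the line count, so no
# intermediate file or sed cleanup is needed.
pids="$$"   # stand-in for $qemu_pids from the test script
total_cpu_consumption=$(ps --no-headers -o %cpu -p "$pids" \
	| awk '{ total += $1 } END { print total/NR }')
echo "$total_cpu_consumption"
```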
much nicer without the temp file, thanks!
```diff
- cp -p $total_requests result.test
+ local result=$(grep "^Requests per second" $total_requests)
+ [[ $result =~ :[[:blank:]]*([[:digit:]]+(\.[[:digit:]]*)?) ]] && rps=${BASH_REMATCH[1]}
```
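For anyone squinting at the regex: it matches the colon after the label, optional blanks, then a number with an optional fractional part, and the first capture group lands in `BASH_REMATCH[1]`. A walkthrough on an illustrative ab(1)-style line (the number here is made up, not a real benchmark result):

```shell
#!/bin/bash
# Walkthrough of the capture used above, on a fabricated ab output line.
line='Requests per second:    1234.56 [#/sec] (mean)'
if [[ $line =~ :[[:blank:]]*([[:digit:]]+(\.[[:digit:]]*)?) ]]; then
	# BASH_REMATCH[1] holds the outer capture: digits plus optional decimals
	rps=${BASH_REMATCH[1]}
fi
echo "$rps"
```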
Argh, my eyes! ;-) Well, OK, it is more efficient than the pipelined version, and in some ways more understandable as it pretty much lays out the format that it is parsing...
I avoided leaning toothpicks :) I considered pasting sample data in a comment above, but realized I'd probably have to sanitize that since it's actual benchmarking results.
```diff
@@ -64,11 +64,13 @@ function cpu_consumption {
 	echo >&2 "WARNING: sleeping for $middle_time seconds in order to have server and client stable"
 	sleep ${middle_time}
 	qemu_pids=$(pidof ${QEMU_PATH})
```
Ah, a note/thought. Probably not going to block this for the minute, but...

I ran the tests initially with `RUNTIME=runc`, because my local CC install may have been a little in-flux... and of course `runc` does not have any QEMU processes, so we actually bomb or fail some of the tests (like when we try to do a `ps` of the pids of QEMU, we fail the `ps`, and then we get a divide-by-zero error etc.).

So, later maybe, we should make the scripts spot that there is no QEMU, and gracefully-ish fail. That, or we make `run_all_metrics.sh` more intelligent about which tests can be run for which runtimes, maybe.

But, for now, we'd really like to be running these tests with CC3.x, so I will install and test that...
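The "spot that there is no QEMU" guard could look something like this hypothetical sketch (not in this PR; `QEMU_PATH` is assumed to be set by the metrics library as in the diff above):

```shell
#!/bin/bash
# Hypothetical guard, not part of this PR: bail out cleanly instead of
# dividing by zero when the runtime (e.g. runc) has no QEMU processes.
QEMU_PATH=${QEMU_PATH:-qemu-system-x86_64}
qemu_pids=$(pidof "${QEMU_PATH}" || true)
if [ -z "$qemu_pids" ]; then
	echo >&2 "WARNING: no ${QEMU_PATH} processes found; skipping QEMU-based metrics"
	exit 0
fi
echo "found QEMU pids: $qemu_pids"
```

Exiting 0 keeps `run_all_metrics.sh` moving; an alternative would be a per-test runtime whitelist in the runner itself, as suggested above.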
"$total_bidirectional_server_bandwidth" \ | ||
"$total_bidirectional_server_bandwidth_units" | ||
|
||
save_results "${test_name}" "network bidir bw client to server" \ |
Ah, and here I see what you did with the multi-save of results into a single file... I think that is OK, but you are 'pushing the boundaries' here of what we have in place, so, some notes:

- Use of the `Args` field is OK. That was not its original intention (it was to store the args the test had used...), but it is not really being used or useful for that right now, so I'd call that acceptable...
- Placing multiple mixed results in a single results file will almost definitely break https://github.com/clearcontainers/tests/tree/master/cmd/checkmetrics parsing of those files... I think we can do an interim fix (filter the CSV files with `grep`, for instance, right now to get the results we need to check), and then add `Args` filtering to `checkmetrics`.

So, not the end of the world, and a pragmatic step forwards in holding mixed and varied results.
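The interim `grep` filter mentioned above could be as simple as the following sketch; note the CSV layout and values here are illustrative assumptions, not the project's actual schema or real results:

```shell
#!/bin/bash
# Sketch of the interim fix: filter one result type out of a mixed results
# CSV before handing it to checkmetrics-style tooling.
# The file contents below are fabricated for illustration.
cat > /tmp/network-metrics-demo.csv <<'EOF'
TestName,Args,Result,Units
network iperf3,bandwidth,9400,Mbits/sec
network iperf3,jitter,0.02,ms
network iperf3,bandwidth,9350,Mbits/sec
EOF
grep ',bandwidth,' /tmp/network-metrics-demo.csv > /tmp/bandwidth-only.csv
grep -c ',bandwidth,' /tmp/network-metrics-demo.csv
```

Longer term, teaching `checkmetrics` to filter on the `Args` column directly would avoid the extra pass.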
If it makes more sense to put each result in an independent file, that's fine, and pretty easy to accomplish. I'd just take the unique info out of Args and append it to Test Name. Let me know if you'd prefer that, or adding another field, etc...
I've mulled over this a bit. I'm going to make the call to keep the data together for the minute in a single file. Now that you extract more of the results from some of the tests, I think if we put those into separate files then there is some risk in the future that we'd end up with a set of files and no proof that the data came from the same test run (well, OK, maybe timestamps and commit IDs could give a heavy clue... still...). So, let's leave as is, and then we will fix up `checkmetrics` and related other scripts to pull out the data they are looking for.
Going to mark DNM until Mario has acked he's had a look.
Oh, and I'll note, I'm just checking that those 'warning, suffers from Issue nnn' warnings are still valid, since they stem from CC2.x, and this merge is CC3.x...
@MarioCarrilloA - I'm going to merge this - take note of the 'multiple results in a single file'.
I've moved the remaining scripts back out of the wip subdir, added results reporting to each one, and fixed a few bugs I've spotted, particularly with respect to results gathering. Finally, I've added the remaining tests to the run_all_metrics.sh script.
Some tests, e.g. network-metrics-iperf3.sh, report multiple types of data. At this point, I've reported them all into the same results file, using the Args field to discriminate between them. If it's preferred, I could direct each data type to a different file. Similarly, I've combined the results from the network-metrics-memory* tests (PSS, RSS) into a single results file, but I could break that up if preferred.