Init benchmark tests #10
Conversation
Since network conditions vary a lot over time, the results from the cross tests are more meaningful.
REST test:
Cross test:
For gRPC v1.12.0-dev (current master, built from source):
@ZhouyihaiDing Thanks for this! I'll take a look today or likely tomorrow!
I also got the latency result for the first RPC under PHP-FPM mode.
First RPC under PHP-CLI mode:
Can you also add numbers for the end-to-end latency, from initializing the client to the actual RPC?
This is specific to PHP: everything starts from scratch on every request, so that number matters more.
First RPC latency, including client creation:
CLI
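A minimal sketch of how such an end-to-end measurement can be taken. `DummyClient` is a hypothetical stand-in for the real logging client (the actual benchmark would construct `LoggingClient` here); the point is that client construction sits inside the timed window:

```php
<?php
// Sketch (assumed names): include client construction in the timed
// window, mirroring how each PHP request starts from scratch.
class DummyClient {
    public function info(string $msg): void {
        usleep(1000); // stand-in for the actual RPC
    }
}

$start = microtime(true);
$client = new DummyClient();   // client creation is counted
$client->info('a');            // first "RPC" is counted
$firstCallLatency = microtime(true) - $start;
printf("first-call latency: %.3f ms\n", $firstCallLatency * 1000);
```

Under PHP-FPM the same code runs per request, so this window approximates what a user actually waits for.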
tests/qps/composer.json
Outdated
@@ -0,0 +1,5 @@
{
  "require": {
    "google/cloud": "^0.56.0"
Does it make sense to have these requirements?
"ext-json": "*",
"ext-protobuf": "*"
or something like that?
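Combined with the diff above, the suggested require block would look something like this (composer.json fragment; the `google/cloud` version is the one pinned in the diff, the rest is the suggestion being discussed):

```json
{
    "require": {
        "google/cloud": "^0.56.0",
        "ext-json": "*",
        "ext-protobuf": "*"
    }
}
```

Declaring the `ext-*` platform packages makes Composer fail fast when a required PHP extension is missing, instead of failing at runtime.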
Done. Thanks!
@ZhouyihaiDing Thanks! Give me one more day.
Oh, does it also make sense to batch 1000 messages?
Is that the same as increasing the payload from 1 byte to ~1k bytes? For measuring RPC latency/QPS, we already have the confounding factor that REST uses JSON while gRPC uses protobuf. If we batch 1000 logs, each RPC also includes the time for 1000 rounds of data preparation. E.g., if protobuf is much slower than JSON, batching amplifies that third-party factor 1000 times, which skews the latency/QPS results. I commented out this line and ran the tests with a 10k payload:
Hi @tmatsuo, here is the result for a 10k payload:
After commenting out these 2 lines, I think the only difference between the two clients is that the gRPC client creates a protobuf object. Average latencies for protobuf and JSON are below.
Result for 1 byte payload:
Time difference for creating the protobuf objects for 1000 batches:
Since you are more familiar with google-cloud-php, can you suggest how to compare the performance of data preparation without sending any RPCs? I still think setting the batch size to 1 measures the latency/QPS more precisely, considering the factor above.
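One way to isolate data-preparation cost is to time serialization alone, with no network calls. A rough sketch: `json_encode` stands in for the REST path here, and the field names are illustrative; the gRPC path would instead serialize a protobuf message via ext-protobuf, which is assumed installed and not shown:

```php
<?php
// Time only the per-message encoding, no RPCs. With batch size 1000,
// any per-message overhead is paid 1000 times per RPC, which is the
// amplification discussed above.
function time_json_encoding(int $batch, string $payload): float {
    $start = microtime(true);
    for ($i = 0; $i < $batch; $i++) {
        json_encode(['entries' => [['textPayload' => $payload]]]);
    }
    return microtime(true) - $start;
}

$payload = str_repeat('a', 1024); // 1k payload
printf("batch=1:    %.6f s\n", time_json_encoding(1, $payload));
printf("batch=1000: %.6f s\n", time_json_encoding(1000, $payload));
```

Running the same loop with the protobuf serializer would show how much of the batched-RPC latency is encoding rather than transport.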
Below are logs per second (lps) instead of queries per second (qps):
Batch size = 1000
Batch size = 100
Batch size = 10
Batch size = 1
The cloud logging library has a performance optimization that batches up to 1000 logs under the cover, so it actually matters to the end-user experience. I see where you're coming from, and I'm OK without a 1000-batch benchmark. However, I think it is also good to show the numbers for a 1000 batch, because they explain why our logging library is slow with the grpc transport and protobuf (you can say protobuf is the culprit ;).
tests/qps/composer.json
Outdated
"require": {
    "google/cloud": "^0.62.0",
    "ext-json": "*",
    "ext-protobuf": "*"
Can you also add "ext-curl"?
@tmatsuo I don't know how to explain that. Is it possible that the REST client re-uses some caches created by the gRPC client?
Or maybe curl does some optimization and reuses a cache when the messages being sent are always the same.
$grpcLogger->info($msg_grpc);
array_push($grpc_latency_array, microtime(true) - $grpc_start_time);

$msg_rest = generate_string($payload);
By generating a different msg instead of using the same msg "a" in every iteration, REST becomes slower.
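A hypothetical implementation of `generate_string` (the real one lives in the benchmark script): it produces a fresh random payload on every call, so any caching of identical request bodies along the REST path cannot kick in:

```php
<?php
// Hypothetical sketch: random payload of $length bytes, so every
// iteration sends a different message body.
function generate_string(int $length): string {
    $chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
    $out = '';
    for ($i = 0; $i < $length; $i++) {
        $out .= $chars[random_int(0, strlen($chars) - 1)];
    }
    return $out;
}

echo strlen(generate_string(1024)), "\n"; // prints 1024
```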
Interesting. I'll verify this too.
I can reproduce it on all 5 GCE machines with 1/2/4/8/16 cores.
By the way, I installed PHP via apt-get, and ext-curl via apt-get install php-curl.
I just set 1k as the default payload and am going to re-test, with a 30s warmup and 600s benchmark time.
Hi @tmatsuo. Does this pull request look good to you? I updated the default payload to 1kB and extended the default benchmark time. The results in the document are updated; can you take a look when you have time and see if the data makes sense? Thank you!
I don't see any tests with a larger batch size, but as I said before, LGTM.
It only has LoggingClient for now.
For the Logging API tests, there are 3 tests: send gRPC RPCs, send REST RPCs, and send gRPC/REST RPCs alternately.
run_test.sh may later be changed to run_test.py, with arguments like --benchmark_time, --warm_up_time, project_id, database_id, when the Spanner/Bigtable benchmarks are added.
@tmatsuo, please review this PR. Thank you!