v3.0 How To
- Reporting
- Load Types
- Write
- Create
- Create the data items with fixed specified size
- Create the data items with random size in the specified range
- Create the data items with random size in the specified range and with biased size distribution
- Update
- Append
- Copy
- Read
- Verification
- Disable Verification
- Delete
- Write
- Load Job Limit
- Limit by Count
- Limit by Time
- Limit by Rate (Throttling)
- Limit by Size
- Run Modes
- Standalone Mode
- Distributed Mode
- Load Server
- Load Client
- Storage Mock
- Web GUI
- Item Types
- Container
- Write the containers
- Read the containers with Data Items
- Delete the containers
- Data
- Container
- Cloud Storage API
- Amazon S3
- EMC Atmos
- OpenStack Swift
- EMC ECS
- S3
- Atmos
- Swift
- Filesystem Load
- Write to the custom directory
- Read from the custom directory
- Overwrite the files circularly
- Custom Content
- Text content
- Zero bytes content
- Circular Load
- Read
- Update
- Scenario
- Configure a Load Job
- Make a Precondition Load Job (don't persist the metrics)
- Sequential Load Jobs execution
- Parallel Load Jobs execution
- Reuse the Items for another Load Job
- Inherit the Load Job Container configuration
- Execute a Shell Command
- Start a Non-Blocking Shell Command Execution
- Sleep Between the Jobs
- Mixed Load
- Weighted Load
- Rampup
- Scenario Validation
- Execute a job For Each value from the list
- Execute the infinite jobs loop
- Execute the jobs 10 times
- Execute a job for a specified numbers range
- Dynamic Configuration Values
- Custom HTTP headers
- Custom HTTP headers with Dynamic Values
- Filesystem Load: Dynamic Target Path
- Custom Items Naming
- Ascending names order
- Descending names order
- Names with decimal identifiers
- Names with prefixes
- SSL/TLS support
- Miscellaneous
- Docker integration
- Disable console output coloring
The run report is a set of files which Mongoose produces in the directory <MONGOOSE_DIR>/log/<RUN_ID>. Starting with Mongoose 0.8, all the key log files (items.csv, perf.avg.csv, perf.trace.csv and perf.sum.csv) are produced in pure CSV format, so any mature tool that supports CSV may be used to open and process the report components.
As an example, suppose a Mongoose run produced 10 data items of random size and we would like to calculate the total size of the generated content. The result is easy to get by opening items.csv in any spreadsheet editor and selecting the third column, which contains the data item sizes; the total is shown on the status bar as the Sum value.
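The same sum can also be computed from the command line; a minimal sketch, assuming the item size is in the third column of items.csv as described above (the sample rows below are hypothetical, not real Mongoose output):

```shell
# Create a tiny stand-in for items.csv (hypothetical rows; the size is the 3rd column)
cat > /tmp/items-example.csv <<'EOF'
item0001,0,1024,0
item0002,0,2048,0
item0003,0,4096,0
EOF
# Sum the 3rd (size) column of the CSV
awk -F, '{ total += $3 } END { print total }' /tmp/items-example.csv
# prints 7168
```

The same awk command may be pointed at the real <MONGOOSE_DIR>/log/<RUN_ID>/items.csv file.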
Example scenarios location: scenario/write/*.json
Mongoose creates the items by default (if the load type is not specified), so it is enough to just run the default scenario:
java -jar <MONGOOSE_DIR>/mongoose.jar
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=100
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=4KB-16KB
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=0-100MB,2.5 ...
In order to enable the update mode of the Write load type, it is necessary to specify the count of the random byte ranges.
Example scenarios location: scenario/partial/update-multiple-random-ranges.json
The example below updates the data items from the specified source file, with 10 random byte ranges per request.
java -jar <MONGOOSE_DIR>/mongoose.jar --update --item-data-ranges-random=10 --item-input-file=<PATH_TO_ITEM_LIST_CSV_FILE> ...
In order to enable the append mode of the Write load type, it is necessary to specify a fixed byte range whose start offset is equal to the size of the data items which should be updated.
Example scenarios location: scenario/partial/append.json
The example below performs the data items append with an appendage size of 8KB.
java -jar <MONGOOSE_DIR>/mongoose.jar --update --item-data-ranges=-8KB --item-input-file=<PATH_TO_ITEM_LIST_CSV_FILE> ...
Example scenarios location: scenario/copy/*.json
The example below copies the items from the source container to the target container:
java -jar <MONGOOSE_DIR>/mongoose.jar [--item-output-container=<TARGET_CONTAINER>] --item-input-container=<SOURCE_CONTAINER> [--item-input-file=<PATH_TO_ITEMS_LIST_CSV_FILE>] ...
See the Mongoose Copy Mode functional specification for details.
In order to use the Read load type, it is necessary to set the "load.type" configuration parameter to "read".
Example scenarios location: scenario/read/*.json
Example scenarios location: scenario/read/read-verify-updated.json
java -jar <MONGOOSE_DIR>/mongoose.jar --read --item-data-verify --item-output-container=<CONTAINER_WITH_ITEMS> ...
In order to use the Delete load type, it is necessary to set the "--delete" configuration parameter.
Example scenarios location: scenario/delete/*.json
java -jar <MONGOOSE_DIR>/mongoose.jar --delete --item-output-container=<CONTAINER_WITH_ITEMS> ...
Example scenarios location: scenario/limit/*.json
It is possible to limit the load jobs by any combination of the four following ways.
Example scenarios location: scenario/limit/by-count.json
Load with no more than N items:
java -jar <MONGOOSE_DIR>/mongoose.jar --load-limit-count=<N> ...
Example scenarios location: scenario/limit/by-time.json
Perform a load job for no more than 1 hour:
java -jar <MONGOOSE_DIR>/mongoose.jar --load-limit-time=1h ...
Example scenarios location: scenario/limit/by-rate.json
Perform a load job with the rate of no more than 1234.5 items (and operations) per second:
java -jar <MONGOOSE_DIR>/mongoose.jar [--item-data-size=0] --load-limit-rate=1234.5 [--load-concurrency=1000] ...
Example scenarios location: scenario/limit/by-size.json
Load with data items having a total size of no more than 100GB:
java -jar <MONGOOSE_DIR>/mongoose.jar --load-limit-size=100GB ...
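As noted above, the limits may be combined; a hypothetical sketch combining a count, a time and a size limit (the values are arbitrary), where the job stops as soon as any one of the limits is reached:

```shell
java -jar <MONGOOSE_DIR>/mongoose.jar --load-limit-count=1000000 --load-limit-time=1h --load-limit-size=100GB ...
```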
Mongoose runs in the standalone mode by default:
java -jar <MONGOOSE_DIR>/mongoose.jar --run-file=<PATH_TO_SCENARIO_FILE>
Example scenarios location: scenario/distributed/*.json
java -jar <MONGOOSE_DIR>/mongoose-storage-driver-service.jar
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-driver-remote [--storage-driver-addrs=A,B,C,D] --run-file=<PATH_TO_SCENARIO_FILE>
java -jar <MONGOOSE_DIR>/mongoose-storage-mock.jar
Not implemented yet
java -jar <MONGOOSE_DIR>/mongoose-gui.jar
In order to perform a load with container items, it is necessary to set the "item.type" configuration parameter to "container".
Example scenarios location: scenario/container/*.json
Not implemented yet
Example scenarios location: scenario/container/write-containers.json
java -jar <MONGOOSE_DIR>/mongoose.jar --item-type=container ...
Not Implemented yet
Example scenarios location: scenario/container/read-containers-with-items.json
java -jar <MONGOOSE_DIR>/mongoose.jar --item-type=container --read --item-input-file=<PATH_TO_ITEMS_LIST_CSV_FILE> ...
Note that the total byte count and the bytes-per-second (BW) metrics are calculated while reading the containers with data items. The size of a container is calculated as the sum of the sizes of the included data items.
Example scenarios location: scenario/container/delete-containers.json
java -jar <MONGOOSE_DIR>/mongoose.jar --item-type=container --delete --item-input-file=<PATH_TO_ITEMS_LIST_CSV_FILE> ...
The "data" item type is used by default.
Example scenarios location: scenario/ecs/write-s3.json
Note that the S3 API is used by default. Specifying the container name in the case of the S3 API means specifying the bucket to use.
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-auth-id=<USER_ID> --storage-auth-secret=<SECRET> [--item-output-container=<TARGET_BUCKET>] --storage-node-addrs=10.20.30.40 --storage-port=8080
Example scenarios location: scenario/ecs/write-atmos.json
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-auth-id=<USER_ID> [--storage-auth-token=<SUBTENANT>] --storage-auth-secret=<SECRET> --storage-node-addrs=10.20.30.40 --storage-port=8080 --storage-http.api=atmos
Note
The default value of the "storage.auth.id" configuration parameter (null) doesn't work in the case of Atmos API usage.
Example scenarios location: scenario/ecs/write-swift.json
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-auth-id=<USER_ID> [--storage-auth-token=<TOKEN>] --storage-auth-secret=<SECRET> [--item-output-container=<TARGET_CONTAINER>] --storage-node-addrs=10.20.30.40 --storage-port=8080 --storage-http.api=swift --storage-http.namespace=<NS>
Note
The default value of the "storage.http.namespace" configuration parameter (null) doesn't work in the case of Swift API usage.
Example scenarios location: scenario/ecs/*.json
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-auth-id=<USER_ID> --storage-auth-secret=<SECRET> [--item-output-container=<TARGET_BUCKET>] --storage-node-addrs=10.20.30.40,10.20.30.41,10.20.30.42 --storage-port=9020
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-auth-id=<USER_ID> [--storage-auth-token=<SUBTENANT>] --storage-auth-secret=<SECRET> --storage-node-addrs=10.20.30.40,10.20.30.41,10.20.30.42 --storage-port=9022
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-auth-id=<USER_ID> [--storage-auth-token=<TOKEN>] --storage-auth-secret=<SECRET> [--item-output-container=<TARGET_CONTAINER>] --storage-node-addrs=10.20.30.40,10.20.30.41,10.20.30.42 --storage-port=9024 --storage-http-api=swift --storage-http-namespace=s3
In order to use the filesystem load engine, it is necessary to set the "storage.type" configuration parameter to "fs".
Example scenarios location: scenario/fs/*.json
Example scenarios location: scenario/fs/write-to-custom-dir.json
java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-container=<PATH_TO_TARGET_DIR> --storage-type=fs
Example scenarios location: scenario/fs/read-from-custom-dir.json
java -jar <MONGOOSE_DIR>/mongoose.jar --read --item-output-container=<PATH_TO_TARGET_DIR> [<ITEM_SRC_FILE_OR_CONTAINER>] --storage-type=fs
Example scenarios location: scenario/fs/overwrite-circularly.json
java -jar <MONGOOSE_DIR>/mongoose.jar --update --item-output-container=<PATH_TO_TARGET_DIR> [<ITEM_SRC_FILE_OR_CONTAINER>] --load-circular=true --storage-type=fs
A user may use a custom file as the content source for the data generation and verification. The path to this custom file should be specified via the "--item-data-content-file" configuration parameter.
Example scenarios location: scenario/content/*.json
Note
The same content source should be used for writing the data items and for the subsequent reading in order to pass the data verification.
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-content-file=./textexample ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-content-file=./zerobytes ...
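A sketch of how such content source files could be prepared; the file names match the examples above, while the text content and the 1 MiB size are arbitrary assumptions for illustration:

```shell
# Hypothetical text content source (any text file works)
printf 'The quick brown fox jumps over the lazy dog. ' > ./textexample
# Hypothetical zero-bytes content source: 1 MiB of zeroes (1024 blocks of 1024 bytes)
dd if=/dev/zero of=./zerobytes bs=1024 count=1024 2>/dev/null
ls -l textexample zerobytes
```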
In order to load with a fixed set of items "infinitely" (each item is written/read again and again), a user should set the "load.circular" configuration parameter to true.
Example scenarios location: scenario/circular/*.json
java -jar <MONGOOSE_DIR>/mongoose.jar --read --item-output-container=<CONTAINER_WITH_ITEMS> [<ITEM_SRC_FILE_OR_CONTAINER>] --load-circular=true ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-container=<CONTAINER_WITH_ITEMS> [<ITEM_SRC_FILE_OR_CONTAINER>] --item-data-ranges=1 --load-circular=true ...
{
"type" : "load",
"config" : {
// the configuration hierarchy goes here
}
}
{
"type" : "precondition",
"config" : {
// the configuration hierarchy goes here
}
}
{
"type" : "sequential",
"jobs" : [
{
"type" : "",
...
}, {
"type" : "",
...
}
...
]
}
{
"type" : "parallel",
"jobs" : [
{
"type" : "",
...
}, {
"type" : "",
...
}
...
]
}
{
"type" : "sequential",
"jobs" : [
{
"type" : "precondition",
"config" : {
"item" : {
"output" : {
"file" : "items.csv"
}
}
...
}
}, {
"type" : "",
"config" : {
"item" : {
"input" : {
"file" : "items.csv"
}
}
...
}
}
]
}
{
"type" : "sequential",
"config" : {
// the configuration specified here will be inherited by the container elements
},
"jobs" : [
{
"type" : "load",
...
}
...
]
}
{
"type" : "command",
"value" : "killall -9 java"
}
{
"type" : "command",
"value" : "find /",
"blocking" : false
}
{
"type" : "sequential",
"config" : {
// shared configuration values inherited by the children jobs
},
"jobs" : [
{
"type" : "load",
"config" : {
// specific configuration for the 1st load job
}
}, {
"type" : "command",
"value" : "sleep 5s"
}, {
"type" : "load",
"config" : {
// specific configuration for the 2nd load job
}
}
]
}
Please refer to the example scenarios located at: scenario/mixed/*.json
Please refer to the example scenarios located at: scenario/weighted/*.json
There is a JSON schema file in the distribution: scenario/scenario-schema.json. A user may automatically validate the scenarios against this schema, which should help with writing one's own custom scenarios.
{
"type" : "for",
"value" : "concurrency",
"in" : [
1, 10, 100, 1000, 10000, 100000
],
"config" : {
"load" : {
"concurrency" : "${concurrency}"
}
},
"jobs" : [
{
"type" : "load"
}
]
}
{
"type" : "for",
"jobs" : [
{
"type" : "load"
}
]
}
{
"type" : "for",
"value" : 10,
"jobs" : [
{
"type" : "load"
}
]
}
{
"type" : "for",
"value" : "i",
"in" : "2.71828182846-3.1415926,0.1",
"jobs" : [
{
"type" : "command",
"value" : "echo ${i}"
}
]
}
Example scenarios location: scenario/dynamic/*.json
Example scenarios location: scenario/dynamic/custom-http-headers.json
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-http-headers-myOwnHeaderName=MyOwnHeaderValue
Example scenarios location: scenario/dynamic/custom-http-headers-with-dynamic-values.json
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-http-headers-myOwnHeaderName=MyOwnHeaderValue\ %d[0-1000]\ %f{###.##}[-2--1]\ %D{yyyy-MM-dd'T'HH:mm:ssZ}[1970/01/01-2016/01/01]
Example scenarios location: scenario/dynamic/write-to-variable-dir.json
java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-container=<PATH_TO_TARGET_DIR>/%p\{16\;2\} --storage-type=fs ...
Example scenarios location: scenario/naming/*.json
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-type=asc ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-type=desc ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-radix=10 ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-prefix=item_ ...
The feature is available since v2.1.0.
Example scenarios location: scenario/ssl/*.json
java -jar <MONGOOSE_DIR>/mongoose.jar --run-file=<MONGOOSE_DIR>/scenario/ssl/write-single-item.json
or
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-ssl --storage-port=9021 ...
Please refer to the Mongoose Usage/Docker page for details.
Open the file conf/logging.json in a text editor, go to line ~45 and, in the "pattern" attribute value, remove the leading "%highlight{" and the trailing "}" characters.
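The transformation of the "pattern" value can be sketched as follows; the sample pattern string here is hypothetical, not the exact contents of conf/logging.json:

```shell
# A hypothetical sample of the "pattern" attribute value from conf/logging.json
pattern='%highlight{%d{ISO8601} %p %m%n}'
# Strip the leading "%highlight{" and the trailing "}"
printf '%s\n' "$pattern" | sed -e 's/^%highlight{//' -e 's/}$//'
# prints: %d{ISO8601} %p %m%n
```

In the actual config file the same edit has to be done by hand (or very carefully with a script), since the value sits inside a JSON document.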