v4.0 User Guide
- Configuration
1.1. Configuration Syntax
1.2. Aliasing
1.3. Parameterized Configuration
1.3.1. Parameterized HTTP headers
1.3.2. Parameterized Output Path
1.3.3. Multiuser Load
- Items
2.1. Item Types
2.1.1. Data Items
2.1.1.1. Fixed Size Data Items
2.1.1.1.1. Empty Data Items
2.1.1.2. Random Size Data Items
2.1.1.2.1. Biased Random Size Data Items
2.1.2. Path Items
2.1.3. Token Items
2.2. Items Input
2.2.1. Items Input File
2.2.2. Items Path Listing Input
2.2.3. New Items Input
2.2.3.1. Random Item Ids
2.2.3.2. Ascending Item Ids
2.2.3.3. Descending Item Ids
2.2.3.4. Items Id Prefix
2.2.3.5. Items Id Radix
2.2.3.6. Items Id Offset
2.2.3.7. Items Id Length
2.3. Items Output
2.3.1. Items Output Delay
2.3.2. Items Output File
2.3.3. Items Output Path
2.3.3.1. Constant Items Output Path
2.3.3.2. Pattern Items Output Path
- Content
3.1. Uniform Random Data Payload
3.2. Payload From the External File
- Concurrency
4.1. Limited Concurrency
4.2. Unlimited Concurrency
- Recycle Mode
- Test Steps
6.1. Test Steps Identification
6.2. Test Steps Limitation
6.2.1. Steps Are Infinite by Default
6.2.2. Limit Step by Processed Item Count
6.2.3. Limit Step by Rate
6.2.4. Limit Step by Processed Data Size
6.2.5. Limit Step by Time
6.2.6. Limit Step by End of Items Input
- Output
7.1. Console Coloring
7.2. Metrics Output
7.2.1. Average Metrics Output
7.2.1.1. Average Metrics Output Period
7.2.1.2. Average Metrics Output Persistence
7.2.1.3. Average Metrics Table Header Output Period
7.2.2. Summary Metrics Output
7.2.3. Trace Metrics Output
7.2.4. Metrics Accounting Threshold
- Load Types
8.1. Noop
8.2. Create
8.2.1. Create New Items
8.2.2. Copy Mode
8.3. Read
8.3.1. Read With Disabled Validation
8.3.2. Read With Enabled Validation
8.3.3. Partial Read
8.3.3.1. Random Byte Ranges Read
8.3.3.1.1. Single Random Byte Range Read
8.3.3.1.2. Multiple Random Byte Ranges Read
8.3.3.2. Fixed Byte Ranges Read
8.3.3.2.1. Read From offset of N bytes to the end
8.3.3.2.2. Read Last N bytes
8.3.3.2.3. Read Bytes from N1 to N2
8.3.3.2.4. Read Multiple Fixed Ranges
8.4. Update
8.4.1. Update by Overwrite
8.4.2. Random Ranges Update
8.4.2.1. Single Random Range Update
8.4.2.2. Multiple Random Ranges Update
8.4.3. Fixed Ranges Update
8.4.3.1. Overwrite from the offset of N bytes to the end
8.4.3.2. Overwrite Last N bytes
8.4.3.3. Overwrite Bytes from N1 to N2
8.4.3.4. Append
8.4.3.5. Multiple Fixed Ranges Update
8.5. Delete
- Scenarios
9.1. Scenarios DSL
9.2. Default Scenario
9.3. Custom Scenario File
9.4. Scenario Step Configuration
9.4.1. Override the Default Configuration in the Scenario
9.4.2. Step Configuration Reusing
9.4.3. Reusing The Items in the Scenario
9.4.4. Environment Values Substitution in the Scenario
9.5. Scenario Step Types
9.5.1. Shell Command
9.5.1.1. Blocking Shell Command
9.5.1.2. Non-blocking Shell Command
9.5.2. Load Step
9.5.3. Precondition Load Step
9.5.4. Weighted Load Step
9.5.5. Chain Load Step
- Storage Driver
10.1. Distributed Storage Drivers
10.1.1. Single Local Separate Storage Driver Service
10.1.2. Many Local Separate Storage Driver Services (at different ports)
10.1.3. Single Remote Storage Driver Service
10.1.4. Many Remote Storage Driver Services
10.2. Configure the Storage
10.2.1. Create Auth Token On Demand
10.2.2. Create Destination Path On Demand
10.3. Filesystem Storage Driver
10.4. Network Storage Driver
10.4.1. Node Balancing
10.4.2. SSL/TLS
10.4.3. Connection Timeout
10.4.4. I/O Buffer Size Adjustment for Optimal Performance
10.4.5. HTTP Storage Driver
10.4.5.2. Atmos
10.4.5.3. S3
10.4.5.3.1. EMC S3 extensions
10.4.5.4. Swift
See the Configuration documentation for syntax details.
See the Configuration Aliasing documentation for details.
See the Configuration Parametrization documentation for details.
See Variable HTTP Headers Values for details.
See Variable Items Output Path for details.
See Multiuser Load for details.
A storage may be loaded using items and some kind of operation (CRUD). The only property every item has is a mutable name.
Mongoose supports different item types:
- A data (object, file) item
- A path (directory, bucket, container) item
- A token item
The data items type is used by default.
Fixed data item size is used by default. The default size value is 1MB.
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=10KB
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=0
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=5MB-15MB
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=0-100MB,0.2
Note:
- The bias value is appended to the range after the comma (0.2 in the example above).
- The generated value is biased towards the high end of the range if the bias value is less than 1.
- The generated value is biased towards the low end of the range if the bias value is more than 1.
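The biased selection can be illustrated with a small sketch. The exponent-on-a-uniform-random-value model below is an assumption for illustration only, not Mongoose's actual implementation:

```javascript
// Illustrative model of biased random size selection.
// Assumption: the bias acts as an exponent applied to a uniform
// random value in [0, 1).
function biasedRandomSize(lo, hi, bias) {
    // bias < 1: pow(r, bias) is skewed toward 1 -> sizes near the high end
    // bias > 1: pow(r, bias) is skewed toward 0 -> sizes near the low end
    return lo + (hi - lo) * Math.pow(Math.random(), bias);
}
```

Under this model, the option `--item-data-size=0-100MB,0.2` would skew the generated sizes towards 100MB.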
The path items type may be useful to work with directories/buckets/containers (depending on the storage driver type used).
java -jar <MONGOOSE_DIR>/mongoose.jar --item-type=path
java -jar <MONGOOSE_DIR>/mongoose.jar --item-type=token
An items input is a source of the items on which the operations (create/read/etc.) should be performed. The items input may be a file or a path to be listed.
java -jar <MONGOOSE_DIR>/mongoose.jar --item-input-file=<PATH_TO_ITEMS_FILE> ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-input-path=/bucket1 ...
New items input is used automatically if no other items input is configured. This is useful for creating new random data on the storage.
Random item ids are used by default. The collision probability is negligible (the id space is 2^63-1).
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-type=asc ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-type=desc ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-prefix=item_ ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-radix=10 ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-offset=12345 ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-length=13 ...
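Taken together, the naming options above might compose an item id roughly as follows. This is an illustrative sketch only; the composition logic is an assumption, not Mongoose's code:

```javascript
// Illustrative sketch: composing an item id from the naming options.
// Assumption: the id is the prefix plus `length` random digits in the
// configured radix (the offset and ordering options are ignored here).
function itemId(prefix, radix, length) {
    var digits = "";
    while (digits.length < length) {
        digits += Math.floor(Math.random() * radix).toString(radix);
    }
    return prefix + digits.substring(0, length);
}
```

For example, `itemId("item_", 10, 13)` yields ids like `item_` followed by 13 decimal digits.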
The processed items info may be output with a specified delay. This may be useful to test storage replication using the "chain" step (see the scenario step types for details). The configured delay is in seconds.
java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-delay=60
java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-file=items.csv
java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-path=/bucketOrContainerOrDir
java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-path=/mnt/storage/%p\{16\;2\} ...
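The `%p{16;2}` pattern presumably expands to a random sub-path. Assuming the two parameters mean 16 possible names per level and a depth of 2 levels (an interpretation for illustration, not a confirmed specification), the expansion might look like:

```javascript
// Hypothetical expansion of a %p{width;depth}-style pattern: a random
// sub-path `depth` levels deep with `width` possible names per level
// (assumed semantics; works for width up to 36 in this sketch).
function randomSubPath(width, depth) {
    var parts = [];
    for (var d = 0; d < depth; d++) {
        parts.push(Math.floor(Math.random() * width).toString(36));
    }
    return parts.join("/");
}
```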
While creating/verifying/updating the data items, Mongoose can use different data input types. By default it uses a memory buffer filled with random data. Mongoose can also fill this data input buffer with data from an external file.
The uniform random data payload is used by default. It uses a configurable seed number to pre-generate some amount (4MB) of uniform random data. To use a custom seed, use the following option:
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-content-seed=5eed42b1gb00b5
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-content-file=<PATH_TO_CONTENT_FILE>
The concurrency metric has a different meaning for different storage driver types:
- File Storage Driver: a count of simultaneously open files being written/read/etc.
- Netty-based Storage Driver and its derivatives: a count of simultaneous active connections (channels).
Note:
The system's max open files limit may need to be increased to use high concurrency levels:
ulimit -n 1048576
The default concurrency limit is 1. Mongoose is able to use a custom concurrency limit:
Example:
java -jar <MONGOOSE_DIR>/mongoose.jar --load-limit-concurrency=1000000
The concurrency limit may be disabled (by setting its value to 0).
Example:
java -jar <MONGOOSE_DIR>/mongoose.jar --load-limit-concurrency=0
Note:
It may be useful to limit the rate in order to measure the actual concurrency while the concurrency itself is unlimited.
Recycle mode forces the step to recycle the I/O tasks, executing them again and again. It may be useful to read/update/append/overwrite the same objects/files multiple times.
Note:
The recycle feature is applicable to read and update load types only.
Example:
java -jar <MONGOOSE_DIR>/mongoose.jar --load-generator-recycle-enabled
For details see the Recycle Mode specification.
A test step is a unit of metrics reporting and test execution control.
For each test step:
- total metrics are calculated and reported
- limits are configured and controlled
By default, Mongoose generates a test step id for each new test step. The step id is used as the name of the output log files' parent directory. It may be useful to override the default step id with a descriptive one.
java -jar <MONGOOSE_DIR>/mongoose.jar --test-step-id=myTest1
A test step runs forever if its items input is infinite and no other limits are configured.
To make a test step process (CRUD) no more than 1000 items, for example:
java -jar <MONGOOSE_DIR>/mongoose.jar --test-step-limit-count=1000
It may be useful to limit the rate by a max number of operations per second. The rate limit value may be a real number, for example 0.01 (op/s).
java -jar <MONGOOSE_DIR>/mongoose.jar --load-rate-limit=1234.5
java -jar <MONGOOSE_DIR>/mongoose.jar --test-step-limit-size=123GB
java -jar <MONGOOSE_DIR>/mongoose.jar --test-step-limit-time=15m
Any test step configured with a valid items input should finish (at the latest) when all the items from the input are processed (copied/read/updated/deleted). This holds only if the test step is not configured to recycle the I/O tasks (recycle mode is disabled).
java -jar <MONGOOSE_DIR>/mongoose.jar --item-input-[file|path]=<INPUT_FILE_OR_PATH> ...
In the example above, the test step will finish when all items from the specified items file are processed.
By default, the standard output contains the color codes for better readability. To disable the standard output color codes use the following option:
java -jar <MONGOOSE_DIR>/mongoose.jar --output-color=false
The default time interval between the metric outputs is 10s. This value may be changed.
java -jar <MONGOOSE_DIR>/mongoose.jar --output-metrics-average-period=1m
By default each load step outputs the current metrics periodically to the console (as a table record) and into the log file. To disable the average metrics file output use the following option:
java -jar <MONGOOSE_DIR>/mongoose.jar --output-metrics-average-persist=false
By default the table header is displayed every 20 records. To change this number, use the following option:
java -jar <MONGOOSE_DIR>/mongoose.jar --output-metrics-average-table-header-period=50
By default each load step outputs the summary metrics at its end to the console and into the log file. To disable the summary metrics file output use the following option:
java -jar <MONGOOSE_DIR>/mongoose.jar --output-metrics-summary-persist=false
There's an ability to log info about every I/O operation executed against the storage. This kind of info is called an "I/O trace". To output the I/O trace records into the log file, specify the following option:
java -jar <MONGOOSE_DIR>/mongoose.jar --output-metrics-trace-persist
java -jar <MONGOOSE_DIR>/mongoose.jar --test-step-metrics-threshold=0.95
The "dry run" operation type: does everything except the actual storage I/O. May be useful to measure Mongoose's internal performance.
java -jar <MONGOOSE_DIR>/mongoose.jar --noop
The create load type is used by default. The behavior may differ depending on the other configuration parameters.
"Create" performs writing new items to a storage by default.
java -jar <MONGOOSE_DIR>/mongoose.jar --item-input-[file|path]=<INPUT_FILE_OR_PATH> --item-output-path=/bucketOrDir
Read operations don't perform content validation by default.
java -jar <MONGOOSE_DIR>/mongoose.jar --read ...
java -jar <MONGOOSE_DIR>/mongoose.jar --read --item-data-verify ...
java -jar mongoose.jar \
--read \
--item-data-ranges-random=1 \
--item-input-file=items.csv \
...
java -jar mongoose.jar \
--read \
--item-data-ranges-random=5 \
--item-input-file=items.csv \
...
Example: read the data items partially (from offset of 2KB to the end):
java -jar mongoose.jar \
--read \
--item-data-ranges-fixed=2KB- \
--item-input-file=items.csv \
...
Example: read the last 1234 bytes of the data items:
java -jar mongoose.jar \
--read \
--item-data-ranges-fixed=-1234 \
--item-input-file=items.csv \
...
Example: partially read the data items each in the range from 2KB to 5KB:
java -jar mongoose.jar \
--read \
--item-data-ranges-fixed=2KB-5KB \
--item-input-file=items.csv \
...
java -jar mongoose.jar \
--read \
--item-data-ranges-fixed=0-1KB,2KB-5KB,8KB- \
--item-input-file=items.csv \
...
To overwrite the data items, it's necessary to skip the byte ranges configuration for the "update" load type. It may also be useful to specify a different content source to overwrite with different data:
java -jar mongoose.jar \
--update \
--item-data-content-file=custom/content/source/file.data \
--item-input-file=items2overwrite.csv \
--item-output-file=items_overwritten.csv \
...
Instead of a custom content source file, it's also possible to use a custom content generation seed (hex):
java -jar mongoose.jar \
--update \
--item-data-content-seed=5eed42b1gb00b5 \
--item-input-file=items2overwrite.csv \
--item-output-file=items_overwritten.csv \
...
java -jar mongoose.jar \
--update \
--item-data-ranges-random=1 \
--item-input-file=items2update.csv \
--item-output-file=items_updated.csv \
...
Random ranges update example:
java -jar mongoose.jar \
--update \
--item-data-ranges-random=5 \
--item-input-file=items2update.csv \
--item-output-file=items_updated.csv \
...
java -jar mongoose.jar \
--update \
--item-data-ranges-fixed=2KB- \
--item-input-file=items2overwrite_tail2KBs.csv \
--item-output-file=items_with_overwritten_tails.csv \
...
Example: overwrite the last 1234 bytes of the data items:
java -jar mongoose.jar \
--update \
--item-data-ranges-fixed=-1234 \
--item-input-file=items2overwrite_tail2KBs.csv \
--item-output-file=items_with_overwritten_tails.csv \
...
Example: overwrite the data items in the range from 2KB to 5KB:
java -jar mongoose.jar \
--update \
--item-data-ranges-fixed=2KB-5KB \
--item-input-file=items2overwrite_range.csv \
--item-output-file=items_overwritten_in_the_middle.csv \
...
Example: append 16KB to the data items:
java -jar mongoose.jar \
--update \
--item-data-ranges-fixed=-16KB- \
--item-input-file=items2append_16KB_tails.csv \
--item-output-file=items_appended.csv \
...
java -jar mongoose.jar \
--update \
--item-data-ranges-fixed=0-1KB,2KB-5KB,8KB- \
--item-input-file=items2update.csv \
--item-output-file=items_updated.csv \
...
java -jar mongoose.jar \
--delete \
--item-input-file=items2delete.csv \
...
See the Scenarios Reference for details.
See the Scenarios DSL Reference for details.
Mongoose cannot run without a scenario, so it uses the default scenario implicitly if no scenario file is specified explicitly. The file containing the default scenario is located at scenario/default.json.
The default scenario contents:
Load.run();
java -jar <MONGOOSE_DIR>/mongoose.jar \
--test-scenario-file=<PATH_TO_SCENARIO_FILE>
The configuration values from the step's configuration override the default configuration and the CLI options:
var loadStepConfig = {
    "test": {
        "step": {
            "limit": {
                "time": "1m"
            }
        }
    }
};
Load
    .config(loadStepConfig)
    .run();
In the case above, it doesn't matter which test-step-limit-time CLI option value is specified: the value "1m" from the step configuration will override it.
The configuration values from the step's configuration are inherited by all child steps (and possibly overridden).
var loadStepConfig1 = {
    "test": {
        "step": {
            "limit": {
                "time": "1m"
            }
        }
    }
};
var loadStepConfig2 = {
    "test": {
        "step": {
            "limit": {
                "count": 100000
            }
        }
    }
};
var loadStep1 = Load.config(loadStepConfig1);
var loadStep2 = loadStep1.config(loadStepConfig2);
loadStep1.run();
loadStep2.run();
var preconditionLoadStepConfig = {
    "item": {
        "output": {
            "file": "items.csv"
        }
    }
    ...
};
var loadStepConfig = {
    "item": {
        "input": {
            "file": "items.csv"
        }
    }
    ...
};
Load
    .config(preconditionLoadStepConfig)
    .run();
Load
    .config(loadStepConfig)
    .run();
To run the scenario below, define either the ITEM_INPUT_FILE or the ITEM_INPUT_PATH environment variable, and also the ITEM_OUTPUT_PATH environment variable:
var CopyLoadUsingEnvVars = CreateLoad
    .config(
        {
            "item": {
                "input": {
                    "file": ITEM_INPUT_FILE,
                    "path": ITEM_INPUT_PATH
                },
                "output": {
                    "path": ITEM_OUTPUT_PATH
                }
            }
        }
    );
CopyLoadUsingEnvVars.run();
Shell commands may be useful, for example, to sleep between the steps. A blocking shell command example:
Command
.value("echo Hello world!")
.run();
Command
.value("ps alx | grep java")
.run();
var command1 = Command.value("echo Hello world!");
var command2 = Command.value("ps alx | grep java");
Parallel
.step(command1)
.step(command2)
.run();
See the Load Step documentation for details.
Executes the child steps in parallel:
var loadStep1 = Load.config(...);
var loadStep2 = Load.config(...);
Parallel
.step(loadStep1)
.step(loadStep2)
.run();
See the Parallel Step documentation for details.
For details see Weighted Load Reference.
For details see Chain Load Reference.
10. Storage Driver
Mongoose can work in a so-called distributed mode, which allows scaling out the load applied to a storage. In the distributed mode there's an instance controlling the distributed load execution progress. This instance is usually called the "controller" and should typically run on a dedicated host. The controller aggregates the results from the (usually remote) storage driver services, which perform the actual load on the storage.
- Start the storage driver service:
java -jar <MONGOOSE_DIR>/mongoose-storage-driver-service.jar
- Start the controller:
java -jar <MONGOOSE_DIR>/mongoose.jar \
--storage-driver-remote \
...
- Start the 1st storage driver service:
java -jar <MONGOOSE_DIR>/mongoose-storage-driver-service.jar \
--storage-driver-port=1099
- Start the 2nd storage driver service:
java -jar <MONGOOSE_DIR>/mongoose-storage-driver-service.jar \
--storage-driver-port=1100
- Start the controller:
java -jar <MONGOOSE_DIR>/mongoose.jar \
--storage-driver-remote \
--storage-driver-addrs=127.0.0.1:1099,127.0.0.1:1100 \
...
- Start the storage driver service on one host:
java -jar <MONGOOSE_DIR>/mongoose-storage-driver-service.jar
- Start the controller on another host:
java -jar <MONGOOSE_DIR>/mongoose.jar \
--storage-driver-remote \
--storage-driver-addrs=<DRIVER_IP_ADDR> \
...
- Start the storage driver service on each host using the following command:
java -jar <MONGOOSE_DIR>/mongoose-storage-driver-service.jar
- Start the controller on another host:
java -jar <MONGOOSE_DIR>/mongoose.jar \
--storage-driver-remote \
--storage-driver-addrs=<DRIVER1>,<DRIVER2>,... \
...
Users would prefer not to care whether some configuration parameters are left unspecified or the target storage is not fully prepared for the test: for example, a missing bucket (S3), subtenant (Atmos), target directory, etc. Mongoose tries to configure/create such things automatically on demand and caches them for further reuse by other I/O tasks.
Note:
A Mongoose test step builds up a kind of knowledge about the storage which may become stale. For example, Mongoose creates/checks the target bucket once and remembers the result. If the bucket is deleted by a third party during the test step, Mongoose will continue to consider the bucket existing despite the resulting failures.
If no authentication token is specified or exists, Mongoose tries to create one. This functionality is currently implemented for the Atmos and Swift storage drivers.
If no output path is specified or exists, Mongoose tries to create it (the destination directory/bucket/container). This functionality is currently implemented for the filesystem, S3 and Swift storage drivers.
Please refer to the storage driver's readme.
Mongoose uses round-robin to distribute the I/O tasks when multiple storage endpoints are used. If a connection fails, Mongoose tries to distribute the active connections equally among the endpoints.
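The round-robin endpoint selection can be sketched as follows. This is a minimal illustration, not Mongoose's actual implementation; the rebalancing-on-failure logic is omitted:

```javascript
// Minimal round-robin endpoint selector: each call to next() returns
// the subsequent endpoint, cycling through the list so that I/O tasks
// are spread evenly.
function roundRobin(endpoints) {
    var i = 0;
    return function next() {
        var endpoint = endpoints[i % endpoints.length];
        i++;
        return endpoint;
    };
}
```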
java -jar <MONGOOSE_DIR>/mongoose.jar \
--storage-net-ssl \
--storage-net-node-port=9021 \
...
Sometimes the test runs against the storage via the network and a storage endpoint may fail to respond to a connection. Mongoose should fail such an I/O task and carry on. There's an ability to set a response timeout which allows interrupting the I/O task and continuing the work.
java -jar <MONGOOSE_DIR>/mongoose.jar \
--storage-net-timeoutMillisec=100000 \
...
Mongoose automatically adapts the input and output buffer sizes depending on the step info. For example, for the create I/O type the input buffer size is set to the minimal value (4KB) and the output buffer size is set to the configured data item size (if any). If the read I/O type is used, the behavior is the opposite: a specific input buffer size and a minimal output buffer size. This improves the I/O performance significantly, but users may also set the buffer sizes manually.
Example: setting the input buffer to 100KB:
java -jar <MONGOOSE_DIR>/mongoose.jar \
--storage-net-rcvBuf=100KB \
...
Example: setting the output buffer to 10MB:
java -jar <MONGOOSE_DIR>/mongoose.jar \
--storage-net-sndBuf=10MB \
...
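The adaptive buffer sizing policy described above can be summarized with a sketch; the 4KB minimum and the create/read behavior come from the text, while the exact mapping is an assumption:

```javascript
// Sketch of the adaptive I/O buffer sizing policy.
// 4KB is the minimal buffer size mentioned in the text.
var MIN_BUF_SIZE = 4 * 1024;
function bufferSizes(ioType, itemSize) {
    if (ioType === "create") {
        // writing: minimal receive buffer, item-sized send buffer
        return { rcvBuf: MIN_BUF_SIZE, sndBuf: itemSize };
    }
    if (ioType === "read") {
        // reading: item-sized receive buffer, minimal send buffer
        return { rcvBuf: itemSize, sndBuf: MIN_BUF_SIZE };
    }
    // other I/O types: minimal buffers both ways (assumption)
    return { rcvBuf: MIN_BUF_SIZE, sndBuf: MIN_BUF_SIZE };
}
```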
Please refer to the corresponding storage driver's readme.