v3.2 User Guide
- Configuration
1.1. Configuration Syntax
1.2. CLI Arguments Aliasing
1.3. Configuration Pattern Values
- Items
2.1. Item Types
2.1.1. Data Items
2.1.1.1. Fixed Size Data Items
2.1.1.1.1. Empty Data Items
2.1.1.1.2. Small Data Items (1B-100KB)
2.1.1.1.3. Intermediate Size Data Items (100KB-10MB)
2.1.1.1.4. Big Data Items (10MB-100MB)
2.1.1.1.5. Very Big Data Items (100MB-10GB)
2.1.1.1.6. Huge Data Items (10GB-1TB)
2.1.1.1.7. Ultimate Data Items (>=1TB)
2.1.1.2. Random Size Data Items
2.1.1.3. Biased Random Size Data Items
2.1.2. Path Items
2.1.3. Token Items
2.2. Items Input
2.2.1. Items Input File
2.2.2. Items Path Listing Input
2.2.3. New Items Input
2.2.3.1. Random Item Ids
2.2.3.2. Ascending Item Ids
2.2.3.3. Descending Item Ids
2.2.3.4. Items Id Prefix
2.2.3.5. Items Id Radix
2.2.3.6. Items Id Offset
2.2.3.7. Items Id Length
2.3. Items Output
2.3.1. Items Output Delay
2.3.2. Items Output File
2.3.3. Items Destination Path
2.3.3.1. Constant Items Destination Path
2.3.3.2. Pattern Items Destination Path
- Content
3.1. Uniform Random Data Payload
3.2. Payload From the External File
- Concurrency
4.1. Default Concurrency Level (1)
4.2. Small Concurrency Level (2-10)
4.3. Medium Concurrency Level (11-100)
4.4. High Concurrency Level (101-1K)
4.5. Very High Concurrency Level (1K-10K)
4.6. Huge Concurrency Level (10K-100K)
4.7. Ultimate Concurrency Level (100K-1M)
- Circularity
- Load Jobs
6.1. Load Jobs Naming
6.2. Load Jobs Limitation
6.2.1. Load Jobs Are Infinite by Default
6.2.2. Limit Load Job by Processed Item Count
6.2.3. Limit Load Job by Rate
6.2.4. Limit Load Job by Processed Data Size
6.2.5. Limit Load Job by Time
6.2.6. Limit Load Job by End of Items Input
- Metrics Reporting
7.1. Metrics Periodic Reporting
7.2. Metrics Reporting is Suppressed for the Precondition Jobs
7.3. Metrics Reporting Triggered by Load Threshold
7.4. I/O Traces Reporting
- Load Types
8.1. Noop
8.2. Create
8.2.1. Create New Items
8.2.2. Copy Mode
8.3. Read
8.3.1. Read With Disabled Validation
8.3.2. Read With Enabled Validation
8.3.3. Partial Read
8.3.3.1. Random Byte Ranges Read
8.3.3.1.1. Single Random Byte Range Read
8.3.3.1.2. Multiple Random Byte Ranges Read
8.3.3.2. Fixed Byte Ranges Read
8.3.3.2.1. Read From offset of N bytes to the end
8.3.3.2.2. Read Last N bytes
8.3.3.2.3. Read Bytes from N1 to N2
8.3.3.2.4. Read Multiple Fixed Ranges
8.4. Update
8.4.1. Update by Overwrite
8.4.2. Random Ranges Update
8.4.2.1. Single Random Range Update
8.4.2.2. Multiple Random Ranges Update
8.4.3. Fixed Ranges Update
8.4.3.1. Overwrite from the offset of N bytes to the end
8.4.3.2. Overwrite Last N bytes
8.4.3.3. Overwrite Bytes from N1 to N2
8.4.3.4. Append
8.4.3.5. Multiple Fixed Ranges Update
8.5. Delete
- Scenarios
9.1. Scenarios Syntax
9.2. Default Scenario
9.3. Custom Scenario File
9.4. Job Configuration in the Scenario
9.4.1. Override Default Configuration in the Scenario
9.4.2. Job Configuration Inheritance
9.4.3. Reusing The Items in the Scenario
9.4.4. Environment Values Substitution in the Scenario
9.5. Scenario Job Types
9.5.1. Shell Command Job
9.5.1.1. Blocking Shell Command Job
9.5.1.2. Non-blocking Shell Command Job
9.5.2. Load Job
9.5.3. Precondition Load Job
9.5.4. Parallel Job
9.5.5. Sequential Job
9.5.6. Loop Job
9.5.6.1. Loop by Count
9.5.6.2. Loop by Range
9.5.6.3. Loop by Sequence
9.5.6.4. Infinite Loop
9.5.7. Mixed Load Job
9.5.7.1. Separate Configuration in the Mixed Load Job
9.5.7.2. Weighted Load Job
9.5.8. Chain Load Job
9.5.8.1. Separate Configuration in the Chain Load Job
9.5.8.2. Delay Between Operations in the Chain Load Job
- Storage Driver
10.1. Distributed Storage Drivers
10.1.1. Single Local Separate Storage Driver Service
10.1.2. Many Local Separate Storage Driver Services (at different ports)
10.1.3. Single Remote Storage Driver Service
10.1.4. Many Remote Storage Driver Services
10.1.5. Large Count of Remote Storage Driver Services
10.2. Preparing the Storage
10.2.1. Auth Token Precondition Hook
10.2.2. Destination Path Precondition Hook
10.3. Filesystem Storage Driver
10.4. Network Storage Driver
10.4.1. Node Balancing
10.4.2. SSL/TLS
10.4.3. Connection Timeout
10.4.4. I/O Buffer Size Adjustment for Optimal Performance
10.4.5. HTTP Storage Driver
10.4.5.2. Atmos
10.4.5.2.1. Authentication
10.4.5.2.2. Filesystem access
10.4.5.3. S3
10.4.5.3.1. Authentication
10.4.5.3.2. Filesystem access
10.4.5.3.3. Versioning
10.4.5.3.4. Multipart Upload
10.4.5.4. Swift
10.4.5.4.1. Authentication
10.4.5.4.2. Versioning
10.4.5.4.3. Create Dynamic Large Objects
TODO
TODO
Dynamic HTTP headers with generated values:
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-net-http-headers=myOwnHeaderName:MyOwnHeaderValue\ %d[0-1000]\ %f{###.##}[-2--1]\ %D{yyyy-MM-dd'T'HH:mm:ssZ}[1970/01/01-2016/01/01]
Or "variable" files output path:
java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-path=/mnt/storage/%p\{16\;2\} --storage-type=fs ...
A storage may be loaded using items and some kind of operation (CRUD). The only property every item has is a mutable name.
Mongoose supports different item types:
- A data (object, file) item
- A path (directory, bucket, container) item
- A token item
The data item type is used by default.
A fixed data item size is used by default.
Empty data items example:
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=0
Small data item (1 byte) example:
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=1
The default data item size is 1MB. Custom size examples:
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=100KB
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=10MB
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=1GB
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=1TB
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=1PB
Random size example (from 5MB to 15MB):
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=5MB-15MB
Biased random size example (the bias value follows the size range after the comma):
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=0-100MB,0.2
Path items example:
java -jar <MONGOOSE_DIR>/mongoose.jar --item-type=path
Token items example:
java -jar <MONGOOSE_DIR>/mongoose.jar --item-type=token
Items input is a source of the items which should be used to perform the operations (create/read/etc). The items input may be a file or a path which should be listed.
java -jar <MONGOOSE_DIR>/mongoose.jar --item-input-file=<PATH_TO_ITEMS_FILE> ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-input-path=/bucket1 ...
Random item ids are used by default. The collision probability is negligible (the id space size is 2^63 - 1).
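Random ids are the default, so no option is required; assuming the naming type value follows the same pattern as the "asc"/"desc" values below, it may also be set explicitly:
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-type=random ...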
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-type=asc ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-type=desc ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-prefix=item_ ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-radix=10 ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-offset=12345 ...
java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-length=13 ...
The info about the processed items may be output with a specified delay. This may be useful for testing storage replication using the "chain" job (see the scenario job types for details). The delay is configured in seconds.
java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-delay=60
Items output file example:
java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-file=items.csv
Constant items destination path example:
java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-path=/bucketOrContainerOrDir
Pattern items destination path example:
java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-path=/mnt/storage/%p\{16\;2\} ...
While creating/verifying/updating the data items, Mongoose is able to use different content sources. By default it uses a memory buffer filled with random data. Mongoose is also able to fill this content source buffer with data from any external file.
The uniform random data payload is used by default. To use a payload from an external file:
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-content-file=<PATH_TO_CONTENT_FILE>
A concurrency level of 1 is used by default.
java -jar <MONGOOSE_DIR>/mongoose.jar --load-concurrency=10
java -jar <MONGOOSE_DIR>/mongoose.jar --load-concurrency=100
java -jar <MONGOOSE_DIR>/mongoose.jar --load-concurrency=1000
Note:
The system's max open files limit may need to be increased to use high concurrency levels:
ulimit -n 10000
Example:
java -jar <MONGOOSE_DIR>/mongoose.jar --load-concurrency=10000
Note:
The system's max open files limit may need to be increased to use high concurrency levels:
ulimit -n 100000
Example:
java -jar <MONGOOSE_DIR>/mongoose.jar --load-concurrency=100000
Note:
The system's max open files limit may need to be increased to use high concurrency levels:
ulimit -n 1048576
Example:
java -jar <MONGOOSE_DIR>/mongoose.jar --load-concurrency=1000000
"Circularity" forces the load job to recycle the I/O tasks executing them again and again. It may be useful to perform read/update/append the objects/files multiple times each.
Note:
The circularity feature is applicable to read and update load types only.
Example:
java -jar <MONGOOSE_DIR>/mongoose.jar --load-circular
A load job is a unit of metrics reporting and test execution control.
For each load job:
- total metrics are calculated and reported
- limits are configured and controlled
A load job may be considered one step of the test.
By default, Mongoose generates a load job name containing a timestamp. The load job name is used as the name of the output log files' parent directory. It may be useful to override the default load job name with a descriptive one.
java -jar <MONGOOSE_DIR>/mongoose.jar --load-job-name=myTest1
A load job runs indefinitely if its items input is infinite and no limits are configured.
To make a load job process (CRUD) no more than 1000 items, for example:
java -jar <MONGOOSE_DIR>/mongoose.jar --load-limit-count=1000
It may be useful to limit the load job's rate by a max number of operations per second. The rate limit value may be a real number, for example 0.01 (op/s).
java -jar <MONGOOSE_DIR>/mongoose.jar --load-limit-rate=1234.5
To limit the load job by the processed data size, e.g. by 123GB:
java -jar <MONGOOSE_DIR>/mongoose.jar --load-limit-size=123GB
To limit the load job by time, e.g. by 15 minutes:
java -jar <MONGOOSE_DIR>/mongoose.jar --load-limit-time=15m
Any load job configured with a valid items input should finish, at the latest, when all the items obtained from the input have been processed (copied/read/updated/deleted). This holds only if the load job is not configured to recycle the I/O tasks (the circularity feature is disabled).
java -jar <MONGOOSE_DIR>/mongoose.jar --item-input-[file|path]=<INPUT_FILE_OR_PATH> ...
In the example above, the load job will finish when all items from the specified items file are processed.
The default time interval between the metrics outputs is 10s. This value may be changed:
java -jar <MONGOOSE_DIR>/mongoose.jar --load-metrics-period=1m
Since a load job is like a test step, there may be precondition jobs which don't produce performance results but perform some necessary work prior to the test execution.
java -jar <MONGOOSE_DIR>/mongoose.jar --load-metrics-precondition
Metrics reporting triggered by a load threshold example:
java -jar <MONGOOSE_DIR>/mongoose.jar --load-metrics-threshold=0.95
There's an ability to log the info about every I/O operation executed against a storage. This kind of info is called an "I/O trace". Edit the config/logging.json configuration file and find the following section (near line 324):
{
  "name": "ioTraceFile",
  "type": "loadJobFile",
  "fileName": "io.trace.csv",
  "PatternLayout": {
    "header": "StorageNode,ItemPath,IoTypeCode,StatusCode,ReqTimeStart[us],Duration[us],RespLatency[us],DataLatency[us],TransferSize\n",
    "pattern": "%m"
  },
  "Filters": {
    "Filter": [
      {
        "type": "MarkerFilter",
        "marker": "ioTrace",
        "onMatch": "NEUTRAL",
        "onMismatch": "DENY"
      },
      {
        "type": "ThresholdFilter",
        "level": "INFO",
        "onMatch": "ACCEPT",
        "onMismatch": "DENY"
      }
    ]
  }
}
Find the "ThresholdFilter" subsection and change its level value from "INFO" to "DEBUG".
The "dry run" operation type. Does everything except actual storage I/O. May be useful to measure the Mongoose's internal performance.
java -jar <MONGOOSE_DIR>/mongoose.jar --noop
The create load type is used by default. The behavior may differ depending on other configuration parameters.
"Create" performs writing new items to a storage by default.
Copy mode example (the items obtained from the input are copied to the specified output path):
java -jar <MONGOOSE_DIR>/mongoose.jar --item-input-[file|path]=<INPUT_FILE_OR_PATH> --item-output-path=/bucketOrDir
Read load jobs don't perform content validation by default.
java -jar <MONGOOSE_DIR>/mongoose.jar --read ...
java -jar <MONGOOSE_DIR>/mongoose.jar --read --item-data-verify ...
Example: read a single random byte range per data item:
java -jar mongoose.jar
--read
--item-data-ranges-random=1
--item-input-file=items.csv
...
Example: read multiple (5) random byte ranges per data item:
java -jar mongoose.jar
--read
--item-data-ranges-random=5
--item-input-file=items.csv
...
Example: read the data items partially (from offset of 2KB to the end):
java -jar mongoose.jar
--read
--item-data-ranges-fixed=2KB-
--item-input-file=items.csv
...
Example: read the last 1234 bytes of the data items:
java -jar mongoose.jar
--read
--item-data-ranges-fixed=-1234
--item-input-file=items.csv
...
Example: partially read the data items each in the range from 2KB to 5KB:
java -jar mongoose.jar
--read
--item-data-ranges-fixed=2KB-5KB
--item-input-file=items.csv
...
Example: read multiple fixed byte ranges per data item:
java -jar mongoose.jar
--read
--item-data-ranges-fixed=0-1KB,2KB-5KB,8KB-
--item-input-file=items.csv
...
TODO
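A minimal overwrite sketch, assuming that --update with no byte range options overwrites each data item entirely (the file names are illustrative):
java -jar mongoose.jar
--update
--item-input-file=items2overwrite.csv
--item-output-file=items_overwritten.csv
...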
Example: update a single random range per data item:
java -jar mongoose.jar
--update
--item-data-ranges-random=1
--item-input-file=items2update.csv
--item-output-file=items_updated.csv
...
Example: update multiple (5) random ranges per data item:
java -jar mongoose.jar
--update
--item-data-ranges-random=5
--item-input-file=items2update.csv
--item-output-file=items_updated.csv
...
Example: overwrite the data items from the offset of 2KB to the end:
java -jar mongoose.jar
--update
--item-data-ranges-fixed=2KB-
--item-input-file=items2overwrite_tail2KBs.csv
--item-output-file=items_with_overwritten_tails.csv
...
Example: overwrite the last 1234 bytes of the data items:
java -jar mongoose.jar
--update
--item-data-ranges-fixed=-1234
--item-input-file=items2overwrite_tail2KBs.csv
--item-output-file=items_with_overwritten_tails.csv
...
Example: overwrite the data items in the range from 2KB to 5KB:
java -jar mongoose.jar
--update
--item-data-ranges-fixed=2KB-5KB
--item-input-file=items2overwrite_range.csv
--item-output-file=items_overwritten_in_the_middle.csv
...
Example: append 16KB to the data items:
java -jar mongoose.jar
--update
--item-data-ranges-fixed=-16KB-
--item-input-file=items2append_16KB_tails.csv
--item-output-file=items_appended.csv
...
Example: update multiple fixed byte ranges per data item:
java -jar mongoose.jar
--update
--item-data-ranges-fixed=0-1KB,2KB-5KB,8KB-
--item-input-file=items2update.csv
--item-output-file=items_updated.csv
...
There is a JSON schema file in the distribution: scenario/scenario-schema.json. A user may automatically validate scenarios against this schema, which should help to write one's own custom scenario correctly.
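For example, using the third-party ajv-cli JSON Schema validator (any other JSON Schema validator works as well):
npm install -g ajv-cli
ajv validate -s <MONGOOSE_DIR>/scenario/scenario-schema.json -d myCustomScenario.json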
By default, Mongoose tries to execute the so-called default scenario file (located at <MONGOOSE_DIR>/scenario/default.json). This scenario contains only a single load job.
java -jar <MONGOOSE_DIR>/mongoose.jar --scenario-file=myCustomScenario.json
The default configuration may be overridden for a particular job in the scenario:
{
  "type" : "load",
  "config" : {
    // the configuration hierarchy goes here
  }
}
{
  "type" : "sequential",
  "config" : {
    // the configuration specified here will be inherited by the child jobs
  },
  "jobs" : [
    {
      "type" : "load",
      ...
    }
    ...
  ]
}
Example: the first job writes the info about the processed items to a file which is then reused as the items input of the next job:
{
  "type" : "sequential",
  "jobs" : [
    {
      "type" : "precondition",
      "config" : {
        "item" : {
          "output" : {
            "file" : "items.csv"
          }
        }
        ...
      }
    }, {
      "type" : "",
      "config" : {
        "item" : {
          "input" : {
            "file" : "items.csv"
          }
        }
        ...
      }
    }
  ]
}
TODO
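A minimal sketch, assuming the ${...} placeholders in a scenario are substituted with the values of the corresponding environment variables (the ITEMS_FILE variable name below is illustrative):
{
  "type" : "load",
  "config" : {
    "item" : {
      "input" : {
        "file" : "${ITEMS_FILE}"
      }
    }
  }
}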
For example, sleep between the jobs using a blocking shell command:
{
  "type" : "sequential",
  "config" : {
    // shared configuration values inherited by the child jobs
  },
  "jobs" : [
    {
      "type" : "load",
      "config" : {
        // specific configuration for the 1st load job
      }
    }, {
      "type" : "command",
      "value" : "sleep 5s"
    }, {
      "type" : "load",
      "config" : {
        // specific configuration for the 2nd load job
      }
    }
  ]
}
By default a shell command job blocks the scenario execution until the command finishes; it may be configured as non-blocking:
{
  "type" : "command",
  "value" : "find /",
  "blocking" : false
}
TODO
A precondition load job performs the configured load, but its metrics reporting is suppressed:
{
  "type" : "precondition",
  "config" : {
    // the configuration hierarchy goes here
  }
}
A parallel job executes its child jobs simultaneously:
{
  "type" : "parallel",
  "jobs" : [
    {
      "type" : "",
      ...
    }, {
      "type" : "",
      ...
    }
    ...
  ]
}
A sequential job executes its child jobs one after another:
{
  "type" : "sequential",
  "jobs" : [
    {
      "type" : "",
      ...
    }, {
      "type" : "",
      ...
    }
    ...
  ]
}
Loop by count example (execute the child load job 10 times):
{
  "type" : "for",
  "value" : 10,
  "jobs" : [
    {
      "type" : "load"
    }
  ]
}
Loop by range example (the loop variable "i" goes from 2.71828182846 to 3.1415926 with a step of 0.1):
{
  "type" : "for",
  "value" : "i",
  "in" : "2.71828182846-3.1415926,0.1",
  "jobs" : [
    {
      "type" : "command",
      "value" : "echo ${i}"
    }
  ]
}
Loop by sequence example (execute a load job for each of the listed concurrency levels):
{
  "type" : "for",
  "value" : "concurrency",
  "in" : [
    1, 10, 100, 1000, 10000, 100000
  ],
  "config" : {
    "load" : {
      "concurrency" : "${concurrency}"
    }
  },
  "jobs" : [
    {
      "type" : "load"
    }
  ]
}
Infinite loop example (no loop value is specified):
{
  "type" : "for",
  "jobs" : [
    {
      "type" : "load"
    }
  ]
}
For details, see the mixed and weighted load specification.
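A rough sketch of a weighted mixed load job, assuming a "mixed" job type carrying a list of per-operation configs and a "weights" list defining the operations ratio (the exact field names are defined by the specification referenced above):
{
  "type" : "mixed",
  "weights" : [
    20, 80
  ],
  "config" : [
    {
      "load" : {
        "type" : "create"
      }
    }, {
      "load" : {
        "type" : "read"
      }
    }
  ]
}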
TODO
TODO
For details, see the chain operations specification.
The item output delay in the example below is 1 minute. The minimum configurable delay is 1 second.
{
  "type" : "chain",
  "config" : [
    {
      "item" : {
        "output" : {
          "delay" : "1m",
          "path" : "/default"
        }
      },
      "load" : {
        "metrics" : {
          "trace" : {
            "itemInfo" : true,
            "reqTimeStart" : true,
            "duration" : true
          }
        }
      }
    },
    {
      "load" : {
        "type" : "read"
      }
    }
  ]
}
Example: a chain load job with a separate set of storage endpoints configured for each operation in the chain:
{
  "type" : "chain",
  "config" : [
    {
      "item" : {
        "output" : {
          "delay" : "1m",
          "path" : "/default"
        }
      },
      "storage" : {
        "net" : {
          "node" : {
            "addrs" : [
              "10.123.45.67",
              "10.123.45.68",
              "10.123.45.69",
              "10.123.45.70"
            ]
          }
        }
      }
    },
    {
      "load" : {
        "type" : "read"
      },
      "storage" : {
        "net" : {
          "node" : {
            "addrs" : [
              "10.234.56.78",
              "10.234.56.79",
              "10.234.56.80",
              "10.244.56.81"
            ]
          }
        }
      }
    }
  ]
}
Currently the storage driver supports some cloud storages as well as the filesystem.
Mongoose is able to work in the so-called distributed mode, which allows scaling out the load performed on a storage. In the distributed mode there's an instance controlling the distributed load execution progress. This instance is usually called the "controller" and should usually run on a dedicated host. The controller aggregates the results from the (usually remote) storage driver services which perform the actual load on the storage.
How to:
- Start the storage driver service:
java -jar <MONGOOSE_DIR>/mongoose-storage-driver-service.jar
- Start the controller:
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-driver-remote ...
- Start the 1st storage driver service:
java -jar <MONGOOSE_DIR>/mongoose-storage-driver-service.jar --storage-driver-port=1099
- Start the 2nd storage driver service:
java -jar <MONGOOSE_DIR>/mongoose-storage-driver-service.jar --storage-driver-port=1100
- Start the controller:
java -jar <MONGOOSE_DIR>/mongoose.jar
--storage-driver-remote
--storage-driver-addrs=127.0.0.1:1099,127.0.0.1:1100
...
- Start the storage driver service on one host:
java -jar <MONGOOSE_DIR>/mongoose-storage-driver-service.jar
- Start the controller on another host:
java -jar <MONGOOSE_DIR>/mongoose.jar
--storage-driver-remote
--storage-driver-addrs=<DRIVER_IP_ADDR>
...
- Start the storage driver service on each host using the following command:
java -jar <MONGOOSE_DIR>/mongoose-storage-driver-service.jar
- Start the controller on another host:
java -jar <MONGOOSE_DIR>/mongoose.jar
--storage-driver-remote
--storage-driver-addrs=<DRIVER1>,<DRIVER2>,...
TODO
If no authentication token is specified, Mongoose tries to create one. This functionality is currently implemented for the Atmos and Swift storage drivers.
If no output path is specified, Mongoose tries to create it (i.e. to create the destination directory/bucket/container). This functionality is currently implemented for the filesystem, S3 and Swift storage drivers.
Filesystem storage driver example:
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-type=fs ...
Mongoose distributes the I/O tasks in a round-robin manner if multiple storage endpoints are used. If a connection fails, Mongoose will try to distribute the active connections equally among the endpoints.
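For example, balancing the load across several storage endpoints (the addresses are illustrative):
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-net-node-addrs=10.123.45.67,10.123.45.68,10.123.45.69 ...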
SSL/TLS example:
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-net-ssl --storage-net-node-port=9021 ...
Sometimes the test is run against a storage via the network, and a storage endpoint may fail to respond to a connection. Mongoose should fail such an I/O task and continue. There's an ability to set a response timeout which allows interrupting the I/O task and continuing the work.
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-net-timeoutMillisec=100000 ...
Mongoose automatically adapts the input and output buffer sizes depending on the load job info. For example, for the create I/O type the input buffer size is set to the minimal value (4KB) and the output buffer size is set to the configured data item size (if any). If the read I/O type is used, the behavior is the opposite: an item-size-specific input buffer and a minimal output buffer. This improves the I/O performance significantly, but users may also set the buffer sizes manually.
Example: setting the input buffer to 100KB:
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-net-rcvBuf=100KB ...
Example: setting the output buffer to 10MB:
java -jar <MONGOOSE_DIR>/mongoose.jar --storage-net-sndBuf=10MB ...
Note:
- Input/output paths are not used unless filesystem access is enabled.
An Atmos storage uses signed requests to authenticate each request. To sign the requests correctly, Mongoose requires a correct auth id and secret, and the local system time must not differ from the storage system time by more than 15 minutes.
Note:
- The default value of the "auth-id" configuration parameter (null) doesn't work in the case of the Atmos API.
- Mongoose will try to create the subtenant if the subtenant value is not specified.
java -jar <MONGOOSE_DIR>/mongoose.jar
--storage-auth-id=<USER_ID>
--storage-auth-secret=<SECRET>
[--storage-auth-token=<SUBTENANT>]
--storage-net-node-port=8080
--storage-net-http-api=atmos
...
TODO
Note:
The S3 API is used by default. Specifying the input/output path in the case of the S3 API means specifying the bucket to use.
An S3 storage uses signed requests to authenticate each request. To sign the requests correctly, Mongoose requires a correct auth id and secret, and the local system time must not differ from the storage system time by more than 15 minutes.
java -jar <MONGOOSE_DIR>/mongoose.jar
--storage-auth-id=<USER_ID>
--storage-auth-secret=<SECRET>
--storage-net-node-port=<PORT>
...
TODO
TODO
The following example uploads 1GB objects using 100MB parts:
java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=1GB --item-data-ranges-threshold=100MB ...
Note:
Specifying the input/output paths in the case of Swift means specifying the input/output containers.
Mongoose uses the v1 authentication approach: the token is generated once and then used for the requests. If an existing authentication token is not specified, Mongoose will try to create one.
java -jar <MONGOOSE_DIR>/mongoose.jar
--storage-auth-id=<USER_ID>
--storage-auth-secret=<SECRET>
[--storage-auth-token=<AUTH_TOKEN>]
--storage-net-node-port=8080
--storage-net-http-api=swift
...
TODO
Example: create dynamic large objects (1GB objects uploaded as 100MB segments):
java -jar <MONGOOSE_DIR>/mongoose.jar
--item-data-size=1GB
--item-data-ranges-threshold=100MB
--storage-net-http-api=swift ...