
v3.0 How To

Andrey Kurilov edited this page Feb 22, 2017 · 1 revision

Contents

  1. Reporting
  2. Load Types
    1. Write
      1. Create
      2. Create the data items with a fixed size
      3. Create the data items with a random size in a specified range
      4. Create the data items with a random size in a specified range and a biased size distribution
      5. Update
      6. Append
      7. Copy
    2. Read
      1. Enable Verification
    3. Delete
  3. Load Job Limit
    1. Limit by Count
    2. Limit by Time
    3. Limit by Rate (Throttling)
    4. Limit by Size
  4. Run Modes
    1. Standalone Mode
    2. Distributed Mode
      1. Load Server
      2. Load Client
    3. Storage Mock
    4. Web GUI
  5. Item Types
    1. Container
      1. Write the containers
      2. Read the containers with Data Items
      3. Delete the containers
    2. Data
  6. Cloud Storage API
    1. Amazon S3
    2. EMC Atmos
    3. OpenStack Swift
    4. EMC ECS
      1. S3
      2. Atmos
      3. Swift
  7. Filesystem Load
    1. Write to the custom directory
    2. Read from the custom directory
    3. Overwrite the files circularly
  8. Custom Content
    1. Text content
    2. Zero bytes content
  9. Circular Load
    1. Read
    2. Update
  10. Scenario
    1. Configure a Load Job
    2. Make a Precondition Load Job (don't persist the metrics)
    3. Sequential Load Jobs execution
    4. Parallel Load Jobs execution
    5. Reuse the Items for another Load Job
    6. Inherit the Load Job Container configuration
    7. Execute a Shell Command
    8. Start a Non-Blocking Shell Command Execution
    9. Sleep Between the Jobs
    10. Mixed Load
    11. Weighted Load
    12. Rampup
    13. Scenario Validation
    14. Execute a job For Each value from the list
    15. Execute the infinite jobs loop
    16. Execute the jobs 10 times
    17. Execute a job for a specified range of numbers
  11. Dynamic Configuration Values
    1. Custom HTTP headers
    2. Custom HTTP headers with Dynamic Values
    3. Filesystem Load: Dynamic Target Path
  12. Custom Items Naming
    1. Ascending names order
    2. Descending names order
    3. Names with decimal identifiers
    4. Names with prefixes
  13. SSL/TLS support
  14. Miscellaneous
    1. Docker integration
    2. Disable console output coloring

Reporting

A run report is a set of files which Mongoose produces in the directory <MONGOOSE_DIR>/log/<RUN_ID>. Starting with Mongoose 0.8, all the key log files (items.csv, perf.avg.csv, perf.trace.csv, and perf.sum.csv) are produced in pure CSV format, so any mature tool that supports CSV may be used to open and process the report components.

As an example, suppose a Mongoose run produced 10 data items of random size and we want to calculate the total size of the generated content. Open items.csv in any spreadsheet editor and select the third column, which contains the data item sizes; the total size appears in the status bar as the Sum value.
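On a machine without a spreadsheet editor the same total can be computed from the command line. A minimal sketch, assuming the third CSV column holds the item size (a two-line items.csv is fabricated here for illustration):

```shell
# fabricate a tiny items.csv (the real file is produced under log/<RUN_ID>/)
printf '%s\n' 'item0,0,1024,0/0' 'item1,0,2048,0/0' > items.csv
# sum the 3rd column (item sizes, in bytes)
awk -F',' '{ total += $3 } END { print total }' items.csv
# → 3072
```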

Load Types

Write

Example scenarios location: scenario/write/*.json

Create

Mongoose creates the items by default (when no load type is specified), so it's enough to run the default scenario:

java -jar <MONGOOSE_DIR>/mongoose.jar

Create the data items with a fixed size

java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=100

Create the data items with a random size in a specified range

java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=4KB-16KB

Create the data items with a random size in a specified range and a biased size distribution

java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-size=0-100MB,2.5 ...
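The trailing value (2.5 here) biases the size distribution. A sketch of the idea, assuming the bias acts as an exponent on the uniform random factor and thus skews the picks toward smaller sizes (an illustration of the effect, not Mongoose's exact formula):

```shell
# pick 5 random sizes in [0, 100MB] with bias 2.5:
# raising rand() (uniform in [0,1)) to a power > 1 skews the results toward small sizes
awk 'BEGIN {
    srand()
    min = 0; max = 100 * 1024 * 1024; bias = 2.5
    for (n = 0; n < 5; n++)
        print int(min + (max - min) * rand() ^ bias)
}'
```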

Update

In order to enable the update mode for the Write load type it's necessary to specify the number of random byte ranges.

Example scenarios location: scenario/partial/update-multiple-random-ranges.json

The example below updates the data items listed in the specified input file, using 10 random byte ranges per request:

java -jar <MONGOOSE_DIR>/mongoose.jar --update --item-data-ranges-random=10 --item-input-file=<PATH_TO_ITEM_LIST_CSV_FILE> ...

Append

In order to enable the append mode for the Write load type it's necessary to specify a fixed byte range with the start offset equal to the current size of the data items being updated.

Example scenarios location: scenario/partial/append.json

The example below appends 8KB to each of the data items:

java -jar <MONGOOSE_DIR>/mongoose.jar --update --item-data-ranges=-8KB --item-input-file=<PATH_TO_ITEM_LIST_CSV_FILE> ...

Copy

Example scenarios location: scenario/copy/*.json

The example below copies the items from the source container to the target container:

java -jar <MONGOOSE_DIR>/mongoose.jar [--item-output-container=<TARGET_CONTAINER>] --item-input-container=<SOURCE_CONTAINER> [--item-input-file=<PATH_TO_ITEMS_LIST_CSV_FILE>] ...

See the Mongoose Copy Mode functional specification for details.

Read

In order to use the Read load type it's necessary to set the "load.type" configuration parameter to "read".

Example scenarios location: scenario/read/*.json

Enable Verification

Example scenarios location: scenario/read/read-verify-updated.json

java -jar <MONGOOSE_DIR>/mongoose.jar --read --item-data-verify --item-output-container=<CONTAINER_WITH_ITEMS> ...

Delete

In order to use the Delete load type it's necessary to specify the "--delete" configuration parameter.

Example scenarios location: scenario/delete/*.json

java -jar <MONGOOSE_DIR>/mongoose.jar --delete --item-output-container=<CONTAINER_WITH_ITEMS> ...

Load Job Limit

Example scenarios location: scenario/limit/*.json

It's possible to limit the load jobs by any combination of the four possible limits.

Limit by Count

Example scenarios location: scenario/limit/by-count.json

Load with no more than N items:

java -jar <MONGOOSE_DIR>/mongoose.jar --load-limit-count=<N> ...

Limit by Time

Example scenarios location: scenario/limit/by-time.json

Perform a load job for no more than 1 hour:

java -jar <MONGOOSE_DIR>/mongoose.jar --load-limit-time=1h ...

Limit by Rate (Throttling)

Example scenarios location: scenario/limit/by-rate.json

Perform a load job at a rate of no more than 1234.5 items (and operations) per second:

java -jar <MONGOOSE_DIR>/mongoose.jar [--item-data-size=0] --load-limit-rate=1234.5 [--load-concurrency=1000] ...

Limit by Size

Example scenarios location: scenario/limit/by-size.json

Load data items with a total size of no more than 100GB:

java -jar <MONGOOSE_DIR>/mongoose.jar --load-limit-size=100GB ...

Run Modes

Standalone Mode

Mongoose runs in the standalone mode by default:

java -jar <MONGOOSE_DIR>/mongoose.jar --run-file=<PATH_TO_SCENARIO_FILE>

Distributed Mode

Example scenarios location: scenario/distributed/*.json

Load Server

java -jar <MONGOOSE_DIR>/mongoose-storage-driver-service.jar

Load Client

java -jar <MONGOOSE_DIR>/mongoose.jar --storage-driver-remote [--storage-driver-addrs=A,B,C,D] --run-file=<PATH_TO_SCENARIO_FILE>

Storage Mock

java -jar <MONGOOSE_DIR>/mongoose-storage-mock.jar

Web GUI

Not implemented yet

java -jar <MONGOOSE_DIR>/mongoose-gui.jar

Item Types

Container

In order to perform a load with container items it's necessary to set the "item.type" configuration parameter to "container".

Example scenarios location: scenario/container/*.json

Write the containers

Not implemented yet

Example scenarios location: scenario/container/write-containers.json

java -jar <MONGOOSE_DIR>/mongoose.jar --item-type=container ...

Read the containers with Data Items

Not implemented yet

Example scenarios location: scenario/container/read-containers-with-items.json

java -jar <MONGOOSE_DIR>/mongoose.jar --item-type=container --read --item-input-file=<PATH_TO_ITEMS_LIST_CSV_FILE> ...

Note that the total byte count and bytes per second (BW) metrics are calculated while reading the containers with data items. The size of a container is calculated as the sum of the sizes of the included data items.

Delete the containers

Example scenarios location: scenario/container/delete-containers.json

java -jar <MONGOOSE_DIR>/mongoose.jar --item-type=container --delete --item-input-file=<PATH_TO_ITEMS_LIST_CSV_FILE> ...

Data

The "data" item type is used by default.

Cloud Storage API

Amazon S3

Example scenarios location: scenario/ecs/write-s3.json

Note: the S3 API is used by default. In the case of the S3 API, specifying the container name means specifying the bucket to use.

java -jar <MONGOOSE_DIR>/mongoose.jar --storage-auth-id=<USER_ID> --storage-auth-secret=<SECRET> [--item-output-container=<TARGET_BUCKET>] --storage-node-addrs=10.20.30.40 --storage-port=8080

EMC Atmos

Example scenarios location: scenario/ecs/write-atmos.json

java -jar <MONGOOSE_DIR>/mongoose.jar --storage-auth-id=<USER_ID> [--storage-auth-token=<SUBTENANT>] --storage-auth-secret=<SECRET> --storage-node-addrs=10.20.30.40 --storage-port=8080 --storage-http-api=atmos

Note

The default value of "auth.id" configuration parameter (null) doesn't work in the case of Atmos API usage.

OpenStack Swift

Example scenarios location: scenario/ecs/write-swift.json

java -jar <MONGOOSE_DIR>/mongoose.jar --storage-auth-id=<USER_ID> [--storage-auth-token=<TOKEN>] --storage-auth-secret=<SECRET> [--item-output-container=<TARGET_CONTAINER>] --storage-node-addrs=10.20.30.40 --storage-port=8080 --storage-http-api=swift --storage-http-namespace=<NS>

Note

The default value of "storage.http.namespace" configuration parameter (null) doesn't work in the case of Swift API usage.

EMC ECS

Example scenarios location: scenario/ecs/*.json

S3

java -jar <MONGOOSE_DIR>/mongoose.jar --storage-auth-id=<USER_ID> --storage-auth-secret=<SECRET> [--item-output-container=<TARGET_BUCKET>] --storage-node-addrs=10.20.30.40,10.20.30.41,10.20.30.42 --storage-port=9020

Atmos

java -jar <MONGOOSE_DIR>/mongoose.jar --storage-auth-id=<USER_ID> [--storage-auth-token=<SUBTENANT>] --storage-auth-secret=<SECRET> --storage-node-addrs=10.20.30.40,10.20.30.41,10.20.30.42 --storage-port=9022

Swift

java -jar <MONGOOSE_DIR>/mongoose.jar --storage-auth-id=<USER_ID> [--storage-auth-token=<TOKEN>] --storage-auth-secret=<SECRET> [--item-output-container=<TARGET_CONTAINER>] --storage-node-addrs=10.20.30.40,10.20.30.41,10.20.30.42 --storage-port=9024 --storage-http-api=swift --storage-http-namespace=s3

Filesystem Load

In order to use the Filesystem load engine it's necessary to set the "storage.type" configuration parameter to "fs".

Example scenarios location: scenario/fs/*.json

Write to the custom directory

Example scenarios location: scenario/fs/write-to-custom-dir.json

java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-container=<PATH_TO_TARGET_DIR> --storage-type=fs

Read from the custom directory

Example scenarios location: scenario/fs/read-from-custom-dir.json

java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-container=<PATH_TO_TARGET_DIR> [<ITEM_SRC_FILE_OR_CONTAINER>] --read --storage-type=fs

Overwrite the files circularly

Example scenarios location: scenario/fs/overwrite-circularly.json

java -jar <MONGOOSE_DIR>/mongoose.jar --update --item-output-container=<PATH_TO_TARGET_DIR> [<ITEM_SRC_FILE_OR_CONTAINER>] --load-circular=true --storage-type=fs

Custom Content

A user may use a custom file as the content source for data generation and verification. The path to this file should be specified with the "--item-data-content-file" configuration parameter.

Example scenarios location: scenario/content/*.json

Note

The same content source should be used for writing the data items and for the subsequent reading in order to pass the data verification.

Text content

java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-content-file=./textexample ...

Zero bytes content

java -jar <MONGOOSE_DIR>/mongoose.jar --item-data-content-file=./zerobytes ...
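Such content source files can be prepared with standard tools. A sketch producing the two files referenced above (the 1 MiB and 768 KiB sizes are arbitrary choices):

```shell
# 1 MiB of zero bytes for the zero-bytes content source
dd if=/dev/zero of=./zerobytes bs=1024 count=1024 2> /dev/null
# printable text for the text content source
head -c 786432 /dev/urandom | base64 > ./textexample
ls -l ./zerobytes ./textexample
```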

Circular Load

In order to load with a fixed set of items "infinitely" (each item is written/read again and again) a user should set the "load.circular" configuration parameter to true.

Example scenarios location: scenario/circular/*.json

Read

java -jar <MONGOOSE_DIR>/mongoose.jar --read --item-output-container=<CONTAINER_WITH_ITEMS> [<ITEM_SRC_FILE_OR_CONTAINER>] --load-circular=true ...

Update

java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-container=<CONTAINER_WITH_ITEMS> [<ITEM_SRC_FILE_OR_CONTAINER>] --item-data-ranges=1 --load-circular=true ...

Scenario

Configure a Load Job

{
   "type" : "load",
   "config" : {
      // the configuration hierarchy goes here
   }
}

Make a Precondition Load Job (don't persist the metrics)

{
   "type" : "precondition",
   "config" : {
      // the configuration hierarchy goes here
   }
}

Sequential Load Jobs execution

{
   "type" : "sequential",
   "jobs" : [
      {
         "type" : "",
         ...
      }, {
         "type" : "",
         ...
      }
      ...
   ]
}

Parallel Load Jobs execution

{
   "type" : "parallel",
   "jobs" : [
      {
         "type" : "",
         ...
      }, {
         "type" : "",
         ...
      }
      ...
   ]
}

Reuse the Items for another Load Job

{
   "type" : "sequential",
   "jobs" : [
      {
         "type" : "precondition",
         "config" : {
            "item" : {
               "output" : {
                  "file" : "items.csv"
               }
            }
            ...
         }
      }, {
         "type" : "",
         "config" : {
            "item" : {
               "input" : {
                  "file" : "items.csv"
               }
            }
            ...
         }
      }
   ]
}

Inherit the Load Job Container configuration

{
   "type" : "sequential",
   "config" : {
      // the configuration specified here will be inherited by the container elements
   },
   "jobs" : [
      {
         "type" : "load",
         ...
      }
      ...
   ]
}

Execute a Shell Command

{
   "type" : "command",
   "value" : "killall -9 java"
}

Start a Non-Blocking Shell Command Execution

{
   "type" : "command",
   "value" : "find /",
   "blocking" : false
}

Sleep Between the Load Jobs

{
   "type" : "sequential",
   "config" : {
      // shared configuration values inherited by the children jobs
   },
   "jobs" : [
      {
         "type" : "load",
         "config" : {
            // specific configuration for the 1st load job
         }
      }, {
         "type" : "command",
         "value" : "sleep 5s"
      }, {
         "type" : "load",
         "config" : {
            // specific configuration for the 2nd load job
         }
      }
   ]
}

Mixed Load

Please refer to the example scenarios located at: scenario/mixed/*.json

Weighted Load

Please refer to the example scenarios located at: scenario/weighted/*.json

Scenario Validation

There is a JSON schema file in the distribution: scenario/scenario-schema.json. A user may automatically validate scenarios against this schema, which should help when writing one's own custom scenarios.
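Full validation requires a JSON Schema validator; as a quick first step a scenario can at least be checked to be well-formed JSON. A sketch, using a hypothetical my-scenario.json:

```shell
# write a minimal scenario and check that it parses as JSON
cat > my-scenario.json << 'EOF'
{ "type" : "load", "config" : { } }
EOF
python3 -m json.tool my-scenario.json > /dev/null && echo "well-formed"
# → well-formed
```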

Execute a Job For Each value from the list

{
   "type" : "for",
   "value" : "concurrency",
   "in" : [
      1, 10, 100, 1000, 10000, 100000
   ],
   "config" : {
      "load" : {
         "concurrency" : "${concurrency}"
      }
   },
   "jobs" : [
      {
            "type" : "load"
      }
   ]
}

Execute the infinite jobs loop

{
   "type" : "for",
   "jobs" : [
      {
            "type" : "load"
      }
   ]
}

Execute the jobs 10 times

{
    "type" : "for",
    "value" : 10,
    "jobs" : [
        {
            "type" : "load"
        }
    ]
}

Execute a job for a specified range of numbers

{
    "type" : "for",
    "value" : "i",
    "in" : "2.71828182846-3.1415926,0.1",
    "jobs" : [
        {
            "type" : "command",
            "value" : "echo ${i}"
        }
    ]
}
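The "in" string above encodes a start value, an end value, and a step. Its expansion can be sketched as:

```shell
# expand "2.71828182846-3.1415926,0.1": start 2.71828182846, end 3.1415926, step 0.1
awk 'BEGIN { for (i = 2.71828182846; i <= 3.1415926; i += 0.1) printf "%.4f\n", i }'
# → 2.7183 2.8183 2.9183 3.0183 3.1183
```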

Dynamic Configuration Values

Example scenarios location: scenario/dynamic/*.json

Custom HTTP Headers

Example scenarios location: scenario/dynamic/custom-http-headers.json

java -jar <MONGOOSE_DIR>/mongoose.jar --storage-http-headers-myOwnHeaderName=MyOwnHeaderValue

Custom HTTP Headers with dynamic values

Example scenarios location: scenario/dynamic/custom-http-headers-with-dynamic-values.json

java -jar <MONGOOSE_DIR>/mongoose.jar --storage-http-headers-myOwnHeaderName=MyOwnHeaderValue\ %d[0-1000]\ %f{###.##}[-2--1]\ %D{yyyy-MM-dd'T'HH:mm:ssZ}[1970/01/01-2016/01/01]
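For instance, the %d[0-1000] token above yields a random integer from the given closed range on each substitution. The effect can be emulated as a sketch (pattern semantics only, not Mongoose code):

```shell
# emulate %d[0-1000]: print 3 random integers from the closed range [0, 1000]
awk 'BEGIN { srand(); for (n = 0; n < 3; n++) print int(rand() * 1001) }'
```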

Filesystem Load: dynamic target path

Example scenarios location: scenario/dynamic/write-to-variable-dir.json

java -jar <MONGOOSE_DIR>/mongoose.jar --item-output-container=<PATH_TO_TARGET_DIR>/%p\{16\;2\} --storage-type=fs ... 

Custom Items Naming

Example scenarios location: scenario/naming/*.json

Ascending names order

java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-type=asc ...

Descending names order

java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-type=desc ...

Names with decimal identifiers

java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-radix=10 ...

Names with prefixes

java -jar <MONGOOSE_DIR>/mongoose.jar --item-naming-prefix=item_ ...

SSL/TLS Support

The feature is available since v2.1.0

Example scenarios location: scenario/ssl/*.json

java -jar <MONGOOSE_DIR>/mongoose.jar --run-file=<MONGOOSE_DIR>/scenario/ssl/write-single-item.json

or

java -jar <MONGOOSE_DIR>/mongoose.jar --storage-ssl --storage-port=9021 ...

Miscellaneous

Docker Integration

Please refer to the Mongoose Usage/Docker page.

Disable console output coloring

Open the conf/logging.json file in a text editor, go to line ~45, and in the "pattern" attribute value remove the leading "%highlight{" and the trailing "}" characters.
