
feat: #376 annotationless support perception class #377

Merged · 49 commits · Mar 21, 2024

Commits
8c3a2ad
feat: add classification
hayato-m126 Mar 7, 2024
fc1510d
feat: update test
hayato-m126 Mar 8, 2024
5941602
feat: update scenario class
hayato-m126 Mar 8, 2024
4cace37
feat: deviation class container
hayato-m126 Mar 8, 2024
fc7cb82
feat: scenario version 0.2.0
hayato-m126 Mar 11, 2024
1de0456
fix: bug
hayato-m126 Mar 11, 2024
e969dad
chore: test create literal from tuple
hayato-m126 Mar 11, 2024
0d698b9
fix: output frame
hayato-m126 Mar 11, 2024
b056bf2
fix: overwrite diag
hayato-m126 Mar 11, 2024
2d0cb17
chore: update set undefined status threshold
hayato-m126 Mar 11, 2024
4d5c232
feat: write final metrics
hayato-m126 Mar 11, 2024
3d9c81d
feat: override condition
hayato-m126 Mar 12, 2024
e264312
fix: set_threshold
hayato-m126 Mar 13, 2024
df43620
fix: unit test
hayato-m126 Mar 13, 2024
ecab58d
chore: update test
hayato-m126 Mar 13, 2024
c882d10
chore: disable set pass_range using launch argument
hayato-m126 Mar 13, 2024
0f79946
fix: pre-commit
hayato-m126 Mar 13, 2024
6f16d92
docs: update image
hayato-m126 Mar 13, 2024
bdf6dd3
docs: update document
hayato-m126 Mar 13, 2024
d00582c
fix: pre-commit
hayato-m126 Mar 14, 2024
5b6057b
feat: update package.xml
hayato-m126 Mar 14, 2024
54d1354
feat: update dependency.repos
hayato-m126 Mar 14, 2024
75e2986
chore: debug pass_range
hayato-m126 Mar 14, 2024
1bb0cf8
feat: update sample scenario
hayato-m126 Mar 14, 2024
fc464d4
docs: update English document
hayato-m126 Mar 14, 2024
57b280f
fix: launch arg type error
hayato-m126 Mar 15, 2024
7fdf9f9
feat: cli support dict type argument
hayato-m126 Mar 15, 2024
6e0b54a
feat: update launch argument
hayato-m126 Mar 15, 2024
4eb0e13
feat: Only items for which evaluation conditions are set are subject …
hayato-m126 Mar 15, 2024
55b581a
fix: pre-commit
hayato-m126 Mar 15, 2024
4b63e42
feat: update set condition from result.jsonl
hayato-m126 Mar 15, 2024
3943794
Update docs/use_case/annotationless_perception.en.md
hayato-m126 Mar 15, 2024
7ed7278
fix: typo
hayato-m126 Mar 15, 2024
bff3bad
Update docs/use_case/annotationless_perception.en.md
hayato-m126 Mar 18, 2024
002d455
Update docs/use_case/annotationless_perception.en.md
hayato-m126 Mar 18, 2024
678d778
Update docs/use_case/annotationless_perception.ja.md
hayato-m126 Mar 18, 2024
5b0cb06
Update docs/use_case/annotationless_perception.ja.md
hayato-m126 Mar 18, 2024
b0f483d
chore: delete unused method
hayato-m126 Mar 18, 2024
e7711c7
chore: update test
hayato-m126 Mar 18, 2024
dbea8be
fix: test
hayato-m126 Mar 18, 2024
d841689
feat: update log message
hayato-m126 Mar 18, 2024
53c6754
fix: pre-commit
hayato-m126 Mar 19, 2024
82e2953
feat: update pass fail logic
hayato-m126 Mar 19, 2024
cde16b1
fix: unit test and calculation logic
hayato-m126 Mar 19, 2024
3573077
docs: update document
hayato-m126 Mar 19, 2024
77dc2a9
fix: unit test
hayato-m126 Mar 19, 2024
b94dd3b
feat: support PassRange dict
hayato-m126 Mar 21, 2024
1e4f1f4
docs: update document
hayato-m126 Mar 21, 2024
a4801e6
fix: lint
hayato-m126 Mar 21, 2024
5 changes: 0 additions & 5 deletions dependency.repos
@@ -21,11 +21,6 @@ repositories:
type: git
url: https://github.com/tier4/tier4_autoware_msgs.git
version: tier4/universe
# launcher
launcher/autoware_launch:
type: git
url: https://github.com/autowarefoundation/autoware_launch.git
version: main
# simulator
simulator/perception_eval:
type: git
188 changes: 109 additions & 79 deletions docs/use_case/annotationless_perception.en.md
@@ -3,7 +3,7 @@
Evaluate Autoware's recognition features (perception) without annotations using the perception_online_evaluator.

Requires Autoware with the following PR features.
<https://github.com/autowarefoundation/autoware.universe/pull/6493>
<https://github.com/autowarefoundation/autoware.universe/pull/6556>

## Evaluation method

@@ -18,7 +18,12 @@ Launching the file executes the following steps:

## Evaluation results

The results are calculated for each subscription. The format and available states are described below.
The output topic of perception_online_evaluator has the form shown in the following sample.
[topic sample](https://github.com/tier4/driving_log_replayer/blob/main/sample/annotationless_perception/diag_topic.txt)

For each subscription, the following judgment results are output for each recognition class.

If all classes are normal, the test is successful.

### Deviation Normal

@@ -32,6 +37,8 @@ The following two values specified in the scenario or launch argument are used t
Add up the min, max, and mean values for each status.name in `/diagnostic/perception_online_evaluator/metrics` and calculate their averages.
If `threshold * lower_limit` <= `calculated_average` <= `threshold * upper_limit`, the status is judged as normal.

Items for which no threshold is set (min, max, mean) are always judged as normal. Only those items for which a threshold is specified are subject to evaluation.

An illustration is shown below.

![metrics](./images/annotationless_metrics.drawio.svg)
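The judgment described above can be sketched as follows. This is an illustrative reproduction of the documented rule only, not the actual driving_log_replayer code; the function name and arguments are assumptions.

```python
# Illustrative sketch of the "Deviation Normal" judgment described above.
# Not the actual driving_log_replayer implementation; names are assumptions.

def is_deviation_normal(received_values, threshold, lower=0.5, upper=1.05):
    """Average all values received so far and compare with the scaled threshold."""
    calculated_average = sum(received_values) / len(received_values)
    return threshold * lower <= calculated_average <= threshold * upper
```

For example, with a threshold of 10.0 and a PassRange of 0.5-1.05, an average of 10.0 falls inside [5.0, 10.5] and is judged normal, while an average of 21.0 is not.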
@@ -63,23 +70,37 @@ The conditions can be given in two ways
```yaml
Evaluation:
UseCaseName: annotationless_perception
UseCaseFormatVersion: 0.1.0
UseCaseFormatVersion: 0.2.0
Conditions:
# Threshold: {} # If Metrics are specified from result.jsonl of a previous test, the value here will be overwritten. If it is a dictionary type, it can be empty.
Threshold:
lateral_deviation: { min: 10.0, max: 10.0, mean: 10.0 }
yaw_deviation: { min: 10.0, max: 10.0, mean: 10.0 }
predicted_path_deviation_5.00: { min: 10.0, max: 10.0, mean: 10.0 }
predicted_path_deviation_3.00: { min: 10.0, max: 10.0, mean: 10.0 }
predicted_path_deviation_2.00: { min: 10.0, max: 10.0, mean: 10.0 }
predicted_path_deviation_1.00: { min: 10.0, max: 10.0, mean: 10.0 }
PassRange: 0.5-1.05 # lower[<=1.0]-upper[>=1.0] # The test will pass under the following `condition threshold * lower <= Σ deviation / len(deviation) <= threshold * upper`
ClassConditions:
# Describe the conditions for each class. If a class with no conditions is output, only the metrics are calculated. It does not affect the evaluation.
# In the sample data, the TRUCK class is also output, but no condition is described for it, so TRUCK is always Success.
# When specifying conditions from result.jsonl, only keys described here will be updated.
# Even though TRUCK metrics appear in result.jsonl, they are not added to the evaluation condition because the TRUCK key is not specified in this example.
CAR: # classification key
Threshold:
# Keys not described will not be evaluated (will always be a success)
lateral_deviation: { min: 10.0, max: 10.0, mean: 10.0 }
yaw_deviation: { min: 10.0, max: 10.0, mean: 10.0 }
predicted_path_deviation_5.00: { min: 10.0, max: 10.0, mean: 10.0 }
predicted_path_deviation_3.00: { min: 10.0, max: 10.0, mean: 10.0 }
predicted_path_deviation_2.00: { min: 10.0, max: 10.0, mean: 10.0 }
predicted_path_deviation_1.00: { min: 10.0, max: 10.0, mean: 10.0 }
PassRange: 0.5-1.05 # lower[<=1.0]-upper[>=1.0] # The test will pass under the following condition: `threshold * lower <= Σ deviation / len(deviation) <= threshold * upper`
BUS: # classification key
Threshold:
# Only lateral_deviation is evaluated.
lateral_deviation: { max: 10.0 } # Only max is evaluated.
PassRange: 0.5-1.05 # lower[<=1.0]-upper[>=1.0] # The test will pass under the following condition: `threshold * lower <= Σ deviation / len(deviation) <= threshold * upper`
```
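A minimal sketch of how these per-class conditions might be applied, assuming simplified data structures (the real evaluator's internals may differ): classes without a condition always pass, and only the min/max/mean keys that carry a threshold are checked.

```python
# Hedged sketch of per-class evaluation; the data structures here are
# assumptions for illustration, not the actual driving_log_replayer types.

def evaluate_classes(metrics_by_class, class_conditions, lower=0.5, upper=1.05):
    results = {}
    for cls, metrics in metrics_by_class.items():
        condition = class_conditions.get(cls)
        if condition is None:
            # No condition for this class: metrics are calculated but it is always Success.
            results[cls] = "Success"
            continue
        ok = True
        for item, thresholds in condition.items():     # e.g. "lateral_deviation"
            for key, threshold in thresholds.items():  # only keys with a threshold
                value = metrics[item][key]
                ok = ok and threshold * lower <= value <= threshold * upper
        results[cls] = "Success" if ok else "Fail"
    return results
```

With the sample scenario above, a TRUCK entry in the metrics would always come out as Success because no TRUCK condition is described.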

#### Specify by launch argument

This method is expected to be the main way of specifying conditions.
If the file path of result.jsonl output from a past test is specified, the metrics values from past tests can be used as threshold values.

If the file path of result.jsonl output from past tests is specified, the metrics values from past tests are used as threshold values.
The values are updated from result.jsonl only for the thresholds listed in the scenario.

The passing range can also be specified as an argument.
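The update rule can be sketched as follows — an assumed reading of result.jsonl (the file layout used here, `"Frame"` → class → `"Metrics"`, is inferred from this document, not from the tool itself): the metrics of the final line become the new thresholds, but only for keys already present in the scenario.

```python
import json

# Hedged sketch: derive thresholds from a past test's metrics. The result.jsonl
# layout ("Frame" -> class -> "Metrics") is an assumption for illustration.

def thresholds_from_result(result_jsonl_lines, scenario_thresholds):
    final_metrics = json.loads(result_jsonl_lines[-1])["Frame"]
    updated = {}
    for cls, condition in scenario_thresholds.items():  # only scenario keys update
        updated[cls] = {
            item: {k: final_metrics[cls]["Metrics"][item][k] for k in stats}
            for item, stats in condition.items()
        }
    return updated
```

Classes present in result.jsonl but absent from the scenario (TRUCK in the sample above) are left out, matching the behaviour the document describes.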

An image of its use is shown below.
@@ -89,13 +110,16 @@ An image of its use is shown below.
##### driving-log-replayer-cli

```shell
dlr simulation run -p annotationless_perception -l "annotationless_thresold_file:=${previous_test_result.jsonl_path},annotationless_pass_range:=${lower-upper}
dlr simulation run -p annotationless_perception -l 'annotationless_threshold_file:=${previous_test_result.jsonl_path},annotationless_pass_range:={"KEY1":"VALUE1"[,"KEY2":"VALUE2"...]}'

# example
dlr simulation run -p annotationless_perception -l 'annotationless_threshold_file:=$HOME/out/annotationless/2024-0314-155106/sample/result.jsonl,annotationless_pass_range:={"CAR":"0.2-1.2","BUS":"0.3-1.3"}'
```
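The dict-type pass_range argument shown above could be parsed roughly like this — a sketch under the assumption that each value is a `"lower-upper"` string; the real CLI may parse it differently.

```python
import json

# Hedged sketch: parse a dict-type annotationless_pass_range argument such as
# '{"CAR":"0.2-1.2","BUS":"0.3-1.3"}' into per-class (lower, upper) floats.

def parse_pass_range(arg):
    ranges = {}
    for cls, span in json.loads(arg).items():
        lower, upper = (float(v) for v in span.split("-"))
        ranges[cls] = (lower, upper)
    return ranges
```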

##### WebAutoCLI

```shell
webauto ci scenario run --project-id ${project-id} --scenario-id ${scenario-id} --scenario-version-id ${scenario-version-id} --simulator-parameter-overrides annotationless_thresold_file=${previous_test_result.jsonl_path},annotaionless_pass_rate=${lower-upper}
webauto ci scenario run --project-id ${project-id} --scenario-id ${scenario-id} --scenario-version-id ${scenario-version-id} --simulator-parameter-overrides 'annotationless_threshold_file=${previous_test_result.jsonl_path},annotationless_pass_range={"KEY1":"VALUE1"[,"KEY2":"VALUE2"...]}'
```

##### Autoware Evaluator
@@ -114,7 +138,9 @@ simulations:
type: simulator/standard1/amd64/medium
parameters:
annotationless_threshold_file: ${previous_test_result.jsonl_path}
annotationless_pass_range: ${upper-lower}
annotationless_pass_range:
KEY1: VALUE1
KEY2: VALUE2
```

## Arguments passed to logging_simulator.launch
@@ -220,70 +246,74 @@ The format of each frame and the metrics format are shown below.

```json
{
"Deviation": {
"Result": { "Total": "Success or Fail", "Frame": "Success or Fail" }, // The results for Total and Frame are the same. The same values are output to make the data structure the same as other evaluations.
"Info": {
"lateral_deviation": {
"min": "Minimum distance",
"max": "Maximum distance",
"mean": "Mean distance"
},
"yaw_deviation": {
"min": "Minimum Angle Difference",
"max": "Maximum Angle Difference",
"mean": "Mean Angle Difference"
},
"predicted_path_deviation_5.00": {
"min": "Minimum distance",
"max": "Maximum distance",
"mean": "Mean distance"
},
"predicted_path_deviation_3.00": {
"min": "Minimum distance",
"max": "Maximum distance",
"mean": "Mean distance"
},
"predicted_path_deviation_2.00": {
"min": "Minimum distance",
"max": "Maximum distance",
"mean": "Mean distance"
},
"predicted_path_deviation_1.00": {
"min": "Minimum distance",
"max": "Maximum distance",
"mean": "Mean distance"
}
},
"Metrics": {
"lateral_deviation": {
"min": "Average Minimum distance",
"max": "Average Maximum distance",
"mean": "Average Mean distance"
},
"yaw_deviation": {
"min": "Average Minimum Angle Difference",
"max": "Average Maximum Angle Difference",
"mean": "Average Mean Angle Difference"
},
"predicted_path_deviation_5.00": {
"min": "Average Minimum distance",
"max": "Average Maximum distance",
"mean": "Average Mean distance"
},
"predicted_path_deviation_3.00": {
"min": "Average Minimum distance",
"max": "Average Maximum distance",
"mean": "Average Mean distance"
},
"predicted_path_deviation_2.00": {
"min": "Average Minimum distance",
"max": "Average Maximum distance",
"mean": "Average Mean distance"
"Frame": {
"Ego": {},
"OBJECT_CLASSIFICATION": {
// Recognized class
"Result": { "Total": "Success or Fail", "Frame": "Success or Fail" }, // The results for Total and Frame are the same. The same values are output to make the data structure the same as other evaluations.
"Info": {
"lateral_deviation": {
"min": "Minimum distance",
"max": "Maximum distance",
"mean": "Mean distance"
},
"yaw_deviation": {
"min": "Minimum Angle Difference",
"max": "Maximum Angle Difference",
"mean": "Mean Angle Difference"
},
"predicted_path_deviation_5.00": {
"min": "Minimum distance",
"max": "Maximum distance",
"mean": "Mean distance"
},
"predicted_path_deviation_3.00": {
"min": "Minimum distance",
"max": "Maximum distance",
"mean": "Mean distance"
},
"predicted_path_deviation_2.00": {
"min": "Minimum distance",
"max": "Maximum distance",
"mean": "Mean distance"
},
"predicted_path_deviation_1.00": {
"min": "Minimum distance",
"max": "Maximum distance",
"mean": "Mean distance"
}
},
"predicted_path_deviation_1.00": {
"min": "Average Minimum distance",
"max": "Average Maximum distance",
"mean": "Average Mean distance"
"Metrics": {
"lateral_deviation": {
"min": "Average Minimum distance",
"max": "Average Maximum distance",
"mean": "Average Mean distance"
},
"yaw_deviation": {
"min": "Average Minimum Angle Difference",
"max": "Average Maximum Angle Difference",
"mean": "Average Mean Angle Difference"
},
"predicted_path_deviation_5.00": {
"min": "Average Minimum distance",
"max": "Average Maximum distance",
"mean": "Average Mean distance"
},
"predicted_path_deviation_3.00": {
"min": "Average Minimum distance",
"max": "Average Maximum distance",
"mean": "Average Mean distance"
},
"predicted_path_deviation_2.00": {
"min": "Average Minimum distance",
"max": "Average Maximum distance",
"mean": "Average Mean distance"
},
"predicted_path_deviation_1.00": {
"min": "Average Minimum distance",
"max": "Average Maximum distance",
"mean": "Average Mean distance"
}
}
}
}