Update data upload script and scuttle instructions (#3511)
npentrel authored Sep 28, 2024
1 parent b1b2a1d commit 46fb383
Showing 2 changed files with 127 additions and 6 deletions.
5 changes: 3 additions & 2 deletions docs/get-started/detect-people.md
@@ -121,7 +121,7 @@ Create the service.

In the resulting ML model service configuration pane, ensure that **Deploy model on machine** is selected for the **Deployment** field.

-Click on **Select model** and select the `ml-models-scuttle:people` model from the registry (the **people** model by **ml-models-scuttle**) to deploy an object detection TFLite model that has been trained to be able to detect a person.
Click on **Select model**, switch to the **Registry** tab, and select the **people** model by **ml-models-scuttle** to deploy an object detection TFLite model that has been trained to detect a person.

For more detailed information, including optional attribute configuration, see the [`tflite_cpu` docs](/services/ml/deploy/tflite_cpu/).
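
If you want to confirm the deployment from code rather than in the app, you can query the service with the Viam Python SDK. This is a minimal sketch, not part of the tutorial's committed instructions; the machine address, the API key placeholders, and the service name `people-model` are assumptions to replace with your own values:

```python
import asyncio

from viam.robot.client import RobotClient
from viam.services.mlmodel import MLModelClient


async def main():
    # Machine address and API key come from your machine's CONNECT tab.
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>",
        api_key_id="<API-KEY-ID>",
    )
    machine = await RobotClient.at_address("<MACHINE-ADDRESS>", opts)

    # "people-model" is a hypothetical name; use your ML model service's name.
    ml = MLModelClient.from_robot(machine, "people-model")

    # Print the input/output tensor metadata of the deployed TFLite model.
    print(await ml.metadata())

    await machine.close()

asyncio.run(main())
```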

@@ -134,11 +134,12 @@ This service applies the ML model to the camera input stream.
Add a `vision` **Service** and select the `ML model` model.

Select the ML model service your person detector model is deployed on (which you created in step 4) from the **ML Model** dropdown.
-Click the **Save** button in the top right corner of the page to save your vision service configuration.

{{% /expand%}}
{{%expand "Step 6: Test person detection" %}}

Click the **Save** button in the top right corner of the page to save your configuration.

Now, test your person detection in the **Test** section of the computer vision service's configuration panel or on the **CONTROL** tab.

You will see your camera stream with detections drawn as labeled boxes on the images, along with label and confidence data.
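
Detections are also available programmatically. Here is a minimal sketch with the Viam Python SDK, assuming the same connection placeholders as above and the hypothetical names `vision-1` for the vision service and `camera-1` for the camera:

```python
import asyncio

from viam.robot.client import RobotClient
from viam.services.vision import VisionClient


async def main():
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>",
        api_key_id="<API-KEY-ID>",
    )
    machine = await RobotClient.at_address("<MACHINE-ADDRESS>", opts)

    # "vision-1" and "camera-1" are hypothetical names; use the names
    # of your vision service and camera.
    detector = VisionClient.from_robot(machine, "vision-1")
    detections = await detector.get_detections_from_camera("camera-1")
    for d in detections:
        # Each detection carries a label and a confidence score.
        print(d.class_name, d.confidence)

    await machine.close()

asyncio.run(main())
```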
128 changes: 124 additions & 4 deletions docs/how-tos/deploy-ml.md
@@ -101,12 +101,132 @@ We have two datasets you can use for testing, one with shapes and the other with
1. [Download the shapes dataset](https://storage.googleapis.com/docs-blog/dataset-shapes.zip) or [download the wooden figure dataset](https://storage.googleapis.com/docs-blog/dataset-figure.zip).
1. Unzip the download.
1. Open a terminal and go to the dataset folder.
-1. In it you will find a Python script to upload the data to the Viam app.
-1. Open the script and fill in the constants at the top of the file.
-1. Run the script to upload the data into a dataset in Viam app:
1. Create a Python script in the dataset's folder with the following contents:

```python {class="line-numbers linkable-line-numbers"}
# Assumption: The dataset was exported using the `viam dataset export` command.
# This script is being run from the `destination` directory.

import asyncio
import os
import json
import argparse

from viam.rpc.dial import DialOptions, Credentials
from viam.app.viam_client import ViamClient
from viam.proto.app.data import BinaryID


async def connect(args) -> ViamClient:
    dial_options = DialOptions(
        credentials=Credentials(
            type="api-key",
            payload=args.api_key,
        ),
        auth_entity=args.api_key_id
    )
    return await ViamClient.create_from_dial_options(dial_options)


async def main():
    parser = argparse.ArgumentParser(
        description='Upload images, metadata, and tags to a new dataset')
    parser.add_argument('-org-id', dest='org_id', action='store',
                        required=True, help='Org Id')
    parser.add_argument('-api-key', dest='api_key', action='store',
                        required=True, help='API KEY with org admin access')
    parser.add_argument('-api-key-id', dest='api_key_id', action='store',
                        required=True, help='API KEY ID with org admin access')
    parser.add_argument('-machine-part-id', dest='machine_part_id',
                        action='store', required=True,
                        help='Machine part id for image metadata')
    parser.add_argument('-location-id', dest='location_id', action='store',
                        required=True, help='Location id for image metadata')
    parser.add_argument('-dataset-name', dest='dataset_name', action='store',
                        required=True,
                        help='Name of the dataset to create and upload to')
    args = parser.parse_args()

    # Make a ViamClient
    viam_client = await connect(args)
    # Instantiate a DataClient to run data client API methods on
    data_client = viam_client.data_client

    # Create dataset
    try:
        dataset_id = await data_client.create_dataset(
            name=args.dataset_name,
            organization_id=args.org_id
        )
        print("Created dataset: " + dataset_id)
    except Exception:
        print("Error. Check that the dataset name does not already exist.")
        print("See: https://app.viam.com/data/datasets")
        return 1

    file_ids = []

    for file_name in os.listdir("metadata/"):
        with open("metadata/" + file_name) as f:
            data = json.load(f)
            tags = None
            if "tags" in data["captureMetadata"].keys():
                tags = data["captureMetadata"]["tags"]

            annotations = None
            if "annotations" in data.keys():
                annotations = data["annotations"]

            image_file = data["fileName"]

            print("Uploading: " + image_file)

            id = await data_client.file_upload_from_path(
                part_id=args.machine_part_id,
                tags=tags,
                filepath=os.path.join("data/", image_file)
            )
            print("FileID: " + id)

            binary_id = BinaryID(
                file_id=id,
                organization_id=args.org_id,
                location_id=args.location_id
            )

            if annotations:
                bboxes = annotations["bboxes"]
                for box in bboxes:
                    await data_client.add_bounding_box_to_image_by_id(
                        binary_id=binary_id,
                        label=box["label"],
                        x_min_normalized=box["xMinNormalized"],
                        y_min_normalized=box["yMinNormalized"],
                        x_max_normalized=box["xMaxNormalized"],
                        y_max_normalized=box["yMaxNormalized"]
                    )

            file_ids.append(binary_id)

    await data_client.add_binary_data_to_dataset_by_ids(
        binary_ids=file_ids,
        dataset_id=dataset_id
    )
    print("Added files to dataset.")
    print("https://app.viam.com/data/datasets?id=" + dataset_id)

    viam_client.close()


if __name__ == '__main__':
    asyncio.run(main())
```
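
   The script relies on the folder layout produced by `viam dataset export`: each JSON file in `metadata/` describes one image in `data/`. For every image it re-uploads the binary, reapplies any tags and bounding box annotations recorded in the metadata, and finally adds all uploaded files to the new dataset in a single call.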

1. Run the script to upload the data into a dataset in the Viam app, providing the following input:

```sh {class="command-line" data-prompt="$" }
-python3 upload_data.py
python upload_data.py -org-id <ORG-ID> -api-key <API-KEY> \
    -api-key-id <API-KEY-ID> -machine-part-id <MACHINE-PART-ID> \
    -location-id <LOCATION-ID> -dataset-name <NAME>
```
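
   If the upload succeeds, the script prints a FileID for each image, then "Added files to dataset." followed by a link of the form `https://app.viam.com/data/datasets?id=<dataset-id>` where you can review the new dataset.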

1. Continue to [Train a machine learning model](#train-a-machine-learning-ml-model).
