This project provides a multi-stream, real-time inference pipeline based on cloud-native design patterns, as shown in the following architecture diagram:

Cloud-native technologies can be applied to Artificial Intelligence (AI) to build scalable applications in dynamic environments such as public, private, and hybrid clouds. Doing so, however, requires a cloud-native design that decomposes the monolithic inference pipeline into several microservices:
| Microservice | Role | Description |
| --- | --- | --- |
| Transcoding Gateway | Data Source | Receives multiple input streams and performs transcoding |
| Frame Queue | Data Integration | Assigns each input stream to a specific work queue |
| Infer Engine | Data Analytics | Runs inference on frames and sends the results to the result broker |
| Dashboard | Data Visualization | Renders the results in the client's single-page application |
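To make the division of labor concrete, the sketch below shows one way the Frame Queue and Infer Engine roles could hand frames off through a queue. It is an illustration only, not the project's actual code: the choice of Redis as the queue backend, the queue and channel names, and the `run_model` stub are all assumptions.

```python
# Illustrative sketch only: one possible Frame Queue -> Infer Engine handoff
# over a Redis-backed work queue. Names and the run_model() stub are
# assumptions, not this project's actual API.
import json
import time

import redis


def run_model(frame_bytes: bytes) -> dict:
    """Placeholder for the real model inference call."""
    return {"labels": []}


def enqueue_frame(r: redis.Redis, stream_id: str, frame_bytes: bytes) -> None:
    # Frame Queue role: assign each input stream to its own work queue.
    r.rpush(f"frames:{stream_id}", frame_bytes)


def infer_loop(r: redis.Redis, stream_id: str) -> None:
    # Infer Engine role: pop frames, run inference, and publish results to a
    # broker channel that the Dashboard can subscribe to.
    while True:
        item = r.blpop(f"frames:{stream_id}", timeout=1)
        if item is None:
            continue  # no frame available yet
        _, frame_bytes = item
        start = time.perf_counter()
        result = run_model(frame_bytes)
        result["latency_ms"] = (time.perf_counter() - start) * 1000.0
        r.publish(f"results:{stream_id}", json.dumps(result))


if __name__ == "__main__":
    r = redis.Redis(host="localhost", port=6379)
    enqueue_frame(r, "stream-0", b"<encoded frame bytes>")
    # infer_loop(r, "stream-0")  # runs inside the Infer Engine service
```

In an actual deployment, each of these loops would run in its own container, matching the microservice boundaries in the table above.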
It is extended for the following uses:

- End-to-End Macro Bench Framework for cloud-native pipelines, such as DeathStarBench
- Trusted AI pipeline, to protect the input stream or model in a TEE VM/container
- Sustainable AI computing, to reduce the carbon footprint of AI workloads
The provided build script simplifies the process of building Docker images for our microservices. For instance, to build all Docker images, use the following command:
```bash
./tools/docker_image_manager.sh -a build -r <your-registry>
```
The `-a` argument specifies the action (either `build`, `publish`, `save`, or `all`), and `-r` is the prefix string for your Docker registry. You can find more details on the options and arguments for `docker_image_manager.sh` via `./tools/docker_image_manager.sh -h`.
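For example, the same script can publish previously built images; the registry prefix below (`docker.io/myorg`) is only a placeholder:

```bash
# Publish the previously built images to your registry
# (docker.io/myorg is a placeholder; substitute your own prefix).
./tools/docker_image_manager.sh -a publish -r docker.io/myorg
```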
The Dockerfiles are located under the directories in `container`.
Note: This is pre-release/prototype software and, as such, it may be substantially modified as updated versions are made available. Also, the authors make no assurance that they will ever develop or make generally available a production-ready version.