Home
Deepy is a free, open-source multiskill AI assistant built with the DeepPavlov Conversational AI Stack. It runs on top of DeepPavlov Agent, which runs as a Docker container. It targets x86_64 machines and works best when NVIDIA GPUs are available. You can use it to study how multiskill AI assistants are built, as a base for research projects, or for commercial systems where running an open-source system on-premises (or in the cloud) is a requirement.
Deepy is based on DeepPavlov Agent, a free, open-source conversational orchestrator that runs in a Docker container. The rest of the AI assistant runs as a collection of Docker containers. These containers include annotators, skills, and skill and response selectors, as well as a number of supporting services used by these containers.
For many companies and individuals, two key questions about any piece of software are the availability of the source code and the license. Deepy is open source: the entire source code is available for anyone to use and modify as they see fit, for academic, personal, or commercial purposes. In particular, companies may use Deepy in whole or in part in their products. Furthermore, it is completely free of charge. Support and consultancy may be available for a fee; companies should contact us at [email protected] for pricing.
Deepy is available under the Apache 2.0 license, which may be attractive to companies since, unlike the GPL, it does not require them to publish the changes they make to the system.
Deepy was originally built in October 2020 as a demo of a simple multiskill AI assistant to be shown in DeepPavlov's talk at the NVIDIA GTC Fall 2020 conference. However, Deepy is based on the DeepPavlov Dream AI Assistant demo, which in turn is an adaptation of the original DREAM socialbot created by DeepPavlov's student team for the Alexa Prize Socialbot Grand Challenge 3 (2019-2020).
Getting Deepy up and running is quite simple, but you might need one or more GPUs depending on the number of GPU-heavy components you want to use in your solution.
- Clone the repository.
- Change directory to it.
- Pick the distribution you want to run from `/assistant_dists`.
- Copy `docker-compose.yml` from it to the root directory of your repository (the system will ask you to confirm overwriting the existing file; confirm it).
- Copy `pipeline_conf.json` from it to the `/agent` directory of your repository (the system will ask you to confirm overwriting the existing file; confirm it).
- Type the command `docker-compose -f docker-compose.yml build` to build your distribution, but don't press ENTER just yet.
- [Optional] If you want to use the ASR & TTS modules, add their compose file to the command chain; in that case the command would look like this: `docker-compose -f docker-compose.yml -f asr_tts.yml build`
- For GPU-intensive services, change lines in your `docker-compose.yml` and, optionally, `asr_tts.yml` to specify the GPUs you want to run them on. A typical GPU-intensive service that uses a BERT model needs ~4 GB of GPU RAM, so plan accordingly.
- Build your system by running the command you've formed above.
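The GPU step above can be sketched in compose syntax. This is a minimal, hypothetical fragment, assuming the NVIDIA container runtime is installed on the host; the service name and device index are made up for illustration and are not part of the actual Deepy compose files:

```yaml
# Hypothetical service entry showing one way to pin a container to a GPU.
# Assumes the NVIDIA container runtime; adapt the real service entries in
# docker-compose.yml (and asr_tts.yml) the same way.
services:
  some-bert-service:
    runtime: nvidia
    environment:
      - CUDA_VISIBLE_DEVICES=0   # expose only the first GPU to this container
```

Setting `CUDA_VISIBLE_DEVICES` per service lets you spread ~4 GB BERT-based services across several GPUs.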
- Once the build is done, run the same command, replacing `build` at the end with `up`.
- Once the agent is up and running, use your favorite tool (e.g., `curl` or Postman) to talk to Deepy via the `http://localhost:4242/` endpoint by providing the following content:

```json
{
  "user_id": "24424252524525",
  "payload": "Hello!"
}
```
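The payload above can be sent from the command line with `curl`; the endpoint and JSON fields are taken directly from this walkthrough, assuming the agent is running locally on the default port:

```shell
# Send a test utterance to the locally running Deepy agent.
curl -X POST http://localhost:4242/ \
  -H "Content-Type: application/json" \
  -d '{"user_id": "24424252524525", "payload": "Hello!"}'
```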
If everything works correctly, you'll get a response like this:

```json
{
  "dialog_id": "773dc35a567072c142b9d6d8bdea00fb",
  "utt_id": "dfd1c751875439c6d98e3bb1410448f7",
  "user_id": "r234242343",
  "response": "Hello, I'm a lunar assistant Deepy! How are you?",
  "active_skill": "program_y",
  "debug_output": [
    {
      "skill_name": "harvesters_maintenance_skill",
      "annotations": {
        "emotion_classification": [
          {
            "anger": 0.46746790409088135,
            "fear": 0.3528013229370117,
            "joy": 0.3129902184009552,
            "love": 0.2804321050643921,
            "sadness": 0.35413244366645813,
            "surprise": 0.19576209783554077,
            "neutral": 0.9979490041732788
          }
        ]
      },
      "text": "I don't have this information.",
      "confidence": 0.5
    },
    {
      "skill_name": "program_y",
      "annotations": {
        "emotion_classification": [
          {
            "anger": 0.38495343923568726,
            "fear": 0.22263416647911072,
            "joy": 0.4415707588195801,
            "love": 0.4192220866680145,
            "sadness": 0.21526440978050232,
            "surprise": 0.19943127036094666,
            "neutral": 0.998478353023529
          }
        ]
      },
      "text": "Hello, I'm a lunar assistant Deepy! How are you?",
      "confidence": 0.98,
      "ssml_tagged_text": "Hello, I'm a lunar assistant Deepy! How are you?"
    }
  ],
  "human_utt_annotations": {
    "sentseg": {
      "punct_sent": "hello!",
      "segments": [
        "hello!"
      ]
    },
    "spelling_preprocessing": "hello!"
  }
}
```
Deepy is, like all software, under development. However, you can pick any of the distributions (as we call them) from the `/assistant_dists` directory to use in your own system. We use one of these configs (currently `/assistant_dists/deepy_ai_adv/`) on our demo web site. Here are some of the features of the current system. Development is ongoing, and we hope you will join the community and help out.
Nearly all the documentation is in this wiki. This is a collection of pages with information on many topics relating to Deepy. When you are starting out, you should consult it often. Once you become more experienced, you can edit it, updating pages or adding new ones, just like Wikipedia.
Our team has made some videos about Deepy. You can watch them on YouTube:
- Deepy 3000 Demo: Build Your Own Moonbase A.I. Assistant with DeepPavlov Dream!
- DeepPavlov Community Call #1
- DeepPavlov Community Call #2
Deepy has appeared on the Web in various places. Here are the references: