
Persona Initiative - Long Live the LLM #297

Closed
19 of 45 tasks
JarbasAl opened this issue Mar 21, 2023 · 7 comments
Assignees
Labels
enhancement New feature or request

Comments

@JarbasAl
Member

JarbasAl commented Mar 21, 2023

Give OpenVoiceOS some sass with Persona!

Phrases not explicitly handled by other skills will be run by Persona, so nearly every interaction will have some response. But be warned, OpenVoiceOS might become a bit obnoxious...

Large language models are having their Stable Diffusion moment. The next few months will be filled with new useful, and potentially controversial applications, pushing incumbents and startups to innovate once again.

The last few weeks have been filled with exciting announcements of open-source LLMs:

  • Meta released LLaMA - an LLM collection of models ranging from 7B to 65B parameters.
  • Stanford researchers released Alpaca, an instruction fine-tuned model based on Meta’s 7B LLaMA behaving similarly to OpenAI’s powerful GPT Davinci model.
    Instruction fine-tuning is powerful for language models because it allows for targeted training on a specific task or domain, resulting in improved performance on that task and enabling transfer learning to other tasks.
  • Together released OpenChatKit - a 20B-parameter ChatGPT-like model under the Apache-2.0 license, meaning companies can incorporate it in their commercial products. They also released the underlying 43M-instruction dataset so others can further fine-tune their LLMs.

Until recently, powerful LLMs were only accessible through APIs: OpenAI's, and now PaLM's. The open-source community has now shown that even the smallest LLaMA (7B) can achieve GPT Davinci-like performance.

OpenVoiceOS is already adopting these technologies; the current stretch goal of our fundraiser aims to bring the Persona project to its first stable release:

https://www.gofundme.com/f/openvoiceos

Persona

Core personality

in mycroft.conf

"persona": {
    "gender": "male",
    "attitudes": {
        "normal": 100,
        "funny": 70,
        "sarcastic": 10,
        "irritable": 0
    },
    "solvers": [
        "ovos-solver-plugin-llamacpp",
        "ovos-solver-plugin-personagpt",
        "ovos-solver-failure-plugin"
    ],
    "ovos-solver-plugin-llamacpp": {
        "persona": "helpful, creative, clever, and very friendly"
    },
    "ovos-solver-plugin-personagpt": {
        "facts": [
            "i am a quiet engineer.",
            "i'm single and am looking for love.",
            "sadly, i don't have any relatable hobbies.",
            "luckily, however, i am tall and athletic.",
            "on friday nights, i watch re-runs of the simpsons alone."
        ]
    }
}
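
The attitude weights above lend themselves to weighted random sampling. A minimal sketch of how a persona service might pick one (illustrative only; `pick_attitude` is not an actual OVOS API):

```python
import random

def pick_attitude(attitudes: dict) -> str:
    """Pick one attitude name with probability proportional to its weight.
    A weight of 0 (e.g. "irritable" above) is never selected."""
    names = list(attitudes)
    weights = [attitudes[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

attitudes = {"normal": 100, "funny": 70, "sarcastic": 10, "irritable": 0}
print(pick_attitude(attitudes))  # e.g. "normal"
```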

Skills personality

New file format, .jsonl

jsonl format info: https://jsonlines.org/

{"utterance": "stick the head out of the window and check it yourself", "attitude": "mean", "weight": 0.1}
{"utterance": "current weather is X", "attitude": "helpful", "weight": 0.9}
  1. Load the .jsonl file if it exists, else fall back to the old .dialog file.
  2. Select an attitude based on the weights defined in mycroft.conf / the currently active persona.
  3. Filter samples by the selected attitude.
  4. Select an utterance based on the weights in the .jsonl file.
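
The four steps above might be sketched like so (a hypothetical helper; `choose_dialog` and the file layout are assumptions, not the actual OVOS implementation):

```python
import json
import os
import random

def choose_dialog(base_path: str, attitude_weights: dict) -> str:
    """Pick an utterance following the four steps above."""
    jsonl_path = base_path + ".jsonl"
    if not os.path.isfile(jsonl_path):
        # step 1 fallback: old .dialog format, one utterance per line
        with open(base_path + ".dialog") as f:
            return random.choice([ln.strip() for ln in f if ln.strip()])
    with open(jsonl_path) as f:  # step 1: load the .jsonl file
        samples = [json.loads(ln) for ln in f if ln.strip()]
    # step 2: select an attitude based on the persona's configured weights
    names = list(attitude_weights)
    attitude = random.choices(
        names, weights=[attitude_weights[n] for n in names], k=1)[0]
    # step 3: filter samples by the selected attitude
    candidates = [s for s in samples if s.get("attitude") == attitude]
    if not candidates:
        candidates = samples
    # step 4: select based on the weights stored in the .jsonl file
    return random.choices(
        [s["utterance"] for s in candidates],
        weights=[s.get("weight", 1.0) for s in candidates], k=1)[0]
```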

"Solver" plugins

These plugins automatically handle language support and auto-translation, and they provide the base for a "spoken answers" API.

Each persona loads several of these plugins sorted by priority, similarly to the fallback skills mechanism; this allows internet sources to be checked in order of reliability/functionality.

Create your own chatbot persona by choosing which plugins you install.
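
The priority chain works much like fallback skills: try each solver in order and take the first usable answer. A toy sketch (the `DummySolver` class and `ask` helper are illustrative, not the real plugin base class):

```python
class DummySolver:
    """Stand-in for a solver plugin; returns an answer string or None."""
    def __init__(self, answers):
        self.answers = answers

    def spoken_answer(self, query, lang="en-us"):
        return self.answers.get(query)

def ask(solvers, query, lang="en-us"):
    """Query solvers in priority order; the first non-empty answer wins,
    mirroring the fallback skills mechanism."""
    for solver in solvers:
        answer = solver.spoken_answer(query, lang=lang)
        if answer:
            return answer
    return None  # in practice, ovos-solver-failure-plugin would speak an error

chain = [DummySolver({}),  # highest-priority source, no answer here
         DummySolver({"what is the answer": "42"})]
print(ask(chain, "what is the answer"))  # -> 42
```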

Self hosted

  • create a solvers/persona server Docker endpoint like we have for other plugin classes
  • integrate persona with ovos-personal-backend
    • new "spoken answers" API endpoint (default plugins: wikipedia + ddg + failure)
    • deprecate the wolfram_alpha integration; make the endpoint call the "spoken answers" API for Selene compat
    • one less external API key needed by default
    • integrate with ovos-backend-client: add chatbot endpoint ovos-backend-client#28

"Chatbot" Skills

Inspiration

MycroftAI wanted to start an initiative called ‘Persona’ - a tool to help build distinct personalities for Mycroft. Think Sassy Mycroft, Polite Mycroft and so on.

The technology just wasn't there yet, and the backend implementation was never finished or made public, but the beta skill is still available (non-functional): https://github.com/MycroftAI/skill-fallback-persona

@JarbasAl JarbasAl added the enhancement New feature or request label Mar 21, 2023
@JarbasAl JarbasAl self-assigned this Mar 21, 2023
@JarbasAl JarbasAl mentioned this issue Mar 22, 2023
36 tasks
@JarbasAl
Member Author

persona support added to llamacpp - TigreGotico/ovos-solver-plugin-llamacpp@40aca66

@JarbasAl
Member Author

persona service repo is up: https://github.com/OpenVoiceOS/ovos-persona

@JarbasAl
Member Author

JarbasAl commented Apr 3, 2023

Have initial code for a memory module for the solver plugins; it keeps context and prepares prompts to feed to an LLM. First attempt at integration with online services too:

    c = ChatHistory("this assistant is {persona} and is called {name}", persona="evil", name="mycroft")
    c.user_says("hello!")
    c.llm_says("hello user")
    c.user_says("what is your name?")
    c.llm_says("my name is {name}")
    print(c.prompt)
    # this assistant is evil and is called mycroft
    # User: hello!
    # AI: hello user
    # User: what is your name?
    # AI: my name is mycroft

    def get_wolfram(query):
        return {"wolfram_answer": "42"}  # return text or a dict, dict also populates self.variables

    c = InstructionHistory("you are an AGI, follow the prompts")
    c.instruct("what is the meaning of life", get_wolfram)
    print(c.variables) # {'wolfram_answer': '42'}
    c.llm_says("the answer is {wolfram_answer}")
    print(c.prompt)
    # you are an AGI, follow the prompts
    #
    # ## INSTRUCTION
    #
    # what is the meaning of life
    #
    # ## DATA
    #
    # wolfram_answer: 42
    #
    # ## RESPONSE
    #
    # the answer is 42
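
The printed prompts above can be reproduced with a very small class; the following is a sketch reimplemented from the example output, not the actual memory module:

```python
class ChatHistory:
    """Minimal sketch: store turns and render them under a persona header.
    Placeholders like {name} are filled from the keyword arguments."""

    def __init__(self, initial_prompt, **variables):
        self.variables = dict(variables)
        self.header = initial_prompt.format(**self.variables)
        self.turns = []

    def user_says(self, utterance):
        self.turns.append(("User", utterance))

    def llm_says(self, utterance):
        # LLM replies may reference known variables, e.g. "my name is {name}"
        self.turns.append(("AI", utterance.format(**self.variables)))

    @property
    def prompt(self):
        return "\n".join([self.header] +
                         [f"{who}: {text}" for who, text in self.turns])
```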

@simcop2387

I was going to be trying to do a bit of this myself over my vacation this week then finally found your work here. From what I see reading here, the workflow is something like:

  1. Set up OpenVoiceOS (working on this on my two Mark 2 devices right now)
  2. Get the persona plugin/skill/etc. set up on the personal backend that I'm making for the previous step
  3. Set up a NeonAI solver that has the plugins I want (largely geared towards LLM for what I was planning to do) and point the persona plugin at that somehow.
  4. ???
  5. Profit?

I also understand that this isn't finished, isn't very documented, etc., but what happens then is that the persona bit you've made talks to the message bus for the system and acts as a complete fallback when nothing else can respond, and then it goes out to the NeonAI solver through the fallback or via the direct skill (in theory). Right now it looks like the selection of which method is used is based on the fallback skill. This will basically result in the device/system/etc. responding with a single result from the LLM and whatever context/personality has been configured to be passed over to it?

Main things I see that aren't implemented yet are memory or chatbot style communication, i.e. where you could continually argue with the same LLM with a hope of it keeping track of the conversation.

My original thought was to try building something on top of langchain to create a crude fallback skill for this, since it had some more stuff pre-created for all of this, but looking at your work I think it'd end up being able to do more than I was thinking I'd accomplish directly myself anyway. I'll give all this a shot to set up and see if I can find anything that could use some help in your work, and at the very least try to help with some initial documentation for it.

@mikejgray

Love the work so far! I don't see a roadmap item/checkbox for -server versions of the plugins. Any chance those could be added? :)

@JarbasAl
Member Author

Love the work so far! I don't see a roadmap item/checkbox for -server versions of the plugins. Any chance those could be added? :)

it is there for the solver plugins

the ovos-persona repo will probably have a Dockerfile too for standalone usage, but mainly it will be part of core as a new service in the intent pipeline; you can see it is already listed here: #293

@JarbasAl
Member Author

tracking progress here OpenVoiceOS/ovos-persona#4
