This repository has been archived by the owner on Sep 8, 2024. It is now read-only.

Feat/message targetting #2461

Merged
merged 11 commits into from
Feb 15, 2020

Conversation

@JarbasAl (Contributor) commented Jan 22, 2020

I could swear I made this PR before, but could not find it, so here it goes again.

This is needed for the hive mind.

Description

Speech handling now checks the message context to determine whether it is the intended target of the message, and will only speak under the following conditions:

  • Explicitly targeted i.e. the destination is "audio"
  • destination is set to None
  • destination is missing completely

The idea is that, for example, when the Android app is used to access Mycroft, the device at home shouldn't start to speak.

Targeting Theory

The destination parameter in the original message's context can be set to a list with any number of intended targets:

bus.emit(Message('recognizer_loop:utterance', data,
                 context={'destination': ['audio', 'kde'],
                          'source': 'remote_service'}))

A missing destination, or a destination set to None, is interpreted as a multicast and should trigger all output-capable processes (be it the mycroft-audio process, a web interface, the KDE plasmoid, or maybe the Android app).
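The rule above can be sketched as a small predicate. This is an illustrative sketch only; should_handle and my_name are hypothetical names, not the actual mycroft-core implementation:

```python
# Illustrative sketch of the multicast rule described above; should_handle
# and my_name are hypothetical names, not the mycroft-core code.
def should_handle(context, my_name='audio'):
    """True if the output process named my_name should act on the message."""
    destination = (context or {}).get('destination')
    if destination is None:
        # Destination missing or explicitly None: multicast, everyone acts
        return True
    return my_name in destination

assert should_handle(None) is True                        # no context at all
assert should_handle({'destination': None}) is True       # explicit None
assert should_handle({'destination': ['audio', 'kde']}) is True
assert should_handle({'destination': ['kde']}) is False   # not targeted
```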

Messages

  • new message.forward method keeps the previous context
    • the message continues to the same destination
  • message.reply method now swaps "source" with "destination"
    • the message goes back to the source
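A minimal standalone sketch of these semantics, using a simplified stand-in class rather than the actual mycroft.messagebus Message implementation:

```python
# Simplified stand-in for the forward/reply semantics described above;
# not the actual mycroft.messagebus Message class.
class Message:
    def __init__(self, msg_type, data=None, context=None):
        self.msg_type = msg_type
        self.data = data or {}
        self.context = context or {}

    def forward(self, msg_type, data=None):
        # forward keeps the previous context: same destination, same source
        return Message(msg_type, data, dict(self.context))

    def reply(self, msg_type, data=None):
        # reply swaps "source" and "destination": the message goes back
        ctx = dict(self.context)
        ctx['source'], ctx['destination'] = (ctx.get('destination'),
                                             ctx.get('source'))
        return Message(msg_type, data, ctx)

utt = Message('recognizer_loop:utterance',
              context={'source': 'audio', 'destination': ['skills']})
assert utt.reply('speak').context['destination'] == 'audio'
assert utt.forward('intent').context == utt.context
```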

Inside mycroft-core

  • STT / CLI will use "audio" or "debug_cli" as the source
  • the intent service will .reply to the utterance message
  • all skill / intent service messages are .forward (from the previous intent service .reply)
  • the CLI is for debugging and will print all utterances
  • audio will check whether "audio" or "debug_cli" is the destination

How to test

  • STT utterances trigger TTS
  • Cli utterances trigger TTS
  • Remote Clients do not trigger TTS
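Under the destination rule from the description, the three scenarios above map onto a simple check. tts_should_speak is a hypothetical name used only for illustration, not the actual handler:

```python
# Illustrative mapping of the test scenarios onto the destination rule
# from the PR description; tts_should_speak is a hypothetical name.
def tts_should_speak(context):
    destination = (context or {}).get('destination')
    if not destination:          # missing, None, or empty: multicast
        return True
    return 'audio' in destination or 'debug_cli' in destination

# STT utterances: reply() routes the answer back to source "audio"
assert tts_should_speak({'destination': ['audio']}) is True
# CLI utterances: source "debug_cli" becomes the destination of the reply
assert tts_should_speak({'destination': ['debug_cli']}) is True
# Remote clients keep their own destination, so the device stays silent
assert tts_should_speak({'destination': ['remote_service']}) is False
```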

@JarbasAl JarbasAl added the Type: Enhancement - proposed New proposal for a feature that is not currently a priority on the roadmap. label Jan 22, 2020
@pep8speaks commented Jan 22, 2020

Hello @JarbasAl! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2020-02-15 15:34:49 UTC

@devs-mycroft devs-mycroft added the CLA: Yes Contributor License Agreement exists (see https://github.com/MycroftAI/contributors) label Jan 22, 2020
@JarbasAl (Contributor, Author)

deprecates #2118

@JarbasAl (Contributor, Author)

> Hello @JarbasAl! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:
>
> There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻
>
> Comment last updated at 2020-01-22 19:02:49 UTC

You are wrong, pep8 bot. Travis says:

mycroft/skills/fallback_skill.py:65:36: E128 continuation line under-indented for visual indent
mycroft/skills/fallback_skill.py:89:44: E128 continuation line under-indented for visual indent

@forslund (Collaborator)

Yay this I can review and include :)

@forslund (Collaborator) left a comment

The pep-8 things need to be fixed. I do think the CLI by default should show all interactions, since it's used for debugging. Maybe add an option to disable that? The CLI should also produce audio by default.

Will look into the details later this week/weekend but it looks like the old PR but a bit more advanced. 👍

Review threads on mycroft/skills/fallback_skill.py (outdated, resolved)
@JarbasAl (Contributor, Author) commented Jan 22, 2020

> I do think the CLI by default should show all interactions since it's used for debugging. Maybe an option to disable that? CLI should also produce audio by default.

Since this only makes sense (in my opinion) when debugging, why not simply check for the "debug" flag already present in the .conf?

If I'm writing a query, I'm also reading the response; if I'm speaking, I'm not reading... unless we are debugging, like you said.

Another option would be to identify the source in the CLI interface itself, something like:

[cli] some spoken message
[speech] some spoken message

@forslund forslund mentioned this pull request Jan 24, 2020
@forslund (Collaborator)

Tested, and it seems to be working as intended. The code looks excellent from what I've seen (apart from the pep-8 issues).

I want the CLI to view the utterances from the speech client, and utterances entered on the CLI to be output as audio. The CLI is first and foremost intended as a debug tool, and removing this makes it a lot less useful.

It looks to me like the changes to the Message.reply() method should be backwards compatible, so we don't need to hold off waiting for the 20.02 release. (Although this Thursday's release may be the last of 19.08.)



def handle_no_internet():
    LOG.debug("Notifying enclosure of no internet connection")
    bus.emit(Message('enclosure.notify.no_internet'))

context = {'client_name': 'mycroft_listener',
           'source': 'audio'}
@JarbasAl (Contributor, Author) commented:

Very minor nitpick, and not used in practice, but should I set the destination here to "enclosure"?

@JarbasAl (Contributor, Author) commented Feb 2, 2020

When using self.speak inside a scheduled event, the message.context was lost; I've added the option to pass the message context.

This allows things like the common_XXX skills to forward messages correctly: MycroftAI/skill-query#8, MycroftAI/skill-playback-control#28

@forslund (Collaborator) commented Feb 5, 2020

OK, I have some weirdness on my desktop machine: it seems to work on my laptop, but I get no speech on my desktop. Will dig and see if I have some weird config set.

event.context = event.context or {}
if event.context.get('destination') and \
        ('debug_cli' in event.context['destination'] or
         'audio' in event.context['destination']):
@forslund (Collaborator) commented:

Should this be:

    if event.context.get('destination') and \
            ('debug_cli' not in event.context['destination'] or
             'audio' not in event.context['destination']):

That change seems to make the PR work for me.
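For what it's worth, the suggested guard (with `not in ... or not in ...`) and the strict "neither local target is present" form differ when exactly one local target is listed. A pure-logic comparison, where dest stands in for event.context['destination'] as a list, independent of what the surrounding handler does with the result:

```python
# Pure-logic comparison of the two guard variants quoted above; dest stands
# in for event.context['destination'] as a list of target names.
def either_absent(dest):
    # the condition as written in the suggestion above
    return bool(dest) and ('debug_cli' not in dest or 'audio' not in dest)

def neither_present(dest):
    # De Morgan negation of the original "explicitly targeted" check
    return bool(dest) and ('debug_cli' not in dest and 'audio' not in dest)

# Both are true for a remote-only destination:
assert either_absent(['remote_service']) is True
assert neither_present(['remote_service']) is True
# They differ when exactly one local target is listed:
assert either_absent(['audio']) is True
assert neither_present(['audio']) is False
```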

@MycroftAI MycroftAI deleted a comment from NeonJarbas Feb 6, 2020
@forslund (Collaborator) commented Feb 9, 2020

This looks good now, I think. Do you think you could rebase it and squash the pep-8 fixes and the logic fixes into the relevant main commits, to clean up the commit history a bit?

@forslund (Collaborator)

Rebased, and ready to merge! Thanks @JarbasAI, awesome work!

@pcwii commented Apr 20, 2020

@JarbasAI @JarbasAl, Does this PR add the ability to know what skill originated the message? I think this would be handy also for filtering / directing intents to / from other devices. I am looking for this functionality in my Mesh-Skill. Thanks for all the hard work!
