
Task: catalog remaining wiki documentation and write it #45

Open
apoch opened this issue Jan 28, 2018 · 4 comments

@apoch
Owner

apoch commented Jan 28, 2018

A number of hint links in the UI point to non-existent wiki documentation. There are also pages on the existing wiki that don't link to anything because the thing they should link to has not yet been written.

Do some writing!

@apoch apoch added the task label Jan 28, 2018
@apoch apoch self-assigned this Jan 28, 2018
@apoch
Owner Author

apoch commented Feb 3, 2018

Pages that need writing:

  • Naming conventions for considerations
  • Creating behaviors
  • Designing inputs
  • Scenarios
  • Guided tour
  • Archetypes
  • Behavior Sets

Plus do a once-over pass to link together pages as appropriate, and make sure everything is edited/proofread.

@progmars

progmars commented Jul 6, 2018

I vote for the Guided tour - it would be awesome to pick a simple scenario and describe how to set up the AI for it, and what can or cannot be done with the tool at this point.

I attempted to implement a scenario where one agent moves from a distance toward a point in space and another agent follows the first one while trying to keep some distance from it.

As I understand it, I need one location (let's call it Home), two agents (let's call one HomeRunner and the other one Spy), and two archetypes, one for each of them. I also created behaviors, inputs, and behavior sets.

And when I ran it, HomeRunner indeed went to Home.

But I was puzzled - hey, how did the agent know where to run? I didn't give it any target, and I didn't add any criteria for picking one either!

Looking at the code for ChooseBehavior, it seems that it collects every target in the scene, creates a new context for every behavior/target pair, runs them all through the score evaluator, and finally picks the top-scoring behavior together with the winning context. So the AI picks not only a behavior to execute but also a target for that behavior (evaluating all inputs, including target properties). It's as if every behavior is duplicated so it can be considered against every target in the world, while behaviors that have no target defined are ignored.
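
To make sure I'm reading it right, here is roughly how I understand that loop, as a standalone C++ sketch (all the names here - Target, Behavior, Context, ScoreContext, and the ChooseBehavior signature - are my own inventions for illustration, not the actual Curvature code):

```cpp
#include <iostream>
#include <limits>
#include <string>
#include <vector>

struct Target {
    std::string name;
    double distance;   // example property an input could read
};

struct Behavior {
    std::string name;
    double weight;     // stand-in for the behavior's consideration curves
};

// Stand-in for evaluating the behavior's considerations against one target;
// in the real tool this would sample every input and combine the curves.
double ScoreContext(const Behavior& b, const Target& t) {
    return b.weight / (1.0 + t.distance);
}

struct Context {
    const Behavior* behavior = nullptr;
    const Target*   target = nullptr;
    double score = -std::numeric_limits<double>::infinity();
};

// Every behavior is paired with every target in the scene, and the single
// highest-scoring (behavior, target) pair wins - so the target is chosen
// together with the behavior.
Context ChooseBehavior(const std::vector<Behavior>& behaviors,
                       const std::vector<Target>& targets) {
    Context winner;
    for (const auto& b : behaviors) {
        for (const auto& t : targets) {
            double s = ScoreContext(b, t);
            if (s > winner.score) {
                winner = { &b, &t, s };
            }
        }
    }
    return winner;
}

int main() {
    std::vector<Behavior> behaviors = { {"GoTo", 1.0}, {"Follow", 0.8} };
    std::vector<Target> targets = { {"Home", 10.0}, {"HomeRunner", 2.0} };
    Context chosen = ChooseBehavior(behaviors, targets);
    std::cout << chosen.behavior->name << " -> " << chosen.target->name << "\n";
}
```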

But when there are no behaviors with targets (if I leave both the "Can Target Self" and "Can Target Others" checkboxes empty), the agent reports "Stalled", even if the behavior is a Custom animation that might be independent of any context.

For example, what if I want my agent to run some animation every time the game clock reaches 12:00? There should be no context needed for that, but I can't create a behavior that doesn't require choosing a context, because ChooseBehavior ignores behaviors without contexts.
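
What I'd like to be able to express, roughly, is a score that depends only on global state (the game clock) with no target at all - something like this made-up helper, which as far as I can tell Curvature doesn't expose:

```cpp
#include <cmath>

// Returns 1.0 within half an hour of noon and 0.0 otherwise, so a
// "PlayNoonAnimation" behavior could score high only around 12:00
// without ever referring to a target.
double NoonAnimationScore(double gameClockHours) {
    return std::fabs(gameClockHours - 12.0) <= 0.5 ? 1.0 : 0.0;
}
```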

Is the current algorithm of selecting the "winning context" how utility AI is supposed to work? And what is the right way to limit the available contexts in a large world, where I don't want to evaluate all considerations for every possible target?
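
To illustrate what I mean by limiting the contexts, I imagine something like a cheap pre-filter before the full scoring pass - just a common pattern, not something I've found in Curvature (the names below are made up):

```cpp
#include <cmath>
#include <vector>

struct Agent  { double x = 0.0, y = 0.0; };
struct Target { double x = 0.0, y = 0.0; };

// Keep only targets within a radius of the agent, so the expensive
// consideration evaluation only runs on a small candidate set.
std::vector<const Target*> FilterCandidates(const Agent& agent,
                                            const std::vector<Target>& all,
                                            double radius) {
    std::vector<const Target*> nearby;
    for (const Target& t : all) {
        double dx = t.x - agent.x;
        double dy = t.y - agent.y;
        if (std::sqrt(dx * dx + dy * dy) <= radius)
            nearby.push_back(&t);
    }
    return nearby;
}
```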

For example, if an AI entity is being attacked by someone, how do I tell the AI to give the highest priority to that attacker as a target? I mean, being attacked by (or asked to interact with) someone is not quite a knowledge base item, since I cannot map a target to a number for an input. And there are no properties on the target agent that could be evaluated to give it a boost when scoring behaviors.
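
The kind of thing I have in mind is an input that reads an "attacked by" entry from the agent's knowledge base and spikes the score for that one target. Again, these names and the KnowledgeBase shape are invented; I don't know how Curvature would model this:

```cpp
#include <optional>

struct Target { int id = 0; };

// Hypothetical knowledge base entry, set by the game when the agent is hit.
struct KnowledgeBase {
    std::optional<int> attackerId;
};

// Returns 1.0 when the candidate target is the recorded attacker and 0.0
// otherwise, so a "Retaliate" behavior could use it as a boosting input.
double IsMyAttackerInput(const KnowledgeBase& kb, const Target& candidate) {
    return (kb.attackerId && *kb.attackerId == candidate.id) ? 1.0 : 0.0;
}
```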

Also, in some GDC videos I've seen ideas about modular considerations that make them reusable, but considerations in Curvature seem to be coupled to behaviors, so I'm not sure how to go about reusing them. And there are other utility AI patterns, like OptIn / OptOut, where it's not clear how to apply them in Curvature (if they are supported at all).

BTW, I would like to port the Curvature core code to C++ and integrate it into my Unreal Engine "experimental sandbox". Curvature itself, even at this stage, could be really useful as a visual AI editor, and I could load the resulting XML file into the Unreal Editor (I'd have to deal with the DataContractSerializer circular references manually, because Unreal's XML reader won't understand them).

@apoch
Owner Author

apoch commented Jul 10, 2018

There are a lot more questions in there than I can address at the moment, but I did write up some thoughts on how behavior selection works - https://github.com/apoch/curvature/wiki/Understanding-Behavior-Selection

Let me know if that clarifies anything!

@progmars

Thank you. Yes, that is a good explanation of the choices and reasoning behind Curvature's target selection implementation. Indeed, it makes sense to evaluate targets, especially when there are many similar targets to choose from.
The only time it might become tricky is when using behaviors that are specific to some game-world-unique target (like GoHome or GoToCityCenter).
