As part of Milestone 5 we implemented an Affordance Learning module that attempts to ground affordances in observable features of the given scene. During this process we discovered that some affordances may not correspond to well-represented visual features and may instead be experimental or learned. To account for this, we've proposed a second way of storing and retrieving affordances, where the learner stores such affordances in a mapping to a named concept.
Technical Implementation
This new affordance learning will take the form of a map from concept tokens (strings) to affordances. Additionally, a method will be added to query the affordance learner for the affordances learned for a given object. Both observable and experimental affordances will be stored in this mapping.
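A minimal sketch of what this mapping and query method might look like; the class and method names (`AffordanceLearner`, `record`, `affordances_for`) are illustrative assumptions, not the project's actual API:

```python
from collections import defaultdict


class AffordanceLearner:
    """Sketch: stores affordances keyed by a named concept token."""

    def __init__(self) -> None:
        # Concept token (string) -> set of affordance labels.
        # Both observable and experimental affordances live here.
        self._concept_affordances: dict[str, set[str]] = defaultdict(set)

    def record(self, concept: str, affordance: str) -> None:
        """Store an affordance learned for a named concept."""
        self._concept_affordances[concept].add(affordance)

    def affordances_for(self, concept: str) -> set[str]:
        """Query the affordances learned for a given object concept."""
        # Return a copy so callers can't mutate internal state.
        return set(self._concept_affordances.get(concept, set()))
```

For example, after `learner.record("apple", "can be eaten")`, a call to `learner.affordances_for("apple")` would include `"can be eaten"`, while an unseen concept returns an empty set.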
Out of Scope
One could see a future version of this system linking with the semantic learner, enabling a consolidation of affordances onto a higher-level descriptor node, where 'this semantic node's affordances are true for all* of its child nodes'.
Additionally out of scope, but plausibly interesting, is using learned affordances to ask whether they apply to other object concepts without having seen the situation, e.g. querying a domain expert (or perhaps even just a Google search) to see whether an affordance applies to a given concept.
Didn't see this above, so reminder to ourselves: We also want a way to query for object concepts that have a given affordance, e.g. "can be eaten". We need this for the backfilling experiment (writeup TBD 🙃).
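The reverse query above could be sketched as a lookup over the same concept-to-affordances mapping; the function name `concepts_with` and the standalone-dict signature are assumptions for illustration:

```python
def concepts_with(mapping: dict[str, set[str]], affordance: str) -> set[str]:
    """Return all object concepts whose stored affordances include `affordance`.

    `mapping` is assumed to be the concept-token -> affordances map
    proposed in this issue.
    """
    return {concept for concept, affs in mapping.items() if affordance in affs}
```

So `concepts_with(mapping, "can be eaten")` would return every concept token recorded as edible, which is the query shape the backfilling experiment needs.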