Models and data generators aimed at improving users' ability to self-regulate cognitive and emotional states.
View docs at docs.cl4rify.org.
At the highest level, this is to be accomplished via a feedback loop: a user's state is inferred from one or more modalities, and a feedback action is chosen to support self-regulation (e.g. playing a tone when the user appears stressed or distracted). For the time being, this repository focuses on the state-modeling part of that loop and excludes action selection. An intermediate milestone for the project is emotion and cognitive-state recognition from video, audio, and neural-sensing modalities that exceeds the current state of the art.
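To make the loop concrete, here is a minimal, illustrative sketch of the inference-and-feedback cycle. The names (`UserState`, `infer_state`, `choose_feedback`) and the threshold policy are hypothetical assumptions for illustration, not part of this repository's API.

```python
# Illustrative sketch of the closed loop described above; all names and the
# threshold policy are hypothetical, not this repository's API.
import random
from dataclasses import dataclass
from typing import Optional


@dataclass
class UserState:
    stress: float       # 0 (calm) .. 1 (stressed)
    distraction: float  # 0 (focused) .. 1 (distracted)


def infer_state(video_frame, audio_window) -> UserState:
    """Stand-in for a learned multimodal state model (the focus of this repo)."""
    # A real model would embed each modality and fuse the embeddings; random
    # values keep this sketch runnable.
    return UserState(stress=random.random(), distraction=random.random())


def choose_feedback(state: UserState) -> Optional[str]:
    """Toy policy: play a tone when the user looks stressed or distracted."""
    if state.stress > 0.8 or state.distraction > 0.8:
        return "play_tone"
    return None


if __name__ == "__main__":
    for step in range(5):
        state = infer_state(video_frame=None, audio_window=None)
        action = choose_feedback(state)
        print(f"step={step} stress={state.stress:.2f} "
              f"distraction={state.distraction:.2f} action={action}")
```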
Dataset | Modalities | Example problem | Status | Code link |
---|---|---|---|---|
VoxCeleb2 | Video, audio | Learn embeddings that are similar for co-occurring signals (see the sketch below the table) | WIP | here |
FEC | Image | Learn image embeddings from facial expression similarity data | WIP | here |
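As one way of reading the VoxCeleb2 row above, the sketch below learns embeddings that agree for co-occurring signals using a symmetric InfoNCE-style loss over paired video/audio features: same-clip pairs are positives, all other pairs in the batch are negatives. The modules, dimensions, and loss choice are illustrative assumptions, not the repository's implementation.

```python
# Hedged sketch of learning embeddings that agree for co-occurring signals
# (e.g. VoxCeleb2 video/audio pairs) with a symmetric InfoNCE-style loss.
# Projection heads and dimensions are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PairEmbedder(nn.Module):
    def __init__(self, video_dim=512, audio_dim=128, embed_dim=64):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, embed_dim)
        self.audio_proj = nn.Linear(audio_dim, embed_dim)

    def forward(self, video_feats, audio_feats):
        v = F.normalize(self.video_proj(video_feats), dim=-1)
        a = F.normalize(self.audio_proj(audio_feats), dim=-1)
        return v, a


def cooccurrence_loss(v, a, temperature=0.07):
    """Co-occurring (same-index) pairs are positives; all others are negatives."""
    logits = v @ a.t() / temperature            # (batch, batch) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2


model = PairEmbedder()
video = torch.randn(8, 512)   # stand-in for pooled per-clip video features
audio = torch.randn(8, 128)   # stand-in for pooled per-clip audio features
v, a = model(video, audio)
loss = cooccurrence_loss(v, a)
loss.backward()
```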
Model | Description | Status | Code link |
---|---|---|---|
multi_transformer | An experiment in integrating a heterogeneous collection of co-occurring modalities to improve (1) prediction of facial expression labels and (2) prediction of future game performance (see the fusion sketch below the table). | WIP | here |
img2img_adv | Adversarial img2img translation model, for aiding development of adversarial training code. | WIP | here |
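For intuition about the multi_transformer idea, below is a rough sketch of fusing heterogeneous modalities as tokens for a single transformer encoder with two prediction heads. The class, feature dimensions, and pooling choice are illustrative assumptions and do not describe the repository's actual architecture.

```python
# Rough sketch of one-token-per-modality fusion with a shared transformer
# encoder and two heads; names and shapes are illustrative assumptions only.
import torch
import torch.nn as nn


class MultiModalFusion(nn.Module):
    def __init__(self, modality_dims, d_model=128, n_expression_classes=7):
        super().__init__()
        # One linear "tokenizer" per modality maps raw features to d_model.
        self.tokenizers = nn.ModuleList(
            [nn.Linear(dim, d_model) for dim in modality_dims]
        )
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.expression_head = nn.Linear(d_model, n_expression_classes)
        self.performance_head = nn.Linear(d_model, 1)  # future game performance

    def forward(self, modality_feats):
        # Each modality contributes one token; a real model would likely use
        # per-timestep token sequences instead of pooled features.
        tokens = torch.stack(
            [tok(x) for tok, x in zip(self.tokenizers, modality_feats)], dim=1
        )
        fused = self.encoder(tokens).mean(dim=1)  # pool over modality tokens
        return self.expression_head(fused), self.performance_head(fused)


# Hypothetical feature dimensions for e.g. video, audio, and neural signals.
model = MultiModalFusion(modality_dims=[512, 128, 64])
feats = [torch.randn(4, d) for d in (512, 128, 64)]
expression_logits, performance_pred = model(feats)
```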