Update 2024.acl-short.40 in 2024.acl.xml #3762

Merged · 1 commit · Aug 16, 2024
6 changes: 3 additions & 3 deletions data/xml/2024.acl.xml
@@ -10446,13 +10446,13 @@
 </paper>
 <paper id="40">
 <title>Sign Language Translation with Sentence Embedding Supervision</title>
-<author><first>Hamidullah</first><last>Yasser</last></author>
-<author><first>Josef</first><last>Genabith</last><affiliation>German Research Center for AI and Universität des Saarlandes</affiliation></author>
+<author><first>Yasser</first><last>Hamidullah</last></author>
+<author><first>Josef</first><last>van Genabith</last><affiliation>German Research Center for AI and Universität des Saarlandes</affiliation></author>
 <author><first>Cristina</first><last>España-Bonet</last><affiliation>German Research Center for AI</affiliation></author>
 <pages>425-434</pages>
 <abstract>State-of-the-art sign language translation (SLT) systems facilitate the learning process through gloss annotations, either in an end2end manner or by involving an intermediate step. Unfortunately, gloss labelled sign language data is usually not available at scale and, when available, gloss annotations widely differ from dataset to dataset. We present a novel approach using sentence embeddings of the target sentences at training time that take the role of glosses. The new kind of supervision does not need any manual annotation but it is learned on raw textual data. As our approach easily facilitates multilinguality, we evaluate it on datasets covering German (PHOENIX-2014T) and American (How2Sign) sign languages and experiment with mono- and multilingual sentence embeddings and translation systems. Our approach significantly outperforms other gloss-free approaches, setting the new state-of-the-art for data sets where glosses are not available and when no additional SLT datasets are used for pretraining, diminishing the gap between gloss-free and gloss-dependent systems.</abstract>
 <url hash="237e48a7">2024.acl-short.40</url>
-<bibkey>yasser-etal-2024-sign</bibkey>
+<bibkey>hamidullah-etal-2024-sign</bibkey>
 </paper>
 <paper id="41">
 <title><fixed-case>STREAM</fixed-case>: Simplified Topic Retrieval, Exploration, and Analysis Module</title>