
"transformers can use meaningless filler tokens (e.g., '......') in place of a chain of thought to solve two hard algorithmic tasks" Let's Think Dot by Dot. #815

Open
ShellLM opened this issue Apr 28, 2024 · 1 comment
Labels
llm-evaluation: Evaluating Large Language Models performance and behavior through human-written evaluation sets
New-Label: Choose this option if the existing labels are insufficient to describe the content accurately
Papers: Research papers
prompt-engineering: Developing and optimizing prompts to efficiently use language models for various applications and re

Comments

ShellLM (Collaborator) commented Apr 28, 2024

Exploring the Impact of Filler Tokens on Language Model Performance in Algorithmic Tasks

Snippet: "Chain-of-thought responses from language models improve performance across most benchmarks. However, it remains unclear to what extent these performance gains can be attributed to human-like task decomposition or simply the greater computation that additional tokens allow. We show that transformers can use meaningless filler tokens (e.g., '......') in place of a chain of thought to solve two hard algorithmic tasks they could not solve when responding without intermediate tokens. However, we find empirically that learning to use filler tokens is difficult and requires specific, dense supervision to converge. We also provide a theoretical characterization of the class of problems where filler tokens are useful in terms of the quantifier depth of a first-order formula. For problems satisfying this characterization, chain-of-thought tokens need not provide information about the intermediate computational steps involved in multi-token computations. In summary, our results show that additional tokens can provide computational benefits independent of token choice. The fact that intermediate tokens can act as filler tokens raises concerns about large language models engaging in unauditable, hidden computations that are increasingly detached from the observed chain-of-thought tokens."

URL: http://export.arxiv.org/abs/2404.15758
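For orientation, here is a minimal, purely hypothetical sketch of what "filler tokens in place of a chain of thought" looks like at the prompt level. The question text, function names, and filler count are illustrative assumptions, not from the paper; the paper trains transformers with dense filler-token supervision on synthetic algorithmic tasks rather than prompting an off-the-shelf model.

```python
# Hypothetical illustration only (not the paper's setup): three answer formats
# for the same toy question. In the "filler" format the intermediate tokens
# carry no task information, yet the paper's claim is that such tokens can
# still provide extra computation before the model commits to an answer.

QUESTION = "Is there a pair in {3, 7, 12, 1, 9} that sums to 16?"  # toy question, illustrative

def direct_prompt(q: str) -> str:
    """No intermediate tokens: the model must answer immediately."""
    return f"{q}\nAnswer:"

def chain_of_thought_prompt(q: str) -> str:
    """Human-readable intermediate reasoning before the answer."""
    return f"{q}\nLet's think step by step:"

def filler_prompt(q: str, n_filler: int = 30) -> str:
    """Meaningless filler tokens ('.') in place of a chain of thought."""
    filler = " ".join(["."] * n_filler)
    return f"{q}\n{filler}\nAnswer:"

if __name__ == "__main__":
    for build in (direct_prompt, chain_of_thought_prompt, filler_prompt):
        print(f"--- {build.__name__} ---")
        print(build(QUESTION))
        print()
```

As the abstract notes, prompting alone is unlikely to elicit this behavior: the authors report that learning to use filler tokens is difficult and requires specific, dense supervision to converge.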

Suggested labels

{'label-name': 'Transformer Language Models', 'label-description': 'Focuses on hidden computation in Transformer language models', 'confidence': 63.89}

ShellLM added the New-Label and Papers labels on Apr 28, 2024
ShellLM (Collaborator, Author) commented Apr 28, 2024

Related content

#715 similarity score: 0.87
#728 similarity score: 0.87
#655 similarity score: 0.86
#332 similarity score: 0.86
#774 similarity score: 0.86
#333 similarity score: 0.86

irthomasthomas changed the title from "Exploring the Impact of Filler Tokens on Language Model Performance in Algorithmic Tasks" to ""transformers can use meaningless filler tokens (e.g., '......') in place of a chain of thought to solve two hard algorithmic tasks" Let's Think Dot by Dot." on Apr 28, 2024
irthomasthomas added the llm-evaluation and prompt-engineering labels on May 2, 2024