Remove attachments from 2024.findings-emnlp.479
mjpost committed Nov 16, 2024
1 parent f67b3d5 commit 0decfb7
Showing 1 changed file with 0 additions and 2 deletions.
2 changes: 0 additions & 2 deletions data/xml/2024.findings.xml
@@ -25261,8 +25261,6 @@
<pages>8189-8200</pages>
<abstract>Given the prompt “Rome is in”, can we steer a language model to flip its prediction of an incorrect token “France” to a correct token “Italy” by only multiplying a few relevant activation vectors with scalars? We argue that successfully intervening on a model is a prerequisite for interpreting its internal workings. Concretely, we establish a three-term objective: a successful intervention should flip the correct with the wrong token and vice versa (effectiveness), and leave other tokens unaffected (faithfulness), all while being sparse (minimality). Using gradient-based optimization, this objective lets us learn (and later evaluate) a specific kind of efficient and interpretable intervention: activation scaling only modifies the signed magnitude of activation vectors to strengthen, weaken, or reverse the steering directions already encoded in the model. On synthetic tasks, this intervention performs comparably with steering vectors in terms of effectiveness and faithfulness, but is much more minimal allowing us to pinpoint interpretable model components. We evaluate activation scaling from different angles, compare performance on different datasets, and make activation scalars a learnable function of the activation vectors themselves to generalize to varying-length prompts.</abstract>
<url hash="203f81ad">2024.findings-emnlp.479</url>
-<attachment type="software" hash="6ea8eab8">2024.findings-emnlp.479.software.zip</attachment>
-<attachment type="data" hash="034238e8">2024.findings-emnlp.479.data.zip</attachment>
<bibkey>stoehr-etal-2024-activation</bibkey>
</paper>
<paper id="480">
