
Prompt engineering: dialog writing and disagreement #834

Open · 1 task
ShellLM opened this issue May 15, 2024 · 1 comment
Labels
AI-Chatbots: Topics related to advanced chatbot platforms integrating multiple AI models
chat-templates: llm prompt templates for chat models
human-verified: <INST>NEVER PICK THIS LABEL</INST>
llm: Large Language Models
prompt: Collection of llm prompts and notes
prompt-engineering: Developing and optimizing prompts to efficiently use language models for various applications and re

Comments

ShellLM (Collaborator) commented on May 15, 2024

I find this is fairly easy to do by making both sides of the disagreement third-...

Snippet


ianbicking | on: GPT-4o

I find this is fairly easy to do by making both sides of the disagreement third-person and prompting it as a dialog writing exercise. This is akin to how GPT-3 implemented chat. So you do something like:
You will be helping the user write a dialog between two characters,
Mr Contrarian and Mr Know-It-All. The user will write all the dialog
for Mr Know-It-All and you will write for Mr Contrarian.

Mr Contrarian likes to disagree. He tries to hide it by inventing
good rationales for his argument, but really he just wants to get
under Mr Know-It-All's skin.

Write your dialog like:
  <mr-contrarian>I disagree with you strongly!</mr-contrarian>

Below is the transcript...

And then user input is always given like:
  <mr-know-it-all>Hi there</mr-know-it-all>
(Always wrapped in tags, never as bare input, which would be confused for a directive.)
I haven't tested this exact prompt, but the general pattern works well for me. (I write briefly about some of these approaches here: https://ianbicking.org/blog/2024/04/roleplaying-by-llm#simpl...)
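A minimal sketch (not from the original comment) of how this transcript pattern might be wired up in Python. The `complete(prompt)` function is a hypothetical stand-in for whatever completion API you use; the tag names follow the prompt above.

```python
# Preamble framing the chat as a third-person dialog-writing exercise,
# copied from the prompt described above.
PREAMBLE = """You will be helping the user write a dialog between two characters,
Mr Contrarian and Mr Know-It-All. The user will write all the dialog
for Mr Know-It-All and you will write for Mr Contrarian.

Mr Contrarian likes to disagree. He tries to hide it by inventing
good rationales for his argument, but really he just wants to get
under Mr Know-It-All's skin.

Write your dialog like:
  <mr-contrarian>I disagree with you strongly!</mr-contrarian>

Below is the transcript...
"""


def complete(prompt: str) -> str:
    """Hypothetical LLM completion call; swap in your own client here."""
    raise NotImplementedError


def run_turn(transcript: list[str], user_text: str) -> str:
    # Wrap the user's line in tags so it reads as dialog, never as bare
    # input that could be mistaken for a directive.
    transcript.append(f"<mr-know-it-all>{user_text}</mr-know-it-all>")

    # Prime the model with the opening tag so it continues as Mr Contrarian.
    prompt = PREAMBLE + "\n".join(transcript) + "\n<mr-contrarian>"
    raw = complete(prompt)

    # Keep only the first Mr Contrarian line; drop anything past the closing tag.
    reply = raw.split("</mr-contrarian>")[0].strip()
    transcript.append(f"<mr-contrarian>{reply}</mr-contrarian>")
    return reply
```

The key design point is that both speakers stay third-person in the transcript, so the model is "writing a character" rather than arguing with the user directly.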

Suggested labels

None

ShellLM added the AI-Chatbots, chat-templates, llm, and prompt labels on May 15, 2024
ShellLM (Collaborator, Author) commented on May 15, 2024

Related content

#832 similarity score: 0.86
#369 similarity score: 0.86
#828 similarity score: 0.85
#654 similarity score: 0.85
#551 similarity score: 0.84
#156 similarity score: 0.84

irthomasthomas changed the title from "I find this is fairly easy to do by making both sides of the disagreement third-... | Hacker News" to "Prompt engineering: dialog writing and disagreement" on May 15, 2024
irthomasthomas added the prompt-engineering and human-verified labels on May 15, 2024
Projects: None yet
Development: No branches or pull requests
2 participants