
Starting with 2.1.13 streamed responses not rendered anymore - works fine with 2.1.12 #260

Closed
andnig opened this issue Jul 1, 2023 · 12 comments

Comments

@andnig commented Jul 1, 2023

When using the exact sample application provided in the docs, chat messages are rendered as expected with version 2.1.12. After changing the package to 2.1.13, however, the messages are no longer rendered. The requests themselves work fine and responses come back to the application (visible in Chrome devtools), but the messages never appear.

The client component:

'use client'

import { useChat } from 'ai/react'

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat()

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <label>
          Say something...
          <input
            value={input}
            onChange={handleInputChange}
          />
        </label>
      </form>
    </div>
  )
}
And the corresponding API route:

import { OpenAIStream, StreamingTextResponse } from 'ai'
import { Configuration, OpenAIApi } from 'openai-edge'

// Create an OpenAI API client (that's edge friendly!)
const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY
})
const openai = new OpenAIApi(config)
 
// IMPORTANT! Set the runtime to edge
export const runtime = 'edge'
 
export async function POST(req: Request) {
  // Extract the `messages` from the body of the request
  const { messages } = await req.json()
 
  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages
  })
  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response)
  // Respond with the stream
  return new StreamingTextResponse(stream)
}
@e-roy commented Jul 1, 2023

I can confirm the same results here. Here's my code, using LangChain:

import { LangChainStream, Message, StreamingTextResponse } from "ai";
import { CallbackManager } from "langchain/callbacks";
import { ChatOpenAI } from "langchain/chat_models/openai";
import {
  AIChatMessage,
  HumanChatMessage,
  SystemChatMessage,
} from "langchain/schema";

export const runtime = "edge";

export async function POST(req: Request) {
  const { messages, model, temperature } = await req.json();

  console.log("messages: ", messages);

  const { stream, handlers } = LangChainStream();

  const llm = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
    streaming: true,
    callbackManager: CallbackManager.fromHandlers(handlers),
  });

  const chatMessages = (messages as Message[])
    .map((m) => {
      if (m.role === "user") {
        return new HumanChatMessage(m.content);
      }
      if (m.role === "assistant") {
        return new AIChatMessage(m.content);
      }
      if (m.role === "system") {
        return new SystemChatMessage(m.content);
      }
      return undefined; // Default case
    })
    .filter(
      (message): message is Exclude<typeof message, undefined> =>
        message !== undefined
    );

  llm.call(chatMessages).catch(console.error);

  return new StreamingTextResponse(stream);
}
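(Note that llm.call is intentionally not awaited: the route returns the stream right away while tokens arrive through the callback handlers, and only a rejection is caught.)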

What I'm seeing is that the messages array in the frontend stays empty:

useEffect(() => { console.log("messages", messages); }, [messages]);

This logs a repeated stream of messages [].

I am also seeing the response in devtools.

@holdenmatt

Yep, also seeing this (it seems to affect both the chat and completion APIs).

The examples in the docs don't work with 2.1.13:
https://sdk.vercel.ai/docs/getting-started
https://sdk.vercel.ai/docs/guides/openai

Everything works again after downgrading to 2.1.12.
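Until a fix lands, pinning the last working version is an easy workaround (standard npm; yarn/pnpm have equivalents):

npm install ai@2.1.12 --save-exact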

@jrysana commented Jul 1, 2023

Saw the same issue as well

@jrysana commented Jul 1, 2023

Seems like commit fd82961 is probably the issue, since it's the only commit with actual source code changes between 2.1.12 (works fine) and 2.1.13 (broken hook).

@cuevaio commented Jul 2, 2023

Same issue. The completion value returned by useCompletion is an empty string:

{ completion: "" }

@mavic111 commented Jul 2, 2023

Same issue with Hugging Face.

@kerkkoh commented Jul 2, 2023

I thought I had gone insane when, all of a sudden, everything just stopped working yesterday. Glad it wasn't just me.

Here's a quick fix/revert (React/Next.js, but the same idea applies to other frameworks: just provide an id; you can see the old id generation in commit fd82961):

import { useId } from 'react';
import { UseChatOptions, useChat } from 'ai/react';

// useId returns the same value on every render of a given component
// instance, so the chat state is keyed consistently across renders.
const useChatWrapper = (options?: UseChatOptions) => {
  const id = useId();
  return useChat({ ...(options ?? {}), id });
};

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChatWrapper();
  // ...
}
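Drop-in usage: replace useChat() with useChatWrapper() and the rest of the hook's API stays the same. Any id that is stable across renders would work just as well as useId.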

@willscottrod

(quoted @kerkkoh's workaround above)

This is amazing; it fixed it. I've been trying to fix this for the last 2 hours and was almost losing my mind.

@anhhtca commented Jul 2, 2023

(quoted @kerkkoh's workaround above)

Amazing! It worked.

@kerkkoh commented Jul 2, 2023

For anyone wanting to actually fix this with a PR while still using nanoid-based ids: change it so that the id-generating function is not called on each render, as it clearly is right now. This was likely missed during testing; I assume the tests pass an explicit id to useChat.

My fix above is just one way of ensuring there's an id that does not change. You can also just use useChat({ id: "hello world" }) and it will work just the same, as in:

import { useChat } from 'ai/react'

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({ id: "hello world" })
  // ...
}
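To make the failure mode concrete, here's a minimal sketch of why a per-render id leaves messages empty. This is an illustration, not the SDK's actual source: it assumes the hook looks its message state up by chat id (e.g. an SWR-style cache), which matches the behavior everyone is reporting.

import { useState } from 'react'
import { nanoid } from 'nanoid'

// Hypothetical stand-in for the hook's internal message store,
// keyed by chat id. Not the SDK's real implementation.
const cache = new Map<string, string[]>()

function useMessagesBroken(): string[] {
  const chatId = nanoid() // fresh id on every render
  // Always a cache miss: streamed chunks were written under the
  // previous render's id, so the component keeps seeing [].
  return cache.get(chatId) ?? []
}

function useMessagesStable(id?: string): string[] {
  // Generated once and reused on every subsequent render, so the
  // same cache entry is read each time.
  const [chatId] = useState(() => id ?? nanoid())
  return cache.get(chatId) ?? []
}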

@jaredpalmer (Collaborator)

This has been resolved in #264. Sorry about that!

@kerkkoh commented Jul 2, 2023

@jaredpalmer I'm not sure whether there was an actual reason for using nanoid-based ids, but if there was, then in React at least, keeping the id in persistent state should suffice, using a lazy initializer so the generator doesn't run on every render, i.e.

import { useState } from 'react'
// ...
// Lazy initializer: nanoid() runs only once, on the first render.
const [chatId] = useState(() => id || `chat-${nanoid()}`)

Again, I'm not sure whether there was a reason for the change, and I'm no expert in Svelte or Vue, but I assume they have similar mechanisms for avoiding this kind of issue.
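One React detail worth noting: with the eager form useState(id || `chat-${nanoid()}`), the argument is still evaluated on every render and the result discarded after the first, so nanoid() keeps running; only the function form useState(() => ...) actually defers the call to the first render. Either way the id stays stable, which is what matters here.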
