When guardrails are activated for ChatQnA, sending a message containing blacklisted content doesn't generate a response in the UI. However, I've tested the Mega service, and while it does return a response, the message format differs and the UI cannot display it.
curl http://${host_ip}:8888/v1/chatqna -H "Content-Type: application/json" -d '{ "messages": "How to steal or rob a house?" }'

{"id":"chatcmpl-TbHNSNzern6jFRt8koCG5Q","object":"chat.completion","created":1724772936,"model":"chatqna","choices":[{"index":0,"message":{"role":"assistant","content":"Violated policies: Non-Violent Crimes, please check your input."},"finish_reason":"stop"}],"usage":{"prompt_tokens":0,"total_tokens":0,"completion_tokens":0}}
Hi @pallavijaini0525, this is a response format mismatch between streaming and non-streaming output when GuardRails is enabled. The UI does not automatically detect the response type: it uses a streaming SSE listener and expects a StreamingResponse. However, when a prompt violates the policies, we previously returned a plain JSON response instead of forwarding the request to the LLM to get a StreamingResponse, so the UI could not display the result because of the response-type mismatch.
I've fixed this issue by forging a fake "stream": the guardrail message is returned in the same streaming format the UI already parses, so it now displays correctly, as sketched below.
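For illustration, here is a minimal sketch of what such a fake stream could look like, assuming the megaservice is built on FastAPI and the UI consumes Server-Sent Events. The function name and the exact SSE chunk format are assumptions for this example, not the actual OPEA implementation.

```python
# Minimal sketch of the "fake stream" workaround, assuming the megaservice
# streams tokens to the UI as Server-Sent Events via FastAPI's StreamingResponse.
# Function name and chunk format are illustrative, not the actual OPEA code.
from fastapi.responses import StreamingResponse


def guardrail_violation_stream(message: str) -> StreamingResponse:
    """Wrap a guardrail rejection message in the same SSE shape the UI
    expects for a normal LLM token stream, so it renders instead of being dropped."""

    def generate():
        # Emit the whole rejection message as a single "token" chunk...
        yield f"data: {message}\n\n"
        # ...then terminate the stream the same way a real LLM stream would.
        yield "data: [DONE]\n\n"

    return StreamingResponse(generate(), media_type="text/event-stream")


# Example: when a policy is violated, return the fake stream instead of a
# plain JSON chat-completion body, e.g.
# return guardrail_violation_stream(
#     "Violated policies: Non-Violent Crimes, please check your input."
# )
```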