feat: sync chat_ctx for openai RealtimeModel from and to the remote realtime session #1015

Merged: 20 commits into livekit:main on Nov 12, 2024

Conversation

@longcw (Collaborator) commented Oct 31, 2024

  • write transcripts to chat context
  • update realtime session based on input chat_ctx
  • add sync_chat_ctx to sync the local chat context to OAI realtime session

changeset-bot (bot) commented Oct 31, 2024

🦋 Changeset detected

Latest commit: dd74f0f

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 2 packages:

  • livekit-plugins-openai: Minor
  • livekit-agents: Minor


@longcw force-pushed the fix/write-transcripts-to-context branch from c23d7a9 to 03fd0ee on November 2, 2024 07:19
@longcw changed the title from "feat: write transcripts to context and update realtime session based on input chat_ctx" to "feat: sync chat_ctx for multimodal agents from and to the realtime session" on Nov 2, 2024
@longcw changed the title from "feat: sync chat_ctx for multimodal agents from and to the realtime session" to "[draft] feat: sync chat_ctx for multimodal agents from and to the realtime session" on Nov 3, 2024
@longcw (Collaborator, Author) commented Nov 3, 2024

Some known issues to be solved:

  1. The user speaking at the beginning before the sync is finished may cause an error and the agent hangs with ERROR livekit.plugins.openai.realtime - OpenAI S2S error {'type': 'error', 'event_id': 'event_APLDybDWwpjHeB4rVohDV', 'error': {'type': 'invalid_request_error', 'code': None, 'message': 'Only model output audio messages can be truncated', 'param': None, 'event_id': None}}
  2. The function calls are also stored as conversation items on OAI; for now the local chat_ctx only has the chat items, which may mismatch with the remote version when there are function calls.

@davidzhao (Member) commented:

  1. The user speaking at the beginning before the sync is finished may cause an error and the agent hangs with ERROR livekit.plugins.openai.realtime - OpenAI S2S error {'type': 'error', 'event_id': 'event_APLDybDWwpjHeB4rVohDV', 'error': {'type': 'invalid_request_error', 'code': None, 'message': 'Only model output audio messages can be truncated', 'param': None, 'event_id': None}}

For this issue, could we address it by ignoring user input during the sync? How long does that typically take in your tests?

@longcw (Collaborator, Author) commented Nov 4, 2024

For this issue, could we address by ignoring user input during the sync? how long does that take typically in your tests?

Yes, I think so. The sync takes just 1-2 seconds on my end.
Perhaps it's related to the truncate logic in the agent; the following is the only place the agent calls item.truncate. I'll try to understand the root cause and address it.

        @self._session.on("input_speech_started")
        def _input_speech_started():
            self.emit("user_started_speaking")
            self._update_state("listening")
            if self._playing_handle is not None and not self._playing_handle.done():
                self._playing_handle.interrupt()

                self._session.conversation.item.truncate(
                    item_id=self._playing_handle.item_id,
                    content_index=self._playing_handle.content_index,
                    audio_end_ms=int(self._playing_handle.audio_samples / 24000 * 1000),
                )
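For illustration, a minimal sketch of one possible guard for that call, assuming the playout handle exposes the pushed audio sample count as in the snippet above (the guard itself is an assumption, not the merged fix):

    if self._playing_handle is not None and not self._playing_handle.done():
        self._playing_handle.interrupt()

        # "Only model output audio messages can be truncated", so skip the
        # truncate request when no audio has been played for this item
        # (e.g. a text-only response, or playout interrupted before any audio)
        if self._playing_handle.audio_samples > 0:
            self._session.conversation.item.truncate(
                item_id=self._playing_handle.item_id,
                content_index=self._playing_handle.content_index,
                audio_end_ms=int(
                    self._playing_handle.audio_samples / 24000 * 1000
                ),
            )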

@longcw (Collaborator, Author) commented Nov 4, 2024

Some known issues to be solved:

  1. The user speaking at the beginning before the sync is finished may cause an error and the agent hangs with ERROR livekit.plugins.openai.realtime - OpenAI S2S error {'type': 'error', 'event_id': 'event_APLDybDWwpjHeB4rVohDV', 'error': {'type': 'invalid_request_error', 'code': None, 'message': 'Only model output audio messages can be truncated', 'param': None, 'event_id': None}}
  2. The function calls are also stored as conversation items on OAI, for now the local chat_ctx only has the chat items that may mismatch with the remote version when there are function calls.

The first problem I found, the agent hanging, was caused by the realtime API responding with text instead of audio in this case. In some cases the API will only output text if a text chat context is set.

Some messages I received when this happened:

received WSMessage(type=<WSMsgType.TEXT: 1>, data='{"type":"response.text.delta","event_id":"event_APlzgvMN6oippCHur2YKD","response_id":"resp_APlzfiKfj4UAJHFskFcHd","item_id":"item_APlzfSORFWckGGCUgaNCg","output_index":0,"content_index":0,"delta":" How"}', extra='')
received WSMessage(type=<WSMsgType.TEXT: 1>, data='{"type":"response.text.delta","event_id":"event_APlzgjnpD7buTPItBG2XQ","response_id":"resp_APlzfiKfj4UAJHFskFcHd","item_id":"item_APlzfSORFWckGGCUgaNCg","output_index":0,"content_index":0,"delta":" are"}', extra='')
received WSMessage(type=<WSMsgType.TEXT: 1>, data='{"type":"response.text.delta","event_id":"event_APlzg67kWskfiC2Iijy4Z","response_id":"resp_APlzfiKfj4UAJHFskFcHd","item_id":"item_APlzfSORFWckGGCUgaNCg","output_index":0,"content_index":0,"delta":" you"}', extra='')
received WSMessage(type=<WSMsgType.TEXT: 1>, data='{"type":"response.text.delta","event_id":"event_APlzgLiTHPW6ctfJm7W68","response_id":"resp_APlzfiKfj4UAJHFskFcHd","item_id":"item_APlzfSORFWckGGCUgaNCg","output_index":0,"content_index":0,"delta":"?"}', extra='')
received WSMessage(type=<WSMsgType.TEXT: 1>, data='{"type":"response.text.done","event_id":"event_APlzgNY8o4MYrIRjeHZYk","response_id":"resp_APlzfiKfj4UAJHFskFcHd","item_id":"item_APlzfSORFWckGGCUgaNCg","output_index":0,"content_index":0,"text":"Hello! I\'m doing well, thank you. How are you?"}', extra='')

I disabled the audio playout in this commit if the response is text, to avoid awaiting forever on the audio buffer, but once the API outputs text it will always output text. Do you have any idea about this? @theomonnom @davidzhao

It seems that this is a common issue with the realtime API: https://community.openai.com/t/realtime-api-no-response-audio-or-audio-deltas-despite-modalities-being-set-to-audio-text/991062
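For illustration, the kind of guard described above could look roughly like this; the handler wiring and the _start_audio_playout helper are illustrative stand-ins, not the plugin's actual internals:

    import logging

    logger = logging.getLogger("livekit.plugins.openai.realtime")

    def _handle_content_part_added(self, event: dict) -> None:
        # `event` is the parsed response.content_part.added message; the "part"
        # field carries {"type": "audio" | "text", ...}
        part = event["part"]
        if part["type"] != "audio":
            # a text-only part: warn and bail out instead of allocating a
            # playout that would wait forever for audio frames
            logger.warning("realtime API returned a text content part; skipping audio playout")
            return
        self._start_audio_playout(  # hypothetical helper standing in for the real playout setup
            item_id=event["item_id"], content_index=event["content_index"]
        )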

Review thread on livekit-agents/livekit/agents/utils/message_change.py (outdated, resolved)
@@ -77,9 +94,19 @@ async def get_weather(
),
),
fnc_ctx=fnc_ctx,
chat_ctx=chat_ctx,
Member commented:

Looks nice!

Review thread on livekit-agents/livekit/agents/utils/message_change.py (outdated, resolved)
Member commented:

🙌

@@ -767,6 +798,35 @@ def session_update(
}
)

def _sync_chat_ctx_to_session(
Member commented:

Do you think this can be an async function that waits for the OAI answers?
We're currently emitting conversation_item_deleted and conversation_item_created instead.

Member commented:

I guess conversation.item.create needs to be async?

@longcw (Collaborator, Author) commented:

sounds good!

@longcw (Collaborator, Author) commented:

Added an async version of item create.

Comment on lines 688 to 692
@chat_ctx.setter
def chat_ctx(self, chat_ctx: llm.ChatContext) -> None:
    """Sync the ctx to the session and reset the chat context."""
    self._sync_chat_ctx_to_session(self._chat_ctx, chat_ctx)
    self._chat_ctx = chat_ctx
Member commented:

I don't think we should add a setter. Instead we should have a public sync_ctx function.
I guess RealtimeModel will need to have 2 llm.ChatContext objects (one with the current server state, and one that the user can edit).

@longcw (Collaborator, Author) commented:

Yeah, I'm thinking something similar: we should keep a "copy" of the server state in the RealtimeModel that the user cannot edit. I'll add that.

@longcw (Collaborator, Author) commented:

Created a _conversation_items (maybe give it another name later) in the realtime session to track the item created and deleted events from the API. The content of each item, like the user and agent transcriptions and the function call outputs, will be updated on the item by the multimodal agent.

The _session.chat_ctx will be created from the tracked conversation items, and the user can sync a chat_ctx to the session as well.

@theomonnom (Member) commented:

  1. The function calls are also stored as conversation items on OAI, for now the local chat_ctx only has the chat items that may mismatch with the remote version when there are function calls.

The chat_ctx does support function tool messages. E.g. see how we handle it for the PipelineAgent.

@theomonnom (Member) commented:

I disabled the audio playout in (this commit) if the response is a text to avoid forever await on the audio buffer, but when the API outputs a text it will always output in text. Do you have any idea about this?

Not sure I understand what you mean by this. Is there no audio being generated?

@longcw (Collaborator, Author) commented Nov 5, 2024

  1. The function calls are also stored as conversation items on OAI, for now the local chat_ctx only has the chat items that may mismatch with the remote version when there are function calls.

The chat_ctx does support function tools messages. E.g on how we handle it for the PipelineAgent

So you mean to append the function tool messages to the chat_ctx, right? That makes sense.

@longcw (Collaborator, Author) commented Nov 5, 2024

I disabled the audio playout in (this commit) if the response is a text to avoid forever await on the audio buffer, but when the API outputs a text it will always output in text. Do you have any idea about this?

Not sure to understand what you mean by this? Is there no audio being generated?

No, there is no audio generated in that case; the API responds with text.delta instead. Here are some example messages from the realtime API:

received WSMessage(type=<WSMsgType.TEXT: 1>, data='{"type":"response.text.delta","event_id":"event_APlzgvMN6oippCHur2YKD","response_id":"resp_APlzfiKfj4UAJHFskFcHd","item_id":"item_APlzfSORFWckGGCUgaNCg","output_index":0,"content_index":0,"delta":" How"}', extra='')
received WSMessage(type=<WSMsgType.TEXT: 1>, data='{"type":"response.text.delta","event_id":"event_APlzgjnpD7buTPItBG2XQ","response_id":"resp_APlzfiKfj4UAJHFskFcHd","item_id":"item_APlzfSORFWckGGCUgaNCg","output_index":0,"content_index":0,"delta":" are"}', extra='')
received WSMessage(type=<WSMsgType.TEXT: 1>, data='{"type":"response.text.delta","event_id":"event_APlzg67kWskfiC2Iijy4Z","response_id":"resp_APlzfiKfj4UAJHFskFcHd","item_id":"item_APlzfSORFWckGGCUgaNCg","output_index":0,"content_index":0,"delta":" you"}', extra='')
received WSMessage(type=<WSMsgType.TEXT: 1>, data='{"type":"response.text.delta","event_id":"event_APlzgLiTHPW6ctfJm7W68","response_id":"resp_APlzfiKfj4UAJHFskFcHd","item_id":"item_APlzfSORFWckGGCUgaNCg","output_index":0,"content_index":0,"delta":"?"}', extra='')
received WSMessage(type=<WSMsgType.TEXT: 1>, data='{"type":"response.text.done","event_id":"event_APlzgNY8o4MYrIRjeHZYk","response_id":"resp_APlzfiKfj4UAJHFskFcHd","item_id":"item_APlzfSORFWckGGCUgaNCg","output_index":0,"content_index":0,"text":"Hello! I\'m doing well, thank you. How are you?"}', extra='')

The main branch currently doesn't check the part type of the response.content_part.added message, so it creates a playout instance waiting for audio that never comes, and the audio stream doesn't close.

But the main problem is how to make sure the API always returns audio instead of text.
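One mitigation that shows up in community discussions (not a confirmed fix) is to explicitly ask for both modalities on the session and on manually created responses; a hedged sketch of the raw events, assuming ws is the open realtime websocket (e.g. an aiohttp ClientWebSocketResponse):

    import json

    async def request_audio_output(ws) -> None:
        # session.update and response.create both accept a "modalities" field;
        # requesting ["text", "audio"] nudges the model toward audio output,
        # though per the community threads referenced in this conversation it
        # may still answer in text only
        await ws.send_str(json.dumps(
            {"type": "session.update", "session": {"modalities": ["text", "audio"]}}
        ))
        await ws.send_str(json.dumps(
            {"type": "response.create", "response": {"modalities": ["text", "audio"]}}
        ))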

@davidzhao (Member) commented:

But the main problem is how to make sure the API always returns audio instead of text.

If the API doesn't return audio for any reason, can we make it so the agent doesn't wait for playout?

@longcw (Collaborator, Author) commented Nov 5, 2024

But the main problem is how to make sure the API always returns audio instead of text.

if the API doesn't return audio for any reason, can make it so the agent doesn't wait for playout?

Yes, I already added a check in this commit so that it will skip the playout if it's a text response. But from what I saw, if the realtime API responds in text at the start, it will always respond in text for the rest of the session.

Here are some very recent discussions about this issue on OAI's community forum; I'll try to find out whether there is any workaround:

How can I switch from text generation to audio generation? - API - OpenAI Developer Forum

Realtime API: Did anybody managed to provide previous conversation transcript history while keeping audio answers? - API / Bugs - OpenAI Developer Forum

@longcw changed the title from "[draft] feat: sync chat_ctx for multimodal agents from and to the realtime session" to "feat: sync chat_ctx for multimodal agents from and to the realtime session" on Nov 7, 2024
_next: Optional[ConversationItem] = field(default=None, repr=False)


class ConversationItems:
@longcw (Collaborator, Author) commented:

This is a doubly linked list to mimic the item operations in the Realtime API. Let me know if this is necessary or if there is any other tool for easily deleting and inserting items in the list. It may also need a rename to avoid confusion with the Conversation Items in the API.

@theomonnom (Member) commented Nov 7, 2024:

I think this is OK since this is internal. Maybe RemoteConversationItems?
Let's prepend '_' to the file name and classes to explicitly mark them as internal

@longcw (Collaborator, Author) commented:

Sounds good.
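For illustration, a minimal sketch of that idea under the proposed naming: a _RemoteConversationItems keeping a doubly linked list keyed by item id, mirroring the Realtime API's insert-after/delete semantics (details are illustrative, not the merged implementation):

    from dataclasses import dataclass, field
    from typing import Any, Dict, List, Optional


    @dataclass
    class _ItemNode:
        item_id: str
        message: Any  # llm.ChatMessage in the real implementation
        _prev: Optional["_ItemNode"] = field(default=None, repr=False)
        _next: Optional["_ItemNode"] = field(default=None, repr=False)


    class _RemoteConversationItems:
        """Local mirror of the server-side conversation item list."""

        def __init__(self) -> None:
            self._head: Optional[_ItemNode] = None
            self._tail: Optional[_ItemNode] = None
            self._nodes: Dict[str, _ItemNode] = {}

        def insert_after(self, previous_item_id: Optional[str], item_id: str, message: Any) -> None:
            # previous_item_id=None mimics the API default of appending at the end
            node = _ItemNode(item_id=item_id, message=message)
            prev = self._tail if previous_item_id is None else self._nodes[previous_item_id]
            node._prev = prev
            node._next = prev._next if prev else None
            if prev:
                prev._next = node
            else:
                self._head = node
            if node._next:
                node._next._prev = node
            else:
                self._tail = node
            self._nodes[item_id] = node

        def delete(self, item_id: str) -> None:
            node = self._nodes.pop(item_id)
            if node._prev:
                node._prev._next = node._next
            else:
                self._head = node._next
            if node._next:
                node._next._prev = node._prev
            else:
                self._tail = node._prev

        def messages(self) -> List[Any]:
            # walk head -> tail to rebuild the ordered chat history
            out, cur = [], self._head
            while cur:
                out.append(cur.message)
                cur = cur._next
            return out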

Comment on lines 813 to 834
async def aitem_create(
    self,
    message: llm.ChatMessage,
    previous_item_id: str | None = None,
    _on_create_callback: Callable[[], None] | None = None,
) -> None:
    # register a future that is resolved when the server acknowledges the
    # item with a conversation.item.created event
    fut = asyncio.Future[None]()
    self._item_created_futs[message.id] = fut
    self.conversation.item.create(message, previous_item_id)
    if _on_create_callback:
        _on_create_callback()
    await fut
    del self._item_created_futs[message.id]

async def aitem_delete(self, item_id: str) -> None:
    # same pattern for conversation.item.deleted acknowledgements
    fut = asyncio.Future[None]()
    self._item_deleted_futs[item_id] = fut
    self.conversation.item.delete(item_id)
    await fut
    del self._item_deleted_futs[item_id]

async def async_chat_ctx(self, new_ctx: llm.ChatContext) -> None:
Member commented:

I was more thinking about making the current API async. Feel free to break the API. The RealtimeModel API is still beta :)

@longcw (Collaborator, Author) commented:

Do you mean making self.conversation.item.create and self.conversation.item.delete async?

Maybe I can move acreate and adelete into conversation.item but keep the original create and delete, since for some cases like sync_chat_ctx we may want to add all the create and delete messages to the event queue at once and wait at the end?

@theomonnom (Member) commented Nov 9, 2024:

Do you mean to make the self.conversation.item.create and self.conversation.item.delete be async?

Yep.

I don't think we need two functions (sync and async).
What we can do is just use asyncio.gather when using sync_chat_ctx.

@longcw (Collaborator, Author) commented Nov 10, 2024:

My concern is that when we call gather(item.acreate(item1), item.acreate(item2)), is there a guarantee that the two items are pushed to the message queue in the correct order?
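A sketch of the ordering-preserving pattern being discussed, reusing the create/futures shape from the snippet above (method and attribute names are illustrative): enqueue every create synchronously, in order, then await the acknowledgements together.

    import asyncio

    async def _sync_items_in_order(self, new_messages) -> None:
        futs: list[asyncio.Future[None]] = []
        for msg in new_messages:
            fut = asyncio.get_running_loop().create_future()
            self._item_created_futs[msg.id] = fut
            # the non-async create only pushes conversation.item.create onto the
            # outgoing message queue, so the enqueue order is the server order
            self.conversation.item.create(msg)
            futs.append(fut)

        # then wait for all conversation.item.created acknowledgements at the end,
        # instead of awaiting each create before issuing the next one
        await asyncio.gather(*futs)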

@theomonnom (Member) commented Nov 9, 2024

What do you think of an API looking like:

model = RealtimeModel()
model.chat_ctx.append(...)
model.chat_ctx.append(role="user", ...)
await model.sync_chat_ctx() # Here sync_chat_ctx doesn't have arguments

If you want to replace the whole chat_ctx we could do smthg like

model.chat_ctx = my_new_chat_ctx
await model.sync_chat_ctx()

@theomonnom (Member) left a review comment:

See message on Slack, otherwise lgtm!

@longcw (Collaborator, Author) commented Nov 10, 2024

What do you think of an API looking like:

model = RealtimeModel()
model.chat_ctx.append(...)
model.chat_ctx.append(role="user", ...)
await model.sync_chat_ctx() # Here sync_chat_ctx doesn't have arguments

If you want to replace the whole chat_ctx we could do smthg like

model.chat_ctx = my_new_chat_ctx
await model.sync_chat_ctx()

The following is the current API; the user always needs to grab a copy of the current chat_ctx and then modify it:

updated_ctx = model.chat_ctx  # maybe make this a method to emphasize this returns a copy of the ctx 
updated_ctx.append(...)
await model.sync_chat_ctx(updated_ctx)

I think your version is more straightforward, but there might be a problem: the internally managed ctx and the user-modifiable ctx might get out of sync if the user modifies the latter without an immediate await sync_chat_ctx().

For example, if the user calls model.chat_ctx.append(...) or model.chat_ctx = my_new_chat_ctx without a following await model.sync_chat_ctx() (maybe they forget, or it doesn't happen in time) before the internal chat ctx is updated with new conversation items, then sync_chat_ctx will actually remove the new conversation items, which may not be expected.

@davidzhao (Member) commented:

it'd be great to explicitly call out that you are getting a copy of the context.. i.e.

ctx_copy = model.chat_ctx_copy()
...
await model.set_chat_ctx(ctx_copy)

if we are simply replacing, instead of modifying in-place, then I think set_ would be a better API compared to sync_

@longcw (Collaborator, Author) commented Nov 10, 2024

@davidzhao I have updated the API as you suggested.
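For reference, the resulting usage looks roughly like this (a hedged sketch; chat_ctx_copy() and set_chat_ctx() are the names agreed on above, and the exact object they live on may differ in the merged code):

    # grab an editable copy of the tracked context, modify it locally,
    # then push the whole thing back to the remote realtime session
    ctx = model.chat_ctx_copy()
    ctx.append(role="user", text="Earlier we agreed to meet at 3pm.")
    await model.set_chat_ctx(ctx)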

Comment on lines 74 to 76
# Add some test context to verify if the sync_chat_ctx works
# FIXME: OAI realtime API does not support this properly when the chat context is too many
# It may answer with the text responses only for some cases
Member commented:

Seems like this comment is now outdated?

Comment on lines 855 to 874
if new_ctx.messages and all(
    isinstance(msg.content, str) for msg in new_ctx.messages
):
Member commented:

Can you check if audio is inside the modality?

@longcw (Collaborator, Author) commented:

Changed to:

if new_ctx.messages and not any(
    isinstance(msg.content, llm.ChatAudio) for msg in new_ctx.messages
):

@theomonnom (Member) commented Nov 10, 2024

it'd be great to explicitly call out that you are getting a copy of the context.. i.e.

ctx_copy = model.chat_ctx_copy()
...
await model.set_chat_ctx(ctx_copy)

if we are simply replacing, instead of modifying in-place, then I think set_ would be a better API compared to sync_

This makes sense, but it would be great to still allow in-place edits. Since people have access to model.chat_ctx, they may expect it to be editable.

So adding sync_chat_ctx would start the synchronization after the user makes some updates. I think it is OK to have both: set_chat_ctx when you want to replace the whole context, or simply sync_chat_ctx if you made some small changes directly.
Wdyt?

@theomonnom (Member) commented Nov 10, 2024

Awesome! Only small nits related to the API are remaining. Can you also fix the type-check CI?


from livekit.plugins.openai import realtime

@self._session.on("response_content_added")
def _on_content_added(message: realtime.RealtimeContent):
    if message.content_type == "text":
        logger.warning(
@longcw (Collaborator, Author) commented:

During my testing there is still a chance the agent responds in text mode. Should we make this an error log with more details, and emit an event so people can decide to restart the agent in their script?
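A sketch of what the user-facing side of that could look like if such an event were emitted; the event name, payload, and restart_agent helper are hypothetical:

    import asyncio
    import logging

    logger = logging.getLogger(__name__)

    # `session` is the realtime session and "response_text_only" a hypothetical
    # event name standing in for whatever the plugin would emit
    @session.on("response_text_only")
    def _on_text_only(response_id: str) -> None:
        logger.error("realtime session replied in text mode (response_id=%s)", response_id)
        # let the application decide how to recover, e.g. tear down and restart
        asyncio.create_task(restart_agent())  # restart_agent: application-defined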

@longcw (Collaborator, Author) commented Nov 11, 2024

it'd be great to explicitly call out that you are getting a copy of the context.. i.e.

ctx_copy = model.chat_ctx_copy()
...
await model.set_chat_ctx(ctx_copy)

if we are simply replacing, instead of modifying in-place, then I think set_ would be a better API compared to sync_

This makes sense, but it would be great to still allow in-place edit. Since ppl have access to the model.chat_ctx, they may expect that it is editable.

So adding sync_chat_ctx will start the synchronization after the user did some updates. I think it is OK to have both, set_chat_ctx when you want to update the whole context. or simply call sync_chat_ctx if you directly did some small changes. Wdyt?

Actually it has only model.chat_ctx_copy() now. As model.chat_ctx would always be a copy of the current ctx, we may not want users to do something like model.chat_ctx.append().

@longcw force-pushed the fix/write-transcripts-to-context branch from 2e1f551 to eb8ac4e on November 11, 2024 05:49
@longcw force-pushed the fix/write-transcripts-to-context branch from c27ba3e to dd74f0f on November 11, 2024 06:05
@longcw (Collaborator, Author) commented Nov 11, 2024

Awesome! Only small nits related to the API are remaining. Can you also fix the type-check CI?

Fixed.

@davidzhao (Member) commented:

This makes sense, but it would be great to still allow in-place edit. Since ppl have access to the model.chat_ctx, they may expect that it is editable.

It's more likely to race when users edit in place while the realtime API may also be adding to it. It can lead to a bit of unpredictability in terms of what is in the chat history.

# Create a task to wait for initialization and start the main task
async def _init_and_start():
    try:
        await self._session._init_sync_task

A reviewer commented:

Weirdly, I was testing out this branch using the virtual assistant sandbox playground and had an issue in which the agent wouldn't start while waiting for this task to complete. Does that make sense at all? I removed it temporarily from my local venv and the agent works fine, albeit after a 2-second waiting period to sync the initial context.

@longcw (Collaborator, Author) commented:

Can you share the package and the version you installed with the issue? I can take a look and try to reproduce it.

The reviewer replied:

Using the latest commit actually helped fix the issue, apologies! Thanks for the tip that it could've been an issue with the commit I was on.

@theomonnom (Member) commented Nov 11, 2024

This makes sense, but it would be great to still allow in-place edit. Since ppl have access to the model.chat_ctx, they may expect that it is editable.

It'll more likely race when users edit in place while the realtime API may add to it. it can lead to a bit of unpredictability in terms of what is in the chat history.

Stuff can't race if the whole "edit code" is sync and the first async fnc to be called is model.sync_chat_ctx()

@davidzhao (Member) commented:

Stuff can't race if the whole "edit code" is sync and the first async fnc to be called is model.sync_chat_ctx()

It's more that there may be content added to the context that's unexpected by the dev. Imagine the following:

  • user is speaking to agent
  • dev calls .append() to prompt the LLM to return something else
  • user transcript committed, and gets added to the end of context
  • dev calls .sync_chat_ctx(), which actually syncs the user prompt, instead of dev prompt

the overall point is that if two separate processes could modify chat_context, the order and content of the message history would end up being unpredictable.

@theomonnom (Member) commented:

Stuff can't race if the whole "edit code" is sync and the first async fnc to be called is model.sync_chat_ctx()

It's more that there may be content added to the context that that's unexpected by the dev. imagine the following:

  • user is speaking to agent
  • dev calls .append() to prompt the LLM to return something else
  • user transcript committed, and gets added to the end of context
  • dev calls .sync_chat_ctx(), which actually syncs the user prompt, instead of dev prompt

the overall point is that if two separate processes could modify chat_context, the order and content of the message history would end up being unpredictable.

Agree it could create more confusion. Let's see if ppl are asking for direct access.

@theomonnom changed the title from "feat: sync chat_ctx for multimodal agents from and to the realtime session" to "feat: sync chat_ctx for openai RealtimeModel from and to the remote realtime session" on Nov 12, 2024
@theomonnom merged commit 74f00c3 into livekit:main on Nov 12, 2024
4 checks passed