Unable to add Custom OpenAI provider (Unexpected colon in URL) #63

Closed
bexem opened this issue Sep 1, 2024 · 6 comments
Labels: bug Something isn't working

bexem commented Sep 1, 2024

Bug Description

When configuring a Custom OpenAI provider in Home Assistant's LLMVision integration, the endpoint URL is parsed incorrectly: an extra colon (":") is inserted after the port number, which causes connection failures. This happens even when the input URL is correctly formatted.

Version:

  • LLM Vision: 1.1.0
  • Home Assistant: 2024.8.3

Attempted configuration

Custom Endpoint: http://192.168.4.204:5001/v1
API Key: test

Logs

2024-09-01 17:07:35.589 DEBUG (MainThread) [custom_components.llmvision.config_flow] Splits: 3
2024-09-01 17:07:35.589 DEBUG (MainThread) [custom_components.llmvision.config_flow] Connecting to: [protocol: http, base_url: 192.168.4.204:5001, port: :, endpoint: /v1/models]
2024-09-01 17:07:35.589 DEBUG (MainThread) [custom_components.llmvision.config_flow] Connecting to http://192.168.4.204:5001:/v1/models
2024-09-01 17:07:35.589 ERROR (MainThread) [custom_components.llmvision.config_flow] Could not connect to http://192.168.4.204:5001:/v1/models: http://192.168.4.204:5001:/v1/models
2024-09-01 17:07:35.589 ERROR (MainThread) [custom_components.llmvision.config_flow] Could not connect to Custom OpenAI server.
2024-09-01 17:07:35.590 ERROR (MainThread) [custom_components.llmvision.config_flow] Validation failed: handshake_failed
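
For illustration, reassembling the pieces exactly as the debug line above prints them shows where the doubled colon comes from (values copied from the log; this is a sketch of the symptom, not the integration's actual code):

    # base_url already contains the port, yet port is a bare ":", so joining
    # the parts doubles the separator.
    protocol, base_url, port, endpoint = "http", "192.168.4.204:5001", ":", "/v1/models"
    print(f"{protocol}://{base_url}{port}{endpoint}")
    # -> http://192.168.4.204:5001:/v1/models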

Additional context

  1. The custom OpenAI-compatible server (koboldcpp) is working correctly and can be accessed directly using the URL http://192.168.4.204:5001/v1 (I'm using it successfully with OpenWebUI).
  2. The server does not require an API key.
  3. The issue appears to be in the URL parsing logic of the LLMVision integration, which is adding an unexpected colon after the port number. (Or am I configuring it incorrectly?) The correct URL should be http://192.168.4.204:5001/v1/models, but the system is trying to connect to http://192.168.4.204:5001:/v1/models (note the extra colon).
bexem added the bug label on Sep 1, 2024

bexem commented Sep 2, 2024

I think I've made some progress with the code myself; I hope it actually helps rather than confusing you even more.
The custom OpenAI configuration can now be added successfully. However, I've encountered an unexpected issue during testing.

The problem appears to stem from KoboldCPP using a different API endpoint for image description than standard OpenAI implementations. While the configuration works smoothly with OpenWebUI (which I had assumed used the same OpenAI endpoints and completions format), KoboldCPP seems to diverge from this standard. At least this is what I think; I might be completely wrong and I'm happy to be corrected.
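
For reference, this is roughly what a standard OpenAI-compatible image-description request looks like (a sketch with placeholder model name and image data; whether koboldcpp serves this exact route is precisely the open question):

    import requests

    # Standard OpenAI-style vision request against the chat completions endpoint;
    # the model name and the base64 image payload are placeholders.
    payload = {
        "model": "some-local-model",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,<...>"}},
            ],
        }],
    }
    response = requests.post("http://192.168.4.204:5001/v1/chat/completions",
                             json=payload, timeout=30)
    print(response.status_code, response.text[:200])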

When attempting to use the service/action, I receive an error indicating that the model doesn't exist. To investigate further, I added more debugging lines to the custom component, including code to fetch and log available models. The component appears to parse the models correctly (detecting just one in my case), but curiously, no interaction was logged by KoboldCPP on its end.

# Excerpt from the component's config_flow.py; the CONF_* constants, _LOGGER and
# ServiceValidationError are already defined elsewhere in the integration.
import urllib.parse

    async def custom_openai(self):
        self._validate_provider()
        try:
            url = self.user_input[CONF_CUSTOM_OPENAI_ENDPOINT]
            parsed = urllib.parse.urlparse(url)
            
            protocol = parsed.scheme
            base_url = parsed.hostname
            port = f":{parsed.port}" if parsed.port else ""
            
            # Use the path from the input URL if it exists, otherwise use "/v1"
            path = parsed.path if parsed.path and parsed.path != "/" else "/v1"
            
            # Ensure the path ends with "/models"
            if not path.endswith("/models"):
                path = path.rstrip("/") + "/models"
            
            endpoint = path
            header = {
                'Content-type': 'application/json',
                'Authorization': f'Bearer {self.user_input[CONF_CUSTOM_OPENAI_API_KEY]}'
            }
            
            _LOGGER.debug(
                f"Connecting to: [protocol: {protocol}, base_url: {base_url}, port: {port}, endpoint: {endpoint}]")
            
        except Exception as e:
            _LOGGER.error(f"Could not parse endpoint: {e}")
            raise ServiceValidationError("endpoint_parse_failed")

        if not await self._handshake(base_url=base_url, port=port, protocol=protocol, endpoint=endpoint, header=header):
            _LOGGER.error("Could not connect to Custom OpenAI server.")
            raise ServiceValidationError("handshake_failed")
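
As a quick sanity check, running the same parsing logic standalone on the endpoint from my original report produces the expected URL without the stray colon (a sketch outside the integration):

    import urllib.parse

    url = "http://192.168.4.204:5001/v1"
    parsed = urllib.parse.urlparse(url)
    protocol = parsed.scheme                                 # 'http'
    base_url = parsed.hostname                               # '192.168.4.204'
    port = f":{parsed.port}" if parsed.port else ""          # ':5001'
    path = parsed.path if parsed.path and parsed.path != "/" else "/v1"
    if not path.endswith("/models"):
        path = path.rstrip("/") + "/models"                  # '/v1/models'
    print(f"{protocol}://{base_url}{port}{path}")
    # -> http://192.168.4.204:5001/v1/models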

valentinfrlch (Owner) commented

Thanks for the detailed error report and the code! I'll look into this

valentinfrlch (Owner) commented

Your parsing is much cleaner than the mess it was before, so I'm using it for now.
Connecting to the server works now, but making the request might still fail (though I only tested this with Open WebUI, which doesn't state that it actually implements OpenAI's endpoints).

I'll release a beta so that you can test it. Any feedback is welcome!

bexem commented Sep 2, 2024

I'm happy it worked! When I woke up, my brain suddenly remembered that I hadn't included the filename for the code 😅

I've tested the beta and managed to add my koboldcpp instance (thank you!), but it still errors when I try to use the service, saying the model doesn't exist.

I'm sure it has something to do with the OpenAI endpoint used by koboldcpp itself. Unfortunately I'll be working the next two nights, so I won't be of much help.

Anyways, thank you so much!

valentinfrlch (Owner) commented

Thanks for testing!
I got the same error for Open WebUI. It's actually not that it can't find the model: the integration just receives a 404 (not found), which usually indicates that the model doesn't exist. In this case, however, I'm pretty sure it's the endpoint that doesn't exist.

I will try this tomorrow with LocalAI just to be sure, as I know that its API implements the same endpoints as OpenAI.
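
A quick way to tell the two cases apart is to probe the standard OpenAI paths on the server directly: a 404 on the path itself points at a missing endpoint, while an error body that mentions the model points at a model-name problem. A minimal sketch, using bexem's server address from the original report as an example:

    import requests

    base = "http://192.168.4.204:5001"
    print("/v1/models:",
          requests.get(f"{base}/v1/models", timeout=5).status_code)
    print("/v1/chat/completions:",
          requests.post(f"{base}/v1/chat/completions", json={}, timeout=5).status_code)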

valentinfrlch (Owner) commented

Closing this for now. Feel free to reopen if you need more help!
