Configuration validation and annotations on errors #80
I think it's fine given that we can make 5,000 requests per hour, per organization, before we get rate limited as an app. Let's keep it simple and not do the separate Worker. Config changes do not happen regularly.
@0x4007 What you say is valid for the GitHub API. But Workers are limited to 50 requests per run / instance.
We can do this in a plugin instead?
I thought about it, but it means that the plugin would need access to the private repo containing the configuration, with read / write permissions, which seems dangerous. Or maybe we can run that plugin within the configuration repo itself. But if we do so, we cannot handle per-repo configurations, I think.
Cool idea
Research will reveal the answer!
/start
What should be done beyond the kernel-side changes:
Can you elaborate? This isn't clear to others.
The kernel workflow should be the following:
The v1 had a similar workflow. I think it is important to have this functionality, especially for newcomers who could be confused by the configuration, or to avoid breaking it during development and updates.
@0x4007 I've seen you mention an issue about the base multiplier having changed for v2, and indeed it was reset to
@0x4007 Would something similar to the following format be satisfactory? It gives the error and links the corresponding line with a preview:
Also use the [!CAUTION] syntax. The red one, whatever the syntax is.
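The error format with a line link plus the [!CAUTION] alert could be combined roughly like this. A minimal sketch with assumed names: `ConfigError`, `buildErrorComment`, and the config file path are all hypothetical, not the actual implementation.

```typescript
// Sketch (hypothetical shapes): build a commit comment that flags config
// errors with GitHub's [!CAUTION] markdown alert and a permalink to the
// offending line, so the author sees the error inline with a preview.
interface ConfigError {
  path: string;    // e.g. "plugins.0.uses.0.with.multiplier"
  message: string; // message coming from the validator
  line: number;    // line in the config file where the error sits
}

function buildErrorComment(repoUrl: string, sha: string, errors: ConfigError[]): string {
  const header = "> [!CAUTION]\n> Your configuration is invalid.\n";
  const items = errors.map(
    (e) =>
      `- \`${e.path}\`: ${e.message} ([line ${e.line}](${repoUrl}/blob/${sha}/.github/.ubiquibot-config.yml#L${e.line}))`
  );
  return [header, ...items].join("\n");
}
```

GitHub renders the `> [!CAUTION]` blockquote as the red alert box, and the `#L<n>` fragment in a blob URL highlights the linked line.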
Progress update: Maybe that would be too noisy, since a user would get tagged each time a plugin is done validating. So maybe what we can consider:
Downside of reducing noise by editing the message is that every plugin will be async, and maybe errors keep coming after the user thought the configuration was valid. What do you think? Another run example: https://github.com/Meniole/ubiquibot-config/commit/3e9152e3eadd98ef20b10bfba7529533ed392c46
Yes, do both of your suggestions.
Oh, I thought the validator was one and done inside the kernel. This approach across every plugin seems wrong. This can be dynamic by reading the plugins' ajv validator files.
@0x4007 I think it makes more sense to have it within the plugin itself; we could even have it within our SDK for simplicity. I do not see how the kernel can understand plugin configurations.
My idea was to import and run the ajv validation code. Could be risky, but maybe there's a way to quarantine it in a way that's not too complex.
We do not use
Also, in the case of Workers, the kernel does not know which repository they refer to, so we should serve the file as plain text on the endpoint, which doesn't seem elegant. Another advantage of using the plugin directly is that it currently also detects invalid GitHub / Worker environment variables, which is very helpful.
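The point about decode-time detection is that a value like `"test"` passes a plain string type check but is not a usable plugin locator. A minimal sketch of that kind of check, with hand-written logic standing in for the actual typebox decode step (function name and regexes are assumptions):

```typescript
// Sketch: why decode-time validation catches errors a bare type check cannot.
// "test" is a valid string, but is neither a Worker URL nor an "owner/repo"
// Action path, so a schema that only checks `type: "string"` would accept it.
function decodePluginUrl(value: string): string {
  // Worker plugins: a full https URL to the deployed endpoint
  const isWorkerUrl = /^https:\/\/\S+$/.test(value);
  // Action plugins: "owner/repo", optionally with an "@ref" suffix
  const isActionPath = /^[\w.-]+\/[\w.-]+(@[\w./-]+)?$/.test(value);
  if (!isWorkerUrl && !isActionPath) {
    throw new Error(`"${value}" is neither a Worker URL nor an Action path`);
  }
  return value;
}
```

Since only the plugin (or its SDK) knows this decode logic, the kernel cannot reproduce it from the type alone.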
I think we need to figure this out eventually because of the marketplace/plugin installer feature |
I think it is quite straightforward for Workers, too slow for Actions. Maybe eventually we will need an endpoint for all the plugins. I don't see how the kernel itself can handle this, because:
If we can solve all of these, then the kernel should be able to just import the configuration type files and read them.
My vision is not too different from a Docker-like approach. We can spin up a virtual shell as a child process, which then runs its own Node.js instance.
I think we just need to send everything, although I'm not fully understanding the context of this statement.
As in,
Code injection might be acceptable within a virtual shell.
The virtual shell should not have access to the environment secrets.
Workers do not run
I do not understand the benefit of heavily complicating this in the kernel, when we already have running endpoints that support all the logic (and Actions are literally Docker containers themselves). Also, I do not understand what Knip is needed for? Practical example: how can I check that the configuration provided is valid against this?
One expensive idea is to consume the type with
It would be a bit cheaper if we standardized the location of the payload type checker file. This seems like a bad approach to scale. We could consider making this a standalone plugin, which is expensive to run, but we can allow partners to opt out if it's too expensive?
In conclusion, I don't see a great cheap solution. I think we should handle it inside of the plugins as you are doing, and then in the future we automate plugin configuration using a GUI which can prevent misconfiguration? Perhaps we enforce a standard for plugin developers to follow for it to populate on the GUI and to be configurable?
I think running it in their own plugin is the cheapest way for now. For the GUI, it would be no problem with Worker plugins, as the response to a bad configuration would be instantaneous, but Actions would take around a minute to validate, which would be a bummer for a UI, so that's the part we should figure out. One thing that @whilefoo pointed out is serving the configuration schema through the manifest, which could be a solution if we find a way to compile the schema such that it contains every piece of info needed for the kernel to interpret it correctly.
Yeah, my first idea was that Worker plugins could serve the configuration schema through the manifest, but that's too cumbersome to write in the manifest, so instead they could just convert the typebox schema to JSON Schema and send it over a dedicated endpoint. The problem is with Action plugins, because we can't get a fast and synchronous response.
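For the Worker side, the dedicated-endpoint idea is cheap because typebox schemas are already plain JSON Schema objects and serialize directly. A minimal sketch, with the route path and schema contents as assumptions (a real plugin's Worker `fetch()` handler would delegate to something like this):

```typescript
// Sketch (route name and schema are assumed): a Worker plugin exposes its
// configuration schema on a dedicated endpoint so the kernel can fetch it
// synchronously. configSchema is a hand-written stand-in for the compiled
// typebox schema, which is itself a plain JSON Schema object.
const configSchema = {
  type: "object",
  properties: { multiplier: { type: "number", default: 1 } },
  required: ["multiplier"],
};

// Route handler the Worker's fetch() would dispatch to.
function handleSchemaRoute(pathname: string): { status: number; body: string } {
  if (pathname === "/configuration-schema") {
    return { status: 200, body: JSON.stringify(configSchema) };
  }
  return { status: 404, body: "not found" };
}
```

The kernel then validates the user's config against the returned JSON Schema without importing any plugin code.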
@whilefoo Based on what you said, maybe there is an approach that allows both Actions and Workers to give a fast response. For Workers, we could simply serve the JSON through an endpoint. For Actions, we could have a script automatically generate the JSON file on push events, so the kernel can simply download the file (which would always be at the same location for every plugin), which would be significantly faster than running an Action. That is a path I can explore as well.
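The push-time generation step for Action plugins could be as small as the following. Everything here is an assumption for illustration: the file name `config-schema.json`, its location at the repo root, and the schema contents.

```typescript
// Sketch (file name and location assumed): a script an Action plugin runs on
// push to compile its configuration schema into a committed JSON file. The
// kernel can then download the file from a fixed path instead of dispatching
// a workflow and waiting for it.
import { writeFileSync } from "node:fs";

const configSchema = {
  type: "object",
  properties: { multiplier: { type: "number", default: 1 } },
};

function compileSchema(): string {
  return JSON.stringify(configSchema, null, 2) + "\n";
}

// Assumed standard location: repo root, next to manifest.json.
writeFileSync("config-schema.json", compileSchema());
```

A workflow triggered on push would run this and commit the result, keeping the served schema in sync with the code.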
Admin makes a config change, the Action runs and posts a comment on the commit a few minutes later, BUT it tags the author, so they are notified. I think this is acceptable as a first version if the better solutions can't be figured out.
@0x4007 @whilefoo It usually takes ~30s and it does tag the user. I did some research regarding the usage of a
So it is possible, but comes with drawbacks.
TL;DR: JSON is faster, Endpoint + Action gives much more accurate errors, so I don't know which route we prefer.
@whilefoo you can make the decision
What do you mean by environment validation? I think decode validation is not that important, and also I think it's more secure if the kernel does the validation rather than the plugin, which can access the configuration.
How would the Action know where the configuration schema is located in the codebase? Unless the developer sets a path to the file and the name of the variable.
Environment validation meaning validating env
The decode can be handy; practical example: the plugin name, where "test" would be a valid string but does not properly represent a plugin URL nor an Action path. This is validated during decode, so it can only be picked up plugin-side. For the path, we should just rely on a standard location, the same way we do for the manifest. Or even have it appended within the manifest itself.
Standard location (root, sibling of manifest) seems simplest |
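With a standard location agreed upon, the kernel could derive the schema URL from nothing but the plugin reference. A minimal sketch under those assumptions (the `config-schema.json` file name, the root location, and the default `main` ref are all hypothetical conventions, not settled decisions):

```typescript
// Sketch (assumed convention): with the schema at a standard location
// (repo root, sibling of manifest.json), the kernel derives the raw URL
// from the "owner/repo[@ref]" plugin reference alone.
function schemaUrl(pluginRef: string): string {
  const [path, ref = "main"] = pluginRef.split("@");
  return `https://raw.githubusercontent.com/${path}/${ref}/config-schema.json`;
}
```

This keeps plugin manifests small while still letting the kernel fetch every schema with a single HTTP GET per plugin.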
In the bot v1, when the configuration is changed, annotations are added to the configuration file if any error is encountered within the configuration. It would be really nice to have this for v2, as it is a common scenario to have an invalid configuration with no feedback about it, because the error is only visible within the Worker logs.
The main challenge is that only plugins are aware of their configuration shape, so the kernel would probably have to call every plugin to check validity, which would require an endpoint or some access to the configuration validators. At the same time, the manifest.json validity could be checked, relating to #78. Since this might imply quite a few network requests, we can also consider having this functionality as a separate Worker using service bindings.
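The fan-out described above, where the kernel asks each plugin to validate its slice of the configuration and aggregates the failures, could look roughly like this. All shapes here are hypothetical; real plugins would be reached over HTTP or a service binding rather than a local callback.

```typescript
// Sketch (hypothetical shapes): the kernel calls each plugin's validator
// against the shared configuration and collects only the failures, so a
// single annotated comment can summarize every plugin's errors.
interface PluginValidator {
  name: string;
  validate: (config: unknown) => string[]; // error messages, [] if valid
}

function collectConfigErrors(
  plugins: PluginValidator[],
  config: unknown
): { plugin: string; errors: string[] }[] {
  return plugins
    .map((p) => ({ plugin: p.name, errors: p.validate(config) }))
    .filter((r) => r.errors.length > 0);
}
```

Batching the results this way also addresses the noise concern raised earlier: one aggregated comment instead of one notification per plugin.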