This is an additional endpoint, perhaps called `llm-assisted-code2amr`, which will use the Linespan endpoint we currently have to sub-select only the part of the code relevant to an AMR extraction and send it through the code-snippets-2AMR pipeline.
Some notes:
We want both this endpoint and the Linespan endpoint available in the unified service. TA-4 is also interested in using the linespan endpoint on its own, for sending things to our snippets endpoint through the HMI. This is also why the output of the linespan endpoint is shaped the way it is: they already had support for that data structure.
The linespan endpoint currently uses GPT-3.5 for the extraction. This is temporary until we replace it with our own model that operates on function networks. A downside of the LLM model (besides response time) is that it operates on the source code itself, so it currently handles only one code file. Despite us calling it codebase2amr, it will only work on one file in the zip until we swap in our own model. I didn't think it was worth the effort to engineer it to handle a codebase of arbitrary size, since we will hopefully be replacing it soon, but that is an option and worth noting.
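The core of the proposed workflow is slicing the source down to the linespan the LLM picks out before handing it to the snippets pipeline. A minimal sketch of that slicing step, assuming a 1-indexed, inclusive `(start, end)` linespan (the `slice_code` helper and the linespan shape are assumptions for illustration, not the actual SKEMA API):

```python
def slice_code(source: str, linespan: tuple[int, int]) -> str:
    """Keep only the lines inside a 1-indexed, inclusive linespan."""
    start, end = linespan
    lines = source.splitlines()
    return "\n".join(lines[start - 1 : end])


# Toy SIR-style file: only lines 3-7 hold the model dynamics; the rest
# (imports, I/O) is irrelevant to AMR extraction and confuses it.
code = (
    "import math\n"
    "\n"
    "def dynamics(s, i, r, beta, gamma):\n"
    "    ds = -beta * s * i\n"
    "    di = beta * s * i - gamma * i\n"
    "    dr = gamma * i\n"
    "    return ds, di, dr\n"
    "\n"
    "print('unrelated I/O code')\n"
)
snippet = slice_code(code, (3, 7))  # linespan the LLM would return
```

The resulting `snippet` is what would be POSTed to the code-snippets-2AMR pipeline instead of the whole file.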
## Summary of changes
Adds a new workflow endpoint to skema.rest
`llm-assisted-codebase-to-pn-amr` that slices the source code based on
model-dynamics linespans determined by an LLM. This greatly increases the
accuracy of AMR generation.
Enables support for generating AMR for the CHIME-SIR model, which was
previously failing with the normal `codebase-to-pn-amr` endpoint.
Adds a basic test case for testing CHIME-SIR->AMR generation.
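A basic test for CHIME-SIR->AMR generation can assert on the structure of the returned AMR rather than exact content. A minimal sketch, assuming the Petri-net AMR carries `states` and `transitions` lists under a `model` key; the `looks_like_pn_amr` helper is hypothetical, not part of skema:

```python
def looks_like_pn_amr(amr: dict) -> bool:
    """Cheap structural check: a Petri-net AMR should have non-empty
    state and transition lists under its "model" key (field names
    assumed from the Petri-net AMR layout)."""
    model = amr.get("model", {})
    return bool(model.get("states")) and bool(model.get("transitions"))


# A CHIME-SIR extraction should yield S/I/R states plus infection and
# recovery transitions, so a test can call the endpoint and assert
# looks_like_pn_amr(response.json()) on the result.
example = {
    "model": {
        "states": [{"id": "S"}, {"id": "I"}, {"id": "R"}],
        "transitions": [{"id": "inf"}, {"id": "rec"}],
    }
}
```

Checking structure instead of exact values keeps the test stable as the LLM-chosen linespans shift between runs.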
Resolves #621, resolves #628
---------
Co-authored-by: Justin <[email protected]>