
v0.2.0rc1

Pre-release
@chottolabs released this 05 Sep 17:32 · f20ee1b

I'm playing with local models now, and dealing with the subtle differences in the parameters + features I wanted to support was getting out of hand, so here's another big breaking change.



The key change was to make it as easy as possible to retrieve data, pipe it straight into an API, and send the outputs somewhere in your nvim buffer without shooting yourself in the foot with the quirks of nvim.

  • want a new feature? write your own variation of invoke_llm
  • want better api features? add a new preset with custom options
  • want better prompts? write your own templates, or pipe some more data into a custom make_data_fn (see the sketch below)
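
As a rough sketch, a user-defined preset might look something like this; the field names and overall shape here are assumptions for illustration, not the plugin's actual API (check lua/kznllm/presets.lua for the real thing):

```lua
-- Hypothetical preset shape; field names are illustrative assumptions.
local my_preset = {
  id = 'groq-llama3',
  opts = {
    -- custom API options live with the preset
    model = 'llama-3.1-70b-versatile',
    temperature = 0.7,
    max_tokens = 1024,
  },
  -- pipe whatever extra data you want into the request body
  make_data_fn = function(args)
    return {
      messages = {
        { role = 'system', content = 'You are a terse coding assistant.' },
        { role = 'user', content = args.visual_selection or args.user_query },
      },
    }
  end,
}

return my_preset
```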

The functionality from v0.1.0 is now just a user implementation that lives under lua/kznllm/presets.lua.

With the new architecture, it was much easier to build a clean(er) model/preset switcher, which will come in handy while I'm running experiments. Also, Groq queue times can get up to 15s (???).
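
A switcher can be as simple as a keymap over the preset list. This sketch uses nvim's built-in vim.ui.select; treating require 'kznllm.presets' as a list and using vim.g.kznllm_active_preset as the storage slot are assumptions:

```lua
-- Sketch of a model/preset switcher; the preset list shape and the
-- global used to store the active preset are assumptions.
local presets = require 'kznllm.presets'

vim.keymap.set('n', '<leader>m', function()
  vim.ui.select(presets, {
    prompt = 'kznllm preset:',
    format_item = function(p)
      return p.id
    end,
  }, function(choice)
    if choice then
      vim.g.kznllm_active_preset = choice.id
      vim.notify('kznllm: switched to ' .. choice.id)
    end
  end)
end, { desc = 'kznllm: switch model/preset' })
```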

After rewriting this plugin literally 5 times by now, it's very clear that model + prompts + API spec + feature set should be tightly coupled together. What I actually want for myself is the flexibility to implement the whole stack of sub-components from scratch and package it together in a preset.

With this refactor, it's relatively easy to add entirely new features without touching the core of the plugin; they all just get treated as user implementation details (e.g. prompt caching, custom params, or fast + slow retrieval).
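
For instance, layering custom params onto a preset is plain table manipulation in user config; the preset shape below is the same illustrative assumption as in the earlier sketch:

```lua
-- Overriding params on a base preset entirely in user space;
-- the preset shape is an illustrative assumption.
local base = {
  id = 'groq-llama3',
  opts = { model = 'llama-3.1-70b-versatile', temperature = 0.7 },
}

-- vim.tbl_deep_extend merges nested tables; 'force' lets the overrides win
local experimental = vim.tbl_deep_extend('force', base, {
  id = 'groq-llama3-cold',
  opts = { temperature = 0.2, max_tokens = 4096 },
})
```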


TODO

  • the completion model prompt template is really bad and buggy right now; use it with caution

Full Changelog: v0.1.0...v0.2.0