
Tabnine Chat Feedback #92

Open
amirbilu opened this issue Jul 12, 2023 · 53 comments

Comments

@amirbilu
Contributor

This is the place to leave feedback / discuss issues on Tabnine Chat for nvim.
Note: this feature is still in BETA. To join the beta, send your Tabnine Pro email to [email protected].

@shuxiao9058

Recently I ported TabNine Chat to Emacs:

https://github.com/shuxiao9058/tabnine

@amirbilu
Contributor Author

amirbilu commented Jul 18, 2023

@shuxiao9058 this is awesome!!! Please leave us a message at [email protected] to get Tabnine Pro credits and Tabnine swag.

@shuxiao9058

> @shuxiao9058 this is awesome!!! Please leave us a message at [email protected] to get Tabnine Pro credits and Tabnine swag.

Thanks @amirbilu, email already sent.

@chuckpr

chuckpr commented Jul 19, 2023

Been experimenting with Tabnine Chat in Neovim. Works well except that I get this message frequently:

Error executing vim.schedule lua callback: ...ck/packer/start/tabnine-nvim/lua/tabnine/chat/binary.lua:60: Expected value but found unexpected end of string at character 8193
stack traceback:
        [C]: in function 'decode'
        ...ck/packer/start/tabnine-nvim/lua/tabnine/chat/binary.lua:60: in function 'cb'
        vim/_editor.lua:325: in function <vim/_editor.lua:324>

@amirbilu
Contributor Author

amirbilu commented Jul 19, 2023 via email

@aarondill
Contributor

That's the JSON decode function, which means the chat binary is outputting JSON that is not syntactically valid (I don't know why or how, though).
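The "character 8193" in the error suggests a decode attempted on a single fixed-size read of a longer message: 8193 is the first character past an 8192-byte buffer. A minimal Python sketch of the symptom and a buffering workaround (the payload, buffer size, and loop are illustrative assumptions, not the plugin's actual code):

```python
import json

# A chat message longer than one 8192-byte read buffer (sizes are
# illustrative assumptions, not the plugin's actual code).
payload = json.dumps({"id": "25", "data": "x" * 9000})

failed = False
try:
    # Decoding a single fixed-size chunk fails: the JSON is truncated.
    json.loads(payload[:8192])
except json.JSONDecodeError:
    failed = True

# Accumulating chunks until a decode succeeds handles split messages.
buffer = ""
message = None
for start in range(0, len(payload), 8192):
    buffer += payload[start:start + 8192]
    try:
        message = json.loads(buffer)
        break
    except json.JSONDecodeError:
        continue  # incomplete message; wait for the next chunk
```

If the binary streams messages over stdout in chunks, only the buffered variant recovers the full object.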

@amirbilu
Contributor Author

amirbilu commented Jul 19, 2023 via email

@amirbilu
Contributor Author

> Been experimenting with Tabnine Chat in Neovim. Works well except that I get this message frequently:
>
> Error executing vim.schedule lua callback: ...ck/packer/start/tabnine-nvim/lua/tabnine/chat/binary.lua:60: Expected value but found unexpected end of string at character 8193
> stack traceback:
>         [C]: in function 'decode'
>         ...ck/packer/start/tabnine-nvim/lua/tabnine/chat/binary.lua:60: in function 'cb'
>         vim/_editor.lua:325: in function <vim/_editor.lua:324>

Can you please try #94? You should get a nicer debug message. When you get it, attach it here. Appreciate it!

@chuckpr

chuckpr commented Jul 20, 2023

Ok, here are the messages I am seeing using Chat built from the debug-message branch:

[tabnine-nvim] Failed to decode chat message: {"id":"25","command":"update_chat_conversation","data":{"id":"dc4e2d3c-041b-4fe4-9b80-35c7116e78e6","messages":[{"id":"c2bd9e03-2eb1-46e7-a591-1d334959296c","conversationId":"dc4e2d3c-041b-4fe4-9b80-35c7116e78e6","text":"/explain-code","isBot":false,"timestamp":"1689879814043","intent":"explain-code","editorContext":{"fileCode":"import altair as alt\nimport pandas as pd\nfrom vega_datasets import data\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Create a bar chart using the Altair library\nalt.Chart(cars).mark_bar().encode(\n    x=\"Horsepower:Q\", y=\"Miles_per_Gallon:Q\", color=\"Origin:N\"\n)\n\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Melt the data frame\nmelted_cars = pd.melt(\n    cars,\n    id_vars=[\"Name\", \"Miles_per_Gallon\"],\n    value_vars=[\n        \"Horsepower\",\n        \"Cylinders\",\n        \"Displacement\",\n        \"Weight_in_lbs\",\n        \"Acceleration\",\n        \"Year\",\n    ],\n)\n\n# Create a jitter chart for all the values in the melted_cars dataframe\njitter_chart = (\n    alt.Chart(melted_cars)\n    .mark_circle(size=60)\n    .encode(\n        x=\"value:Q\",\n        y=\"variable:N\",\n        color=\"variable:N\",\n        tooltip=[\"value:Q\", \"variable:N\"],\n    )\n    .transform_calculate(jitter=\"random() - 0.5\")\n    .transform_joinaggregate(mean=\"mean(value)\", count=\"count(value)\")\n    .transform_calculate(\n        x=\"if(datum.count > 1, (datum.mean - datum.stddev) + jitter, mean)\",\n        y=\"if(datum.count > 1, (datum.mean + datum.stddev) + jitter, mean)\",\n    )\n    .transform_calculate(r=\"min(width, height) / 2\")\n    .transform_scale(x=[-r, r], y=[-r, r])\n)\n\n# Display the jitter chart\njitter_chart\n# Print the first few rows of the melted data frame\nprint(melted_cars.head())import altair as alt\n\n# Load the data from the Vega dataset\ncars = 
data.cars()\n\n# Create a bar chart using the Altair library\nalt.Chart(cars).mark_bar().encode(\n    x=\"Horsepower:Q\", y=\"Miles_per_Gallon:Q\", color=\"Origin:N\"\n)\n\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Melt the data frame\nmelted_cars = pd.melt(\n    cars,\n    id_vars=[\"Name\", \"Miles_per_Gallon\"],\n    value_vars=[\n        \"Horsepower\",\n        \"Cylinders\",\n        \"Displacement\",\n        \"Weight_in_lbs\",\n        \"Acceleration\",\n        \"Year\",\n    ],\n)\n\n# Create a jitter chart for all the values in the melted_cars dataframe\njitter_chart = (\n    alt.Chart(melted_cars)\n    .mark_circle(size=60)\n    .encode(\n        x=\"value:Q\",\n        y=\"variable:N\",\n        color=\"variable:N\",\n        tooltip=[\"value:Q\", \"variable:N\"],\n    )\n    .transform_calculate(jitter=\"random() - 0.5\")\n    .transform_joinaggregate(mean=\"mean(value)\", count=\"count(value)\")\n    .transform_calculate(\n        x=\"if(datum.count > 1, (datum.mean - datum.stddev) + jitter, mean)\",\n        y=\"if(datum.count > 1, (datum.mean + datum.stddev) + jitter, mean)\",\n    )\n    .transform_calculate(r=\"min(width, height) / 2\")\n    .transform_scale(x=[-r, r], y=[-r, r])\n)\n\n# Display the jitter chart\njitter_chart\n# Print the first few rows of the melted data frame\n","selectedCode":"","selectedCodeUsages":[]},"retrievalContext":{"documents":[]}},{"id":"24cc71cd-95db-4493-8e0c-42620fc5af85","conversationId":"dc4e2d3c-041b-4fe4-9b80-35c7116e78e6","text":"The selected code is a Python script that imports the Altair and Pandas libraries, loads the cars dataset from the Vega dataset, creates a bar chart using the Altair library, and displays a jitter chart.\n\nThe script starts by importing the Altair and Pandas libraries:\n```python\nimport altair as alt\nimport pandas as pd\n```\n\nIt then loads the cars dataset from the Vega dataset and creates a bar 
chart using the Altair library:\n```python\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Create a bar chart using the Altair library\nalt.Chart(cars).mark_bar().encode(\n    x=\"Horsepower:Q\", y=\"Miles_per_Gallon:Q\", color=\"Origin:N\"\n)\n```\n\nNext, the script loads the cars dataset again and uses Pandas to melt the data frame:\n```python\n# Load the data from the Vega dataset\ncars = data.cars()\n\n\n# Melt the data frame\nmelted_cars = pd.melt(\n    cars,\n    id_vars=[\"Name\", \"Miles_per_Gallon\"],\n    value_vars=[\n        \"Horsepower\",\n        \"Cylinders\",\n        \"Displacement\",\n        \"Weight_in_lbs\",\n        \"Acceleration\",\n        \"Year\",\n    ],\n)\n```\n\nThe script then creates a jitter chart for all the values in the melted_cars dataframe using Altair:\n```python\n# Create a jitter chart for all the values in the melted_cars dataframe\njitter_chart = (\n    alt.Chart(melted_cars)\n    .mark_circle(size=60)\n    .encode(\n        x=\"value:Q\",\n        y=\"variable:N\",\n        color=\"variable:N\",\n        tooltip=[\"value:Q\", \"variable:N\"],\n    )\n    .transform_calculate(jitter=\"random() - 0.5\")\n    .transform_joinaggregate(mean=\"mean(value)\", count=\"count(value)\")\n    .transform_calculate(\n        x=\"if(datum.count > 1, (datum.mean - datum.stddev) + jitter, mean)\",\n        y=\"if(datum.count > 1, (datum.mean + datum.stddev) + jitter, mean)\",\n    )\n    .transform_calculate(r=\"min(width, height) / 2\")\n    .transform_scale(x=[-r, r], y=[-r, r])\n)\n```\n\nFinally, the script displays the jitter chart:\n```python\n# Display the jitter chart\njitter_chart\n```","isBot":true,"timestamp":"1689879832134"},{"id":"c52306dc-6b04-44b7-ab57-f2201531f047","conversationId":"dc4e2d3c-041b-4fe4-9b80-35c7116e78e6","text":"/explain-code","isBot":false,"timestamp":"1689879843080","intent":"explain-code","editorContext":{"fileCode":"import altair as alt\nimport pandas as pd\nfrom vega_datasets import 
data\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Create a bar chart using the Altair library\nalt.Chart(cars).mark_bar().encode(\n    x=\"Horsepower:Q\", y=\"Miles_per_Gallon:Q\", color=\"Origin:N\"\n)\n\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Melt the data frame\nmelted_cars = pd.melt(\n    cars,\n    id_vars=[\"Name\", \"Miles_per_Gallon\"],\n    value_vars=[\n        \"Horsepower\",\n        \"Cylinders\",\n        \"Displacement\",\n        \"Weight_in_lbs\",\n        \"Acceleration\",\n        \"Year\",\n    ],\n)\n\n# Create a jitter chart for all the values in the melted_cars dataframe\njitter_chart = (\n    alt.Chart(melted_cars)\n    .mark_circle(size=60)\n    .encode(\n        x=\"value:Q\",\n        y=\"variable:N\",\n        color=\"variable:N\",\n        tooltip=[\"value:Q\", \"variable:N\"],\n    )\n    .transform_calculate(jitter=\"random() - 0.5\")\n    .transform_joinaggregate(mean=\"mean(value)\", count=\"count(value)\")\n    .transform_calculate(\n        x=\"if(datum.count > 1, (datum.mean - datum.stddev) + jitter, mean)\",\n        y=\"if(datum.count > 1, (datum.mean + datum.stddev) + jitter, mean)\",\n    )\n    .transform_calculate(r=\"min(width, height) / 2\")\n    .transform_scale(x=[-r, r], y=[-r, r])\n)\n\n# Display the jitter chart\njitter_chart\n# Print the first few rows of the melted data frame\nprint(melted_cars.head())import altair as alt\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Create a bar chart using the Altair library\nalt.Chart(cars).mark_bar().encode(\n    x=\"Horsepower:Q\", y=\"Miles_per_Gallon:Q\", color=\"Origin:N\"\n)\n\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Melt the data frame\nmelted_cars = pd.melt(\n    cars,\n    id_vars=[\"Name\", \"Miles_per_Gallon\"],\n    value_vars=[\n        
\"Horsepower\",\n        \"Cylinders\",\n        \"Displacement\",\n        \"Weight_in_lbs\",\n        \"Acceleration\",\n        \"Year\",\n    ],\n)\n\n# Create a jitter chart for all the values in the melted_cars dataframe\njitter_chart = (\n    alt.C
[tabnine-nvim] Failed to decode chat message: hart(melted_cars)\n    .mark_circle(size=60)\n    .encode(\n        x=\"value:Q\",\n        y=\"variable:N\",\n        color=\"variable:N\",\n        tooltip=[\"value:Q\", \"variable:N\"],\n    )\n    .transform_calculate(jitter=\"random() - 0.5\")\n    .transform_joinaggregate(mean=\"mean(value)\", count=\"count(value)\")\n    .transform_calculate(\n        x=\"if(datum.count > 1, (datum.mean - datum.stddev) + jitter, mean)\",\n        y=\"if(datum.count > 1, (datum.mean + datum.stddev) + jitter, mean)\",\n    )\n    .transform_calculate(r=\"min(width, height) / 2\")\n    .transform_scale(x=[-r, r], y=[-r, r])\n)\n\n# Display the jitter chart\njitter_chart\n# Print the first few rows of the melted data frame\n","selectedCode":"# Melt the data frame\nmelted_cars = pd.melt(\n    cars,\n    id_vars=[\"Name\", \"Miles_per_Gallon\"],\n    value_vars=[\n        \"Horsepower\",\n        \"Cylinders\",\n        \"Displacement\",\n        \"Weight_in_lbs\",\n        \"Acceleration\",\n        \"Year\",\n    ],\n)\n","selectedCodeUsages":[]},"retrievalContext":{"documents":[]}},{"id":"39850d60-2884-449b-88c9-32cedbb5eda1","conversationId":"dc4e2d3c-041b-4fe4-9b80-35c7116e78e6","text":"The selected code is a Python script that uses Pandas to melt a data frame.\n\nThe script starts by loading the cars dataset from the Vega dataset and using Pandas to create an id_vars list and a value_vars list:\n```python\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Melt the data frame\nmelted_cars = pd.melt(\n    cars,\n    id_vars=[\"Name\", \"Miles_per_Gallon\"],\n    value_vars=[\n        \"Horsepower\",\n        \"Cylinders\",\n        \"Displacement\",\n        \"Weight_in_lbs\",\n        \"Acceleration\",\n        \"Year\",\n    ],\n)\n```\n\nThe id_vars list contains the columns \"Name\" and \"Miles_per_Gallon\", while the value_vars list contains the columns \"Horsepower\", \"Cylinders\", \"Displacement\", 
\"Weight_in_lbs\", \"Acceleration\", and \"Year\".\n\nThe script then uses Pandas to melt the data frame, which creates a new column for each value variable and combines the id_vars into a single \"variable\" column.","isBot":true,"timestamp":"1689879851807"}]}}

I see these messages after highlighting some code and using /explain-code in Chat.

@gunslingerfry

gunslingerfry commented Jul 20, 2023

The Rust package has an implicit dependency on webkit2gtk-4.1, in case anybody is having difficulty compiling.

edit: even with the package installed I'm getting linker errors. My version of webkit2gtk-4.1 may be too new? I'm not familiar enough with Rust to figure this out.

EndeavourOS (Arch)
NVIM v0.9.1
libwebkit2gtk-4.1 version: 0.8.4
rust version: 1.71.0 (just updated from rustup)

@gunslingerfry

Here is a gist with the error output so I don't spam this thread: https://gist.github.com/gunslingerfry/8a8bcd1adeba6c8aba017a9dce0714a3

@aarondill
Contributor

> Ok, here are the messages I am seeing using Chat built from the debug-message branch:
>
> [full log quoted above]
>
> I see these messages after highlighting some code and using /explain-code in Chat.

It seems like something is going wrong that is putting a newline into the JSON message; something in the returned file code.
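One way a stray newline breaks things: if messages are framed one JSON object per line, the framing only survives because JSON encoders escape newlines inside strings; a raw newline injected into the serialized message splits it into two fragments, neither of which parses. A small Python illustration (the line-delimited framing here is a hypothetical model of the protocol, not confirmed plugin behavior):

```python
import json

# Newline-delimited framing: one JSON message per line.
msg = {"command": "update_chat_conversation", "text": "line1\nline2"}
frame = json.dumps(msg)  # the "\n" is escaped, so the frame stays one line
assert "\n" not in frame
assert json.loads(frame) == msg

# If a raw newline sneaks into the serialized message, line-based
# parsing sees two fragments, neither of which is valid JSON.
corrupted = frame.replace("\\n", "\n", 1)
fragments = corrupted.splitlines()
errors = 0
for part in fragments:
    try:
        json.loads(part)
    except json.JSONDecodeError:
        errors += 1
```

Both fragments fail to decode, which matches the log above where one message appears split across two "Failed to decode chat message" lines.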

@aarondill
Contributor

@chuckpr If you don't mind, can you attempt to create a minimal file where you can reproduce this, and share it (perhaps in a gist)? I suspect that something in the handling of user code (perhaps a length issue?) is going wrong, so being able to reproduce this locally would be very helpful.

@chuckpr

chuckpr commented Jul 21, 2023

Sure. To reproduce the error, I ran /explain-code five times. On the fifth invocation, I started to see the error.

Gist: https://gist.github.com/chuckpr/8a67b3685b9631f4d633821143df3747

@amirbilu
Contributor Author

@aarondill did you manage to reproduce this? it seems to work fine for me

@aarondill
Contributor

> @aarondill did you manage to reproduce this? it seems to work fine for me

I haven't had the time to try to. I won't be able to try for at least a few days.

@shuxiao9058

> Recently I have port TabNine Chat to Emacs
>
> https://github.com/shuxiao9058/tabnine

TabNine for Emacs is now on MELPA.

@gunslingerfry

Yay! @aarondill got me sorted. I had out-of-sync packages. Silly me for not trying a system upgrade.

@aarondill
Contributor

> @aarondill did you manage to reproduce this? it seems to work fine for me

@amirbilu Having just compiled and tested this on my machine, I can't seem to reproduce the error.

@chuckpr
Can you still reproduce this issue on your machine?
If so, does rerunning dl_binaries.sh fix the issue?
If it does not, can you provide detailed system information and reproduction steps using the templates below?

An example for system information:

> uname -a
results_here
> cat /etc/os-release || cat /usr/lib/os-release
results_here
> cd /path/to/tabnine-nvim
> ls -A ./binaries
results_here
> cat chat_state.json
results_here
> cat ./chat/target/.rustc_info.json
results_here

Reproduction steps (an example):

  1. Install tabnine-nvim using this file (FILENAME):
     contents of file
  2. Open nvim test.py
  3. Go to line #
  4. Press V
  5. Select lines # through #
  6. Press : and type TabnineChatNew
  7. Type /explain-code repeatedly (5 times?)
  8. See Error executing vim.schedule lua callback... error in original nvim window.


@aemonge

aemonge commented Aug 2, 2023

Some feedback from me, from the Neovim side.

  • It would be super useful to be able to select text for context. I've noticed the chat uses my current buffer as context for my questions, and I rarely want it to reply with a full-file suggestion; usually I'm asking about a specific function.
  • A vi mode for the input would be really nice. I know you can bind the input to emacs or vi style via inputrc; that would be good.
  • A CLI chat would be cool too. We often forget commands, or want to "unit" test bulk files.

Finally, this isn't a request, just awareness. I'm paying for ChatGPT-4 mainly to develop, so if this chat is smarter than or comparable to GPT-4, I wouldn't mind migrating my payment from GPT-4 to here :). Having the chat editor-integrated and focused on development is exactly what I'm looking for.

Furthermore, please take this feedback as it is, positive feedback from a delighted customer. <3

@amirbilu
Contributor Author

amirbilu commented Aug 2, 2023 via email

@allan-simon

Hello, my Neovim runs inside a dockerized environment (so without X). Just as it's possible to open the Tabnine Hub via port redirection, is there a way to open the chat from my host and point it at my Neovim instance?

@gsharma-jiggzy

It would be nice to have something Vim-native, like ChatGPT.nvim:

https://github.com/jackMort/ChatGPT.nvim

@aemonge

aemonge commented Sep 14, 2023

Or a simple terminal-based integration such as https://github.com/kardolus/chatgpt-cli; this could serve more users than only Neovim ones, and we Neovim users can simply run :terminal chatgpt-cli. Right @gsharma-jiggzy?

@nfwyst

nfwyst commented Oct 2, 2023

Chat is not easy to use; maybe the model should be upgraded.

For example, I have code like:

function Hello(x) {
  console.log("Hello" + " " + x);
}

Hello("marvin");

Tabnine's answer is:

[screenshot: 2023-10-02 19:35:56]

This is a good starting point...

@AlexanderShvaykin

I have the error:

tabnine-nvim/lua/tabnine/chat/codelens.lua:89: attempt to index field 'range' (a nil value)

@AlexanderShvaykin

After the chat responds, I get an error message:

Error executing vim.schedule lua callback: ...ck/packer/start/tabnine-nvim/lua/tabnine/chat/binary.lua:82: Expected value but found unexpected end of string at character 8193
stack traceback:
        [C]: in function 'decode'
        ...ck/packer/start/tabnine-nvim/lua/tabnine/chat/binary.lua:82: in function ''
        vim/_editor.lua: in function <vim/_editor.lua:0>

@amirbilu
Contributor Author

amirbilu commented Dec 6, 2023

Hi @AlexanderShvaykin does it happen constantly?

@MJAS1

MJAS1 commented Dec 6, 2023

I am trying to get TabnineChat to work, but the command only opens a blank window with nothing in it. I have not sent an email to [email protected] to request that chat be enabled, as I noticed the instruction asking to do so was removed from the README. Was it removed on purpose?

@amirbilu
Contributor Author

amirbilu commented Dec 6, 2023 via email

@MJAS1

MJAS1 commented Dec 6, 2023

I am on Fedora 39 and was first trying to open it under i3wm, which uses X11. I then tried the Plasma desktop on Wayland, and the chat worked there. Next I tried Plasma + X11, and again I got a blank window. So it seems to be related to X11. Screenshot below.
[screenshot: Screenshot_20231206_223617]

@Mate2xo

Mate2xo commented Dec 10, 2023

Hi,
It looks like the codelens does not work for some languages? For example, the :TabnineExplain command works for Lua and JS files, but not for Ruby. The chat does see and understand the Ruby content, though: I can ask questions and request test generation directly from the chat window.
From what I understand, the codelens might not set a symbol_under_cursor for Ruby programs (I didn't see any error messages). The behaviour is the same for all commands in https://github.com/codota/tabnine-nvim/blob/master/lua/tabnine/chat/user_commands.lua

If Ruby and/or other languages are not fully supported yet, it might be useful to mention that in the README.

@Mate2xo

Mate2xo commented Dec 12, 2023

I finally encountered an error on a Ruby file:

Error executing vim.schedule lua callback: ...pack/lazy/opt/tabnine-nvim/lua/tabnine/chat/codelens.lua:89: attempt to index field 'range' (a nil value)
stack traceback:
        ...pack/lazy/opt/tabnine-nvim/lua/tabnine/chat/codelens.lua:89: in function 'is_symbol_under_cursor'
        ...pack/lazy/opt/tabnine-nvim/lua/tabnine/chat/codelens.lua:100: in function 'on_collect'
        ...pack/lazy/opt/tabnine-nvim/lua/tabnine/chat/codelens.lua:65: in function 'callback'
        /usr/share/nvim/runtime/lua/vim/lsp.lua:2020: in function 'handler'
        /usr/share/nvim/runtime/lua/vim/lsp.lua:1393: in function ''
        vim/_editor.lua: in function <vim/_editor.lua:0>

This kind of error appears seemingly at random; I couldn't find out why, and it disappears when launching a new Neovim instance.

@amirbilu
Copy link
Contributor Author

@Mate2xo fixed by 3237a28

@amirbilu
Copy link
Contributor Author

Hi, It looks like the codelens does not work for some languages? For example, the :TabnineExplain command works for Lua and JS files, but not for Ruby. The chat does see and understand the Ruby content, though: I can ask questions and request test generation directly from the chat window. From what I understand, the codelens might not set a symbol_under_cursor for Ruby programs (I didn't see any error messages). The behaviour is the same for all commands in https://github.com/codota/tabnine-nvim/blob/master/lua/tabnine/chat/user_commands.lua

If Ruby and/or other languages are not fully supported yet, it might be useful to mention that in the README

do you have lsp set for ruby? if yes, what lsp are you using? appreciate if you can provide an example file as well

@beemdvp
Copy link

beemdvp commented Dec 15, 2023

I am on Fedora39 and was first trying to open it using i3wm which uses X11. I then tried with Plasma desktop using Wayland and the chat worked there. Next, I tried using Plasma+X11 and again I got a blank window. So it seems to be related to X11. Screenshot below Screenshot_20231206_223617

Damn, you got past me. I'm on Fedora 39 too, and after struggling to install the related system libs, I get a completely blank white screen with nothing rendered haha

(screenshot)

@amirbilu
Copy link
Contributor Author

Hi @beemdvp, do you see anything in :messages?

@beemdvp
Copy link

beemdvp commented Dec 17, 2023

Hey @amirbilu, there isn't anything there. I did notice, though, that if I hold left click and drag, there is actually content there. But for some reason everything is white/blank. So elements seem to render, just not with the right styles? I wonder if it could be some sort of file-permissions issue.

@fcabjolsky
Copy link

Hello, I'm trying to debug this issue. Where can I find the source code of the chat (the source code behind index.html)?

@beemdvp
Copy link

beemdvp commented Feb 4, 2024

Okay, I've switched to a fully AMD machine (noice) running Fedora. I've noticed a bug:

  1. highlight a block of code
  2. generate a response from chat
  3. stop the response generation
  4. an error is shown

Neovim version:
(screenshot)

Error output:
(screenshot)

@aarondill
Copy link
Contributor

It seems the binaries are outputting something that is not valid JSON; I wouldn't be able to guess what it is, though.
Personally, I think we should catch decoding errors and raise our own error that includes the data that failed to parse. This would make debugging these problems much easier.
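A minimal sketch of that idea, assuming the plugin decodes chat messages with `vim.json.decode` somewhere in `binary.lua` (the `safe_decode` helper and the notify wording are illustrative, not the plugin's actual code):

```lua
-- Wrap vim.json.decode in pcall so a malformed payload reports the raw
-- data instead of failing with an opaque "Expected value but found ..." error.
local function safe_decode(raw)
  local ok, decoded = pcall(vim.json.decode, raw)
  if not ok then
    -- on failure, `decoded` holds the error message from the decoder
    vim.notify(
      string.format("tabnine-nvim: failed to decode chat payload: %s\nraw data: %s", decoded, raw),
      vim.log.levels.ERROR
    )
    return nil
  end
  return decoded
end
```

The callback could then skip `nil` results instead of crashing inside the `vim.schedule` wrapper, and the raw data in the notification would show whether the binary emitted truncated or concatenated JSON.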

@Askath
Copy link

Askath commented Feb 5, 2024

Okay, I've switched to a fully AMD machine (noice) running Fedora. I've noticed a bug:

  1. highlight a block of code
  2. generate a response from chat
  3. stop the response generation
  4. an error is shown

Neovim version: (screenshot)

Error output: (screenshot)

Oh yeah, I have had the same bug on macOS, regardless of whether a response is generated successfully or not.

@sudoFerraz
Copy link

Chat worked great on the first day I used it (yesterday)

As of today, after the first completion Tabnine completely stops working. Unless a new update was rolled out in the last 2 days, I think the chat completely messed up my setup, though 😢

TabnineStatus still outputs normally and says that I'm a Pro user.

@Askath
Copy link

Askath commented Feb 7, 2024

Chat worked great on the first day I used it (yesterday)

As of today, after the first completion Tabnine completely stops working. Unless a new update was rolled out in the last 2 days, I think the chat completely messed up my setup, though 😢

TabnineStatus still outputs normally and says that I'm a Pro user.

What language server do you have installed? I noticed that when an LSP that does not provide symbol documentation (like angularls) is running, Tabnine stops working, depending on the order in which the LSPs were started.

@sudoFerraz
Copy link

Chat worked great on the first day I used it (yesterday)
As of today, after the first completion Tabnine completely stops working. Unless a new update was rolled out in the last 2 days, I think the chat completely messed up my setup, though 😢
TabnineStatus still outputs normally and says that I'm a Pro user.

What language server do you have installed? I noticed that when an LSP that does not provide symbol documentation (like angularls) is running, Tabnine stops working, depending on the order in which the LSPs were started.

I don't think that could be related, as I was working on the same project when the Tabnine inline completions were working fine, using the same LSPs and the same setup.
It really feels like there was a silent update in the past 2-3 days that broke the suggestions after I accept the first one in my current session.

@amirbilu
Copy link
Contributor Author

@sudoFerraz can you please contact us at [email protected] ?

@Mate2xo
Copy link

Mate2xo commented Feb 14, 2024

Hi, It looks like the codelens does not work for some languages? For example, the :TabnineExplain command works for Lua and JS files, but not for Ruby. The chat does see and understand the Ruby content, though: I can ask questions and request test generation directly from the chat window. From what I understand, the codelens might not set a symbol_under_cursor for Ruby programs (I didn't see any error messages). The behaviour is the same for all commands in https://github.com/codota/tabnine-nvim/blob/master/lua/tabnine/chat/user_commands.lua
If Ruby and/or other languages are not fully supported yet, it might be useful to mention that in the README

do you have lsp set for ruby? if yes, what lsp are you using? appreciate if you can provide an example file as well

Sorry I took that long to respond

I am using the solargraph LSP.
What kind of example file would you like? The fix you made resolved the issue, and I can no longer reproduce it (thank you, btw).

@Jasha10
Copy link

Jasha10 commented Mar 22, 2024

My feedback is that I'd prefer a chat client implemented in Neovim rather than one that uses WebView windowing via the wry crate. Compiling tabnine-nvim/chat is hard on Ubuntu because I need to apt-install deps such as libcairo and some GTK-related packages.

@aarondill
Copy link
Contributor

@Jasha10 the chat client is currently shared with the VSCode extension, so I don't see this happening. I agree that a NeoVim-specific client would be better though. (I am not a maintainer of this repo)

@obarisk
Copy link

obarisk commented Jun 17, 2024

No idea how to provide more information, but I've just found that:

  1. the chat hangs with Workspace indexing: Not yet started
  2. when using /document-code with a file open and code selected in visual mode, the chat shows The "/document-code" command requires an open file. Please open a file and press 'Continue'.
  3. nothing happens when the cursor is within a code block in visual mode and I call TabnineTest, TabnineExplain, or TabnineFix

@pgib
Copy link

pgib commented Jul 30, 2024

Also wondering about Workspace indexing: Not yet started ... when does it start?

@pgib
Copy link

pgib commented Jul 30, 2024

I'd also love to be able to set a preamble context for my chats. For example, in ChatGPT, I have custom instructions that tell it to always use two spaces for indentation for any generated code. I'm going to try a workaround using a custom command, but I'll need to remember to do this before I start each chat.
