competitest.nvim
competitest.nvim is a testcase manager and checker. It saves you time in competitive programming contests by automating common tasks related to testcase management. It can compile, run and test your solutions across all the available testcases, displaying results in a nice interactive user interface.
- Multiple languages supported: it works out of the box with C, C++, Rust, Java and Python, but other languages can be configured
- Flexible: no strict file-naming rules are imposed and a fixed folder structure is optional. You can choose where to put the source code file, the testcases, the received problems and contests, where to execute your programs and much more
- Configurable (see Configuration). You can even configure every folder individually
- Testcases can be stored in a single file or in multiple text files, see usage notes
- Easily add, edit and delete testcases
- Run your program across all the testcases, showing results and execution data in a nice interactive UI
- Download testcases, problems and contests automatically from competitive programming platforms
- Templates for received problems and contests
- View diff between actual and expected output
- Customizable interface that resizes automatically when Neovim window is resized
- Integration with statusline and winbar
- Customizable highlight groups
NOTE: this plugin requires Neovim ≥ 0.5
Install with vim-plug:
Plug 'MunifTanjim/nui.nvim' " it's a dependency
Plug 'xeluxee/competitest.nvim'
Install with packer.nvim:
use {
'xeluxee/competitest.nvim',
requires = 'MunifTanjim/nui.nvim',
config = function() require('competitest').setup() end
}
Install with lazy.nvim:
{
'xeluxee/competitest.nvim',
dependencies = 'MunifTanjim/nui.nvim',
config = function() require('competitest').setup() end,
}
If you are using another package manager, note that this plugin depends on nui.nvim, hence it should be installed as a dependency.
To load this plugin call setup():
require('competitest').setup() -- to use default configuration
require('competitest').setup { -- to customize settings
-- put here configuration
}
For all the available settings, see Configuration.
- Your programs must read from stdin and print to stdout. If stderr is used, its content will be displayed
- A testcase is made of an input and an output (containing the correct answer)
- An input is required for a testcase to be considered, while providing an output is optional
- Testcases can be stored in multiple text files or in a single msgpack-encoded file
  - You can choose how to store them with the testcases_use_single_file boolean option in configuration. By default it's false, so multiple files are used
  - The storage method can be detected automatically when the option testcases_auto_detect_storage is true
  - If you want to change the way already existing testcases are stored, see conversion
- To store testcases in multiple text files, set testcases_use_single_file to false
- File naming must follow a rule to be recognized. Say your file is called task-A.cpp: with the default configuration, the testcases associated with it will be named task-A_input0.txt, task-A_output0.txt, task-A_input1.txt, task-A_output1.txt and so on. Counting starts from 0
- Of course file naming can be configured: see testcases_input_file_format and testcases_output_file_format in configuration
- Testcases files can be put in the same folder as the source code file, but you can customize their path (see testcases_directory in configuration)
- To store testcases in a single file, set testcases_use_single_file to true
- The testcases file name must follow a rule to be recognized. Say your file is called task-A.cpp: with the default configuration, the testcases file will be named task-A.testcases
- Of course single-file naming can be configured: see testcases_single_file_format in configuration
- The testcases file can be put in the same folder as the source code file, but you can customize its path (see testcases_directory in configuration)
Anyway, you can forget about these rules if you use :CompetiTest add_testcase and :CompetiTest edit_testcase, which handle these details for you.
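If you prefer a different naming scheme or a dedicated testcases folder, these options can be overridden in setup(). The values below are illustrative choices, not the plugin defaults:

require('competitest').setup {
  testcases_directory = "./tests", -- example: keep testcases in a "tests" subfolder
  testcases_input_file_format = "$(FNOEXT)_in$(TCNUM).txt",
  testcases_output_file_format = "$(FNOEXT)_out$(TCNUM).txt",
}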
When launching the following commands make sure the focused buffer is the one containing the source code file.
Launch :CompetiTest add_testcase
to add a new testcase.
Launch :CompetiTest edit_testcase to edit an existing testcase. If you want to specify the testcase number directly in the command line you can use :CompetiTest edit_testcase x, where x is the number of the testcase you want to edit.
To jump between the input and output windows press <C-h>, <C-l> or <C-i>. To save and close the testcase editor press <C-s> or use :wq.
Of course these keybindings can be customized: see editor_ui ➤ normal_mode_mappings and editor_ui ➤ insert_mode_mappings in configuration.
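For example, changing the save-and-close key could look like this (the key below is an arbitrary example, not a default):

require('competitest').setup {
  editor_ui = {
    normal_mode_mappings = {
      save_and_close = "<C-w>", -- example replacement for <C-s>
    },
    insert_mode_mappings = {
      save_and_close = "<C-w>",
    },
  },
}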
Launch :CompetiTest delete_testcase. If you want to specify the testcase number directly in the command line you can use :CompetiTest delete_testcase x, where x is the number of the testcase you want to remove.
Testcases can be stored in multiple text files or in a single msgpack encoded file.
Launch :CompetiTest convert to change the testcases storage method: you can convert a single file into multiple text files or vice versa.
One of the following arguments is needed:
- singlefile_to_files: convert a single file into multiple text files
- files_to_singlefile: convert multiple text files into a single file
- auto: if there's a single file convert it into multiple files, otherwise convert multiple files into a single file
NOTE: this command only converts already existing testcases files, without changing CompetiTest configuration. To choose the storage method to use you have to configure the testcases_use_single_file option, which is false by default. Anyway, the storage method can be detected automatically when the option testcases_auto_detect_storage is true.
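For instance, switching a project to single-file storage might look like this (just one possible workflow):

-- enable single-file storage in setup()
require('competitest').setup {
  testcases_use_single_file = true,
}
-- then convert the already existing testcases, from the buffer
-- containing your source file:
--   :CompetiTest convert files_to_singlefile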
Launch :CompetiTest run. CompetiTest's interface will appear and you'll be able to view details about a testcase by moving the cursor over its entry. You can close the UI by pressing q, Q or :q.
If you're using a compiled language and you don't want to recompile your program, launch :CompetiTest run_no_compile.
If you have previously closed the UI and you want to re-open it without re-executing testcases or recompiling, launch :CompetiTest show_ui.
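These commands aren't mapped to any keys by the plugin; if you want shortcuts, a personal-configuration sketch could be the following (the keys are arbitrary; vim.keymap.set needs Neovim ≥ 0.7):

vim.keymap.set("n", "<F9>", "<cmd>CompetiTest run<cr>", { desc = "Run testcases" })
vim.keymap.set("n", "<F10>", "<cmd>CompetiTest run_no_compile<cr>", { desc = "Run without recompiling" })
vim.keymap.set("n", "<F11>", "<cmd>CompetiTest show_ui<cr>", { desc = "Reopen runner UI" })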
While the runner UI is open you can:
- Run a testcase again by pressing R
- Run all testcases again by pressing <C-r>
- Kill the process associated with a testcase by pressing K
- Kill all the processes associated with testcases by pressing <C-k>
- View input in a bigger window by pressing i or I
- View expected output in a bigger window by pressing a or A
- View stdout in a bigger window by pressing o or O
- View stderr in a bigger window by pressing e or E
- Toggle diff view between actual and expected output by pressing d or D
Of course all these keybindings can be customized: see runner_ui ➤ mappings in configuration.
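For example, remapping a couple of runner actions could look like this (the chosen keys are arbitrary examples):

require('competitest').setup {
  runner_ui = {
    mappings = {
      run_again = "r",       -- example: use lowercase r
      toggle_diff = "<C-d>", -- example: use Ctrl-d for the diff view
    },
  },
}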
NOTE: to get this feature working you need to install the competitive-companion extension in your browser.
Thanks to its integration with competitive-companion, CompetiTest can download contents from competitive programming platforms:
- Download only testcases with :CompetiTest receive testcases
- Download a problem with :CompetiTest receive problem (the source file is automatically created along with testcases)
- Download an entire contest with :CompetiTest receive contest (make sure to be on the homepage of the contest, not of a single problem)
After launching one of these commands click on the green plus button in your browser to start downloading.
For further customization see receive options in configuration.
By default CompetiTest stores received problems and contests in the current working directory. You can change this behavior through the options received_problems_path, received_contests_directory and received_contests_problems_path. See receive modifiers for further details.
Here are some tips:
- Fixed directory for received problems (not contests):
  received_problems_path = "$(HOME)/Competitive Programming/$(JUDGE)/$(CONTEST)/$(PROBLEM).$(FEXT)"
- Fixed directory for received contests:
  received_contests_directory = "$(HOME)/Competitive Programming/$(JUDGE)/$(CONTEST)"
- Put every problem of a contest in a different directory:
  received_contests_problems_path = "$(PROBLEM)/main.$(FEXT)"
- Example of file naming for Java contests:
  received_contests_problems_path = "$(PROBLEM)/$(JAVA_MAIN_CLASS).$(FEXT)"
- Simplified file names, working with Java and any other language, since the modifier $(JAVA_TASK_CLASS) is generated from the problem name by removing all non-alphanumeric characters, including spaces and punctuation:
  received_contests_problems_path = "$(JAVA_TASK_CLASS).$(FEXT)"
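Combining two of the tips above, the corresponding setup call would be:

require('competitest').setup {
  received_contests_directory = "$(HOME)/Competitive Programming/$(JUDGE)/$(CONTEST)",
  received_contests_problems_path = "$(PROBLEM)/main.$(FEXT)",
}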
When downloading a problem or a contest, source code templates can be configured for different file types. See the template_file option in configuration.
Receive modifiers can be used inside template files to insert details about received problems. To enable this feature set evaluate_template_modifiers to true. Template example for C++:
// Problem: $(PROBLEM)
// Contest: $(CONTEST)
// Judge: $(JUDGE)
// URL: $(URL)
// Memory Limit: $(MEMLIM)
// Time Limit: $(TIMELIM)
// Start: $(DATE)
#include <iostream>
using namespace std;
int main() {
cout << "This is a template file" << endl;
cerr << "Problem name is $(PROBLEM)" << endl;
return 0;
}
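To use a template like the one above, enable modifier evaluation and point template_file at your own files. The paths below are placeholders:

require('competitest').setup {
  evaluate_template_modifiers = true,
  template_file = {
    cpp = "~/templates/template.cpp", -- placeholder path
    py = "~/templates/template.py",   -- placeholder path
  },
}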
Here you can find CompetiTest's default configuration:
require('competitest').setup {
local_config_file_name = ".competitest.lua",
floating_border = "rounded",
floating_border_highlight = "FloatBorder",
picker_ui = {
width = 0.2,
height = 0.3,
mappings = {
focus_next = { "j", "<down>", "<Tab>" },
focus_prev = { "k", "<up>", "<S-Tab>" },
close = { "<esc>", "<C-c>", "q", "Q" },
submit = { "<cr>" },
},
},
editor_ui = {
popup_width = 0.4,
popup_height = 0.6,
show_nu = true,
show_rnu = false,
normal_mode_mappings = {
switch_window = { "<C-h>", "<C-l>", "<C-i>" },
save_and_close = "<C-s>",
cancel = { "q", "Q" },
},
insert_mode_mappings = {
switch_window = { "<C-h>", "<C-l>", "<C-i>" },
save_and_close = "<C-s>",
cancel = "<C-q>",
},
},
runner_ui = {
interface = "popup",
selector_show_nu = false,
selector_show_rnu = false,
show_nu = true,
show_rnu = false,
mappings = {
run_again = "R",
run_all_again = "<C-r>",
kill = "K",
kill_all = "<C-k>",
view_input = { "i", "I" },
view_output = { "a", "A" },
view_stdout = { "o", "O" },
view_stderr = { "e", "E" },
toggle_diff = { "d", "D" },
close = { "q", "Q" },
},
viewer = {
width = 0.5,
height = 0.5,
show_nu = true,
show_rnu = false,
close_mappings = { "q", "Q" },
},
},
popup_ui = {
total_width = 0.8,
total_height = 0.8,
layout = {
{ 4, "tc" },
{ 5, { { 1, "so" }, { 1, "si" } } },
{ 5, { { 1, "eo" }, { 1, "se" } } },
},
},
split_ui = {
position = "right",
relative_to_editor = true,
total_width = 0.3,
vertical_layout = {
{ 1, "tc" },
{ 1, { { 1, "so" }, { 1, "eo" } } },
{ 1, { { 1, "si" }, { 1, "se" } } },
},
total_height = 0.4,
horizontal_layout = {
{ 2, "tc" },
{ 3, { { 1, "so" }, { 1, "si" } } },
{ 3, { { 1, "eo" }, { 1, "se" } } },
},
},
save_current_file = true,
save_all_files = false,
compile_directory = ".",
compile_command = {
c = { exec = "gcc", args = { "-Wall", "$(FNAME)", "-o", "$(FNOEXT)" } },
cpp = { exec = "g++", args = { "-Wall", "$(FNAME)", "-o", "$(FNOEXT)" } },
rust = { exec = "rustc", args = { "$(FNAME)" } },
java = { exec = "javac", args = { "$(FNAME)" } },
},
running_directory = ".",
run_command = {
c = { exec = "./$(FNOEXT)" },
cpp = { exec = "./$(FNOEXT)" },
rust = { exec = "./$(FNOEXT)" },
python = { exec = "python", args = { "$(FNAME)" } },
java = { exec = "java", args = { "$(FNOEXT)" } },
},
multiple_testing = -1,
maximum_time = 5000,
output_compare_method = "squish",
view_output_diff = false,
testcases_directory = ".",
testcases_use_single_file = false,
testcases_auto_detect_storage = true,
testcases_single_file_format = "$(FNOEXT).testcases",
testcases_input_file_format = "$(FNOEXT)_input$(TCNUM).txt",
testcases_output_file_format = "$(FNOEXT)_output$(TCNUM).txt",
companion_port = 27121,
receive_print_message = true,
template_file = false,
evaluate_template_modifiers = false,
date_format = "%c",
received_files_extension = "cpp",
received_problems_path = "$(CWD)/$(PROBLEM).$(FEXT)",
received_problems_prompt_path = true,
received_contests_directory = "$(CWD)",
received_contests_problems_path = "$(PROBLEM).$(FEXT)",
received_contests_prompt_directory = true,
received_contests_prompt_extension = true,
open_received_problems = true,
open_received_contests = true,
replace_received_testcases = false,
}
- local_config_file_name: you can use a different configuration for every folder. See local configuration
- floating_border: for details see here
- floating_border_highlight: the highlight group used for popups border
- picker_ui: settings related to the testcase picker
  - width: a value from 0 to 1, representing the ratio between picker width and Neovim width
  - height: a value from 0 to 1, representing the ratio between picker height and Neovim height
  - mappings: keyboard mappings to interact with the picker
- editor_ui: settings related to the testcase editor
  - popup_width: a value from 0 to 1, representing the ratio between editor width and Neovim width
  - popup_height: a value from 0 to 1, representing the ratio between editor height and Neovim height
  - show_nu: whether to show line numbers or not
  - show_rnu: whether to show relative line numbers or not
  - switch_window: keyboard mappings to switch between the input window and the output window
  - save_and_close: keyboard mappings to save testcase content
  - cancel: keyboard mappings to quit the testcase editor without saving
- runner_ui: settings related to the testcase runner user interface
  - interface: interface used to display testcases data. Can be popup (floating windows) or split (normal windows). Associated settings can be found in popup_ui and split_ui
  - selector_show_nu: whether to show line numbers or not in the testcase selector
  - selector_show_rnu: whether to show relative line numbers or not in the testcase selector
  - show_nu: whether to show line numbers or not in details windows
  - show_rnu: whether to show relative line numbers or not in details windows
  - mappings: keyboard mappings used in the testcase selector window
    - run_again: keymaps to run a testcase again
    - run_all_again: keymaps to run all testcases again
    - kill: keymaps to kill a testcase
    - kill_all: keymaps to kill all testcases
    - view_input: keymaps to view input (stdin) in a bigger window
    - view_output: keymaps to view expected output in a bigger window
    - view_stdout: keymaps to view the program's output (stdout) in a bigger window
    - view_stderr: keymaps to view the program's errors (stderr) in a bigger window
    - toggle_diff: keymaps to toggle diff view between actual and expected output
    - close: keymaps to close the runner user interface
  - viewer: settings related to the viewer window
    - width: a value from 0 to 1, representing the ratio between viewer window width and Neovim width
    - height: a value from 0 to 1, representing the ratio between viewer window height and Neovim height
    - show_nu: whether to show line numbers or not in the viewer window
    - show_rnu: whether to show relative line numbers or not in the viewer window
    - close_mappings: keymaps to close the viewer window
- popup_ui: settings related to the testcase runner popup interface
  - total_width: a value from 0 to 1, representing the ratio between total interface width and Neovim width
  - total_height: a value from 0 to 1, representing the ratio between total interface height and Neovim height
  - layout: a table describing popup UI layout. For further details see here
- split_ui: settings related to the testcase runner split interface
  - position: can be top, bottom, left or right
  - relative_to_editor: whether to open the split UI relative to the entire editor or to the local window
  - total_width: a value from 0 to 1, representing the ratio between total vertical split width and relative window width
  - vertical_layout: a table describing vertical split UI layout. For further details see here
  - total_height: a value from 0 to 1, representing the ratio between total horizontal split height and relative window height
  - horizontal_layout: a table describing horizontal split UI layout. For further details see here
- save_current_file: if true save the current file before running testcases
- save_all_files: if true save all the opened files before running testcases
- compile_directory: execution directory of the compiler, relative to the current file's path
- compile_command: configure the command used to compile code for every different language, see here
- running_directory: execution directory of your solutions, relative to the current file's path
- run_command: configure the command used to run your solutions for every different language, see here
- multiple_testing: how many testcases to run at the same time
  - set it to -1 to make the most of the available parallelism. Often the number of testcases run at the same time coincides with the number of CPUs
  - set it to 0 if you want to run all the testcases together
  - set it to any positive integer to run that number of testcases at the same time
- maximum_time: maximum time, in milliseconds, given to processes. If it's exceeded the process will be killed
- output_compare_method: how the given output (stdout) and the expected output should be compared. It can be a string, representing the method to use, or a custom function. Available options follow:
  - "exact": character-by-character comparison
  - "squish": compare stripping extra white spaces and newlines
  - custom function: a function accepting two arguments, two strings representing output and expected output. It should return true if the given output is acceptable, false otherwise. Example:
    require('competitest').setup {
      output_compare_method = function(output, expected_output)
        if output == expected_output then
          return true
        else
          return false
        end
      end,
    }
- view_output_diff: view diff between actual output and expected output in their respective windows
- testcases_directory: where testcases files are located, relative to the current file's path
- testcases_use_single_file: if true testcases will be stored in a single file instead of using multiple text files. If you want to change the way already existing testcases are stored see conversion
- testcases_auto_detect_storage: if true the testcases storage method will be detected automatically. When both text files and a single file are available, testcases will be loaded according to the preference specified in testcases_use_single_file
- testcases_single_file_format: string representing how single testcases files should be named (see file-format modifiers)
- testcases_input_file_format: string representing how testcases input files should be named (see file-format modifiers)
- testcases_output_file_format: string representing how testcases output files should be named (see file-format modifiers)
- companion_port: competitive-companion port number
- receive_print_message: if true notify the user that the plugin is ready to receive testcases, problems and contests, or that they have just been received
- template_file: templates to use when creating source files for received problems or contests. Can be one of the following:
  - false: do not use templates
  - string with file-format modifiers: useful when templates for different file types follow a regular naming scheme
    template_file = "~/path/to/template.$(FEXT)"
  - table with paths: a table associating each file extension with a template file
    template_file = {
      c = "~/path/to/file.c",
      cpp = "~/path/to/file.cpp",
      py = "~/path/to/file.py",
    }
- evaluate_template_modifiers: whether to evaluate receive modifiers inside a template file or not
- date_format: string used to format the $(DATE) modifier (see receive modifiers). The string should follow the formatting rules of Lua's os.date function. For example, to get 06-07-2023 15:24:32 set it to %d-%m-%Y %H:%M:%S
- received_files_extension: default file extension for received problems
- received_problems_path: path where received problems (not contests) are stored. Can be one of the following:
  - string with receive modifiers
  - function: a function accepting two arguments, a table with task details and a string with the preferred file extension. It should return the absolute path where the received problem is stored. Example:
    received_problems_path = function(task, file_extension)
      local hyphen = string.find(task.group, " - ")
      local judge, contest
      if not hyphen then
        judge = task.group
        contest = "unknown_contest"
      else
        judge = string.sub(task.group, 1, hyphen - 1)
        contest = string.sub(task.group, hyphen + 3)
      end
      return string.format("%s/Competitive Programming/%s/%s/%s.%s", vim.loop.os_homedir(), judge, contest, task.name, file_extension)
    end
- received_problems_prompt_path: whether to ask the user for confirmation about the path where the received problem is stored or not
- received_contests_directory: directory where received contests are stored. It can be a string or a function, exactly as received_problems_path
- received_contests_problems_path: relative path from the contest root directory; each problem of a received contest is stored following this option. It can be a string or a function, exactly as received_problems_path
- received_contests_prompt_directory: whether to ask the user for confirmation about the directory where received contests are stored or not
- received_contests_prompt_extension: whether to ask the user for confirmation about what file extension to use when receiving a contest or not
- open_received_problems: automatically open source files when receiving a single problem
- open_received_contests: automatically open source files when receiving a contest
- replace_received_testcases: this option applies when receiving only testcases. If true replace existing testcases with the received ones, otherwise ask the user what to do
You can use a different configuration for every folder by creating a file called .competitest.lua (this name can be changed by configuring the option local_config_file_name). It will affect every file contained in that folder and in its subfolders. A table containing valid options must be returned, see the following example.
-- .competitest.lua content
return {
multiple_testing = 3,
maximum_time = 2500,
testcases_input_file_format = "in_$(TCNUM).txt",
testcases_output_file_format = "ans_$(TCNUM).txt",
testcases_single_file_format = "$(FNOEXT).tc",
}
Modifiers are substrings that will be replaced by another string, depending on the modifier and the context. They're used to tweak some options.
You can use them to define commands or to customize testcases file naming through the options testcases_single_file_format, testcases_input_file_format and testcases_output_file_format.
Modifier | Meaning
---|---
$() | insert a dollar
$(HOME) | user home directory
$(FNAME) | file name
$(FNOEXT) | file name without extension
$(FEXT) | file extension
$(FABSPATH) | absolute path of current file
$(ABSDIR) | absolute path of folder that contains file
$(TCNUM) | testcase number
You can use them to customize the options received_problems_path, received_contests_directory and received_contests_problems_path, and to insert problem details inside template files. See also the tips for customizing the folder structure for received problems and contests.
Modifier | Meaning
---|---
$() | insert a dollar
$(HOME) | user home directory
$(CWD) | current working directory
$(FEXT) | preferred file extension
$(PROBLEM) | problem name, name field
$(GROUP) | judge and contest name, group field
$(JUDGE) | judge name (first part of group, before the hyphen)
$(CONTEST) | contest name (second part of group, after the hyphen)
$(URL) | problem url, url field
$(MEMLIM) | available memory, memoryLimit field
$(TIMELIM) | time limit, timeLimit field
$(JAVA_MAIN_CLASS) | almost always "Main", mainClass field
$(JAVA_TASK_CLASS) | classname-friendly version of problem name, taskClass field
$(DATE) | current date and time (based on date_format), can be used only inside template files
The fields mentioned above belong to the received task's data.
C, C++, Rust, Java and Python are supported by default.
Of course you can customize the commands used for compiling and running your programs. You can also add languages that aren't supported by default.
require('competitest').setup {
compile_command = {
cpp = { exec = 'g++', args = {'$(FNAME)', '-o', '$(FNOEXT)'} },
some_lang = { exec = 'some_compiler', args = {'$(FNAME)'} },
},
run_command = {
cpp = { exec = './$(FNOEXT)' },
some_lang = { exec = 'some_interpreter', args = {'$(FNAME)'} },
},
}
See file-format modifiers to better understand how dollar notation works.
NOTE: if your language isn't compiled you can ignore the compile_command section.
Feel free to open a PR or an issue if you think it's worth adding a new language to the defaults.
You can customize the testcase runner user interface by defining window positions and sizes through a table describing a layout. This is possible both for popup and split UI.
Every window is identified by a string representing its name and a number representing the proportion between its size and the sizes of the other windows. To define a window use a Lua table made of a number and a string, for example { 1.5, "tc" }.
Windows can be named as follows:
- tc for testcases selector
- si for standard input
- so for standard output
- se for standard error
- eo for expected output
A layout is a list made of windows or other layouts (recursively defined). To define a layout use a Lua table containing a list of windows or layouts.
Here are two sample layouts:

layout = {
  { 2, "tc" },
  { 3, {
    { 1, "so" },
    { 1, "si" },
  } },
  { 3, {
    { 1, "eo" },
    { 1, "se" },
  } },
}

layout = {
  { 1, {
    { 1, "so" },
    { 1, {
      { 1, "tc" },
      { 1, "se" },
    } },
  } },
  { 1, {
    { 1, "eo" },
    { 1, "si" },
  } },
}
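A layout table like the first one above is assigned to popup_ui.layout (or, for the split interface, to split_ui.vertical_layout / split_ui.horizontal_layout). For example:

require('competitest').setup {
  popup_ui = {
    layout = {
      { 2, "tc" },
      { 3, { { 1, "so" }, { 1, "si" } } },
      { 3, { { 1, "eo" }, { 1, "se" } } },
    },
  },
}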
When using the split UI, window names can be displayed in the statusline or in the winbar. In each CompetiTest buffer there's a buffer-local variable called competitest_title, a string representing the window name. You can get its value using nvim_buf_get_var(buffer_number, 'competitest_title').
See the second screenshot for an example statusline used with split UI.
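CompetiTest doesn't set the statusline or winbar for you; if you want to display the window name yourself, a minimal sketch (assuming Neovim ≥ 0.8 for winbar support) could be:

-- show the CompetiTest window name in the winbar of its window
vim.api.nvim_create_autocmd("BufWinEnter", {
  callback = function(args)
    local ok, title = pcall(vim.api.nvim_buf_get_var, args.buf, "competitest_title")
    if ok then
      vim.wo.winbar = title
    end
  end,
})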
You can customize CompetiTest highlight groups. Their default values are:
hi CompetiTestRunning cterm=bold gui=bold
hi CompetiTestDone cterm=none gui=none
hi CompetiTestCorrect ctermfg=green guifg=#00ff00
hi CompetiTestWarning ctermfg=yellow guifg=orange
hi CompetiTestWrong ctermfg=red guifg=#ff0000
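If you prefer defining highlights from Lua, an equivalent override might look like this (the color is an arbitrary example):

-- example: override the highlight used for correct testcases
vim.api.nvim_set_hl(0, "CompetiTestCorrect", { ctermfg = "green", fg = "#00ff00", bold = true })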
- Manage testcases
- Add testcases
- Edit testcases
- Delete testcases
- Store testcases in a single file
- Store testcases in multiple text files
- Convert single file into multiple text files and vice versa
- Run testcases
- Support many programming languages
- Handle compilation if needed
- Run multiple testcases at the same time
- Run again processes
- Kill processes
- Display results and execution data in a popup UI
- Display results and execution data in a split window UI
- Handle interactive tasks
- Configure every folder individually
- Integration with competitive-companion
- Download testcases
- Download problems
- Download contests
- Customizable folder structure for downloaded problems and contests
- Templates for files created when receiving problems or contests
- Integration with tools to submit solutions (api-client or cpbooster)
- Write Vim docs
- Customizable highlights
- Resizable UI
If you have any suggestions or if you encounter any trouble, don't hesitate to open a new issue.
Pull Requests are welcome! 🎉
GNU Lesser General Public License version 3 (LGPL v3) or, at your option, any later version
Copyright © 2021-2023 xeluxee
CompetiTest.nvim is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
CompetiTest.nvim is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with CompetiTest.nvim. If not, see https://www.gnu.org/licenses/.