Tips for using K #4655

Stevengre opened this issue Oct 4, 2024 · 0 comments
Here are some issues I encountered when writing the paper and their solutions. Some of this content might need to be updated in the documentation. Most of the solutions are provided by @tothtamas28.

kdist

kbuild is no longer maintained; consider using kdist instead: https://github.com/runtimeverification/k/blob/master/pyk/src/pyk/kdist/README.md

kbuild lets you save parameters for calling kompile into a kbuild.toml file.

On top of that, it is opinionated in where the output artifacts are stored, and how dependencies between K projects are handled.

This is sufficient for small projects, but typically we want to parameterize the build, build native dependencies, etc.

kdist is a very simple build tool that lets you distribute build scripts with Python projects.

It uses the Python plugin mechanism to extend the CLI provided by the tool with build targets defined by the user.

Here's an example.
https://github.com/runtimeverification/wasm-semantics/blob/5aa5885fddb9b2002889d821896052065fab5491/pykwasm/pyproject.toml#L19-L20

https://github.com/runtimeverification/wasm-semantics/blob/5aa5885fddb9b2002889d821896052065fab5491/pykwasm/src/pykwasm/kdist/plugin.py

If you install this Python package and run `kdist list`, all the targets defined in plugin.py will be listed, and you can call `kdist build` to build them.
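For reference, the plugin registration in pyproject.toml looks roughly like the linked wasm-semantics lines (the package name and module path here are from that example; adjust them to your own project, and check the kdist README for the exact conventions):

```toml
[tool.poetry.plugins.kdist]
wasm-semantics = "pykwasm.kdist.plugin"
```

With this entry point in place, installing the package makes its build targets discoverable by the `kdist` CLI.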

Tips for when an argument is not supported yet

I discovered the `--bison-stack-max-depth` option, but there doesn't seem to be any documentation or help available for it. I'm particularly struggling to set it via kdist, as `bison-stack-max-depth=10000000` doesn't work. Does anyone know where I can find all the available options?

Kompilation appeared to go through, but the parser's stack size was not actually restricted.

The issue here is that pyk does not support this kompile option yet.

It's an easy fix though, please open a PR for this.

Here's a similar change from earlier with another flag: c61206d

Only the changes in pyk/src/pyk/ktool/kompile.py are necessary.

Tips for performance

A profiler is probably not what you want: it gives you a breakdown of how much time is spent in each function during execution.

And unfortunately, profiling adds noticeable overhead to execution time.

You can consider using this module, but even that is not necessary: https://docs.python.org/3/library/timeit.html
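For example, timing a build-like call end to end with timeit looks like this (the timed function here is a placeholder for whatever call you want to measure):

```python
import timeit


def run_build() -> None:
    # Placeholder for the call you want to measure,
    # e.g. a kompile or krun invocation.
    sum(range(1000))


# Run the call 10 times and report total wall-clock seconds;
# increase `number` to average out noise.
elapsed = timeit.timeit(run_build, number=10)
print(f'{elapsed:.6f} s for 10 runs')
```

Unlike a profiler, this adds essentially no overhead to the measured call itself.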

Tips for rule coverage testing

There are some scripts in pyk to process coverage data emitted by the frontend.

https://github.com/runtimeverification/k/blob/master/pyk/src/pyk/coverage.py

https://github.com/runtimeverification/k/blob/master/pyk/src/pyk/kcovr.py

coverage is even exposed on the CLI:

```python
def exec_coverage(options: CoverageOptions) -> None:
```

But `kcovr` can also be called from the command line:

```sh
poetry run python3 -m pyk.kcovr ...
```

@geo2a
The coverage tooling in K, as far as I'm aware, works as follows:

1. Build the semantics with `kompile --backend llvm --coverage`.
2. Run a program with `krun` --- this will put a coverage.txt into the `*-kompiled` directory. The same coverage.txt will be updated on consecutive calls to `krun`.
3. Tools such as `kcovr` take this coverage.txt and the kompiled definition and generate a nice coverage report.

To build any kind of custom tooling one needs to work with coverage.txt and compiled.txt/definition.kore --- that is the lowest level.

I don't know how compiled.txt is generated --- there must be some sort of instrumentation of the LLVM interpreter happening.

@ehildenb
Generating coverage reports will impact execution speed, since information needs to be written to a file on every rewrite step. So you definitely want to measure those separately. Doing `kompile ... --coverage ...` is the way to get the coverage reports, and then the `kcovr ...` tool processes them into an XML coverage report.

I believe that each invocation of krun should produce a separate coverage_PID.txt file, where PID is the process ID of that krun process.
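If you need custom tooling on top of the emitted coverage files, a simple tally is easy to write. The sketch below assumes each line of the file holds one rule identifier (verify the actual format emitted by your backend before relying on this):

```python
from collections import Counter
from pathlib import Path


def tally_coverage(path: Path) -> Counter:
    """Count how often each rule identifier appears in a coverage file,
    assuming one identifier per line (blank lines are ignored)."""
    lines = (line.strip() for line in path.read_text().splitlines())
    return Counter(line for line in lines if line)


# Example usage against a kompiled definition's output:
#   for rule, hits in tally_coverage(Path('coverage.txt')).most_common():
#       print(hits, rule)
```

For anything more structured (mapping identifiers back to source rules), the pyk.coverage module linked above is the place to look.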

top_cell_initializer

```python
def top_cell_initializer(config: Mapping[str, Pattern]) -> App:
    return init_generated_top_cell(
        map_pattern(
            *(
                (
                    inj(SORT_K_CONFIG_VAR, SORT_K_ITEM, k_config_var(key)),
                    value,
                )
                for key, value in config.items()
            )
        )
    )
```

This calls
```python
def init_generated_top_cell(pattern: Pattern) -> App:
    return App(LBL_INIT_GENERATED_TOP_CELL, (), (pattern,))
```

So in the end, a KORE term LblinitGeneratedTopCell{}() is created. There are rewrite rules in definition.kore to expand this into a configuration, based on the map passed as argument.
I'm not sure what happens if a config var is not defined in the map (the rewrite will probably get stuck), but if a symbolic value is set for some config var, the (concrete) interpreter will definitely crash.

gen-glr-bison-parser

`--gen-glr-parser` generates a parser binary:

```sh
$ kast parser --gen-glr-parser --definition imp-kompiled
```

Given pgm.imp:

```
int x;
x = 1;
```

the generated parser emits the program as a KORE term:

```sh
$ ./parser pgm.imp
Lblint'UndsSClnUnds'{}(Lbl'UndsCommUnds'{}(\dv{SortId{}}("x"),Lbl'Stop'List'LBraQuotUndsCommUndsQuotRBra'{}()),Lbl'UndsEqlsUndsSCln'{}(\dv{SortId{}}("x"),inj{SortInt{}, SortAExp{}}(\dv{SortInt{}}("1"))))
```

It's a bit confusing since this feature reuses the kast CLI for a purpose completely different than conversion between formats.

`kompile --gen-glr-bison-parser` might provide the same feature (or something different, I'm not sure).

But anyways, if you already have a kdist setup, you can add a target for generating the program parser.

`kast` takes `--module` and `--sort`; my guess is you can use those flags to specify the sort to parse.

K_OPTS

JVM options for K's Java-based tools can be passed via the `K_OPTS` environment variable, for example to raise the maximum heap size:

```sh
K_OPTS+=-Xmx200g
```

More for parsers

K syntaxes can be contextual. GLR parsers are more general.

You can also pass `--gen-bison-parser` instead of `--gen-glr-parser` to get a non-GLR parser, but if your grammar is not parsable with an LR parser, it won't work very well (you'll get lots of ambiguities).

When you use the prec attribute, or prefer and avoid, for example, I think that makes you contextual.

Tips for optimizing

I don't think deep nesting should really cause a performance issue with parsing....

How many example programs do you have to try it with? I think having a bunch which use different features of the language could help pinpoint why it's slow.

And you can try terms in different sorts/subsorts (and generate the parsers for those sorts/subsorts), and see which ones are slow.

So for the above terms, break it down into subterms, make a parser for each subterm sort, and see which of those are slow.

Do you have lots of tokens in your grammar, perhaps? Or lots of regexes?

I guess that `::= Dimension "x" [token]` looks pretty weird... `token` should only go on individual token productions, not ones with arguments. I didn't even know that was supported/allowed.

All of those regexes may overlap and slow down scanning; my guess is that it could be a big source of issues.

Ideas for optimization

For “prefer” and “avoid,” I think there’s room for optimization. In my case, it seems like just swapping out the keywords could do the trick.

`prec` primarily affects the lexer, and I think there are some tricks to it.

Consider a requirement like "x should be declared before it is used". That doesn't look 'really' contextual:

```
int x;
x = 0
```

@Stevengre Stevengre self-assigned this Oct 4, 2024