A cornucopia of optimisations #98

Open · wants to merge 40 commits into base: legacy
Conversation

JobLeonard
Collaborator

This combines various suggestions from @Rycochet and @lishid and myself. See #88 and #97.

  • replace string-based keyStr.charAt(i) with array-based keyStr[i]
  • add compressToArray and decompressFromArray functions, and use them throughout the code to avoid intermediate string creation:
    • compressToBase64 avoids appending to string
    • compressToUint8Array avoids an intermediate string
    • decompressFromUint8Array avoids an intermediate string

Similar optimisations have been applied to base64-string.js, but it doesn't have tests so I can't say for sure if it's bug-free.

Instead of:

- function call
- conditional check for alphabet existence
- dictionary lookup
- dictionary lookup

... the new version only does one dictionary lookup. Should lower overhead.
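Roughly, the change amounts to building the reverse dictionary once and indexing it directly. A sketch (keyStrBase64 matches the existing source; reverseDict is an illustrative name):

var keyStrBase64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";

// Build the reverse lookup table once, up front.
var reverseDict = {};
for (var i = 0; i < keyStrBase64.length; i++) {
  reverseDict[keyStrBase64[i]] = i;
}

// Per character in the Base64/URI decompressors, the old path was
//   getBaseValue(keyStrBase64, input.charAt(index))
// (a call, an "is the table built yet?" check, and two lookups);
// the new path is a single lookup:
//   reverseDict[input.charAt(index)]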
…s tweaks

To avoid creating a string, we introduce two new functions, `compressToArray` and `decompressFromArray`. The former can easily be wrapped by `compress` to minimize code duplication. Sadly `decompressFromArray` isn't wrapped so easily. To alleviate this somewhat, we introduce the private `_chrToOutput` function. Note that it's <500 chars, before minification, meaning that v8 will likely inline it anyway.
Slightly faster on most browsers.
- compressToBase64 avoids string append
- compressToUint8Array avoids intermediate string
- decompressFromUint8Array avoids intermediate string

As a side-effect, the UTF16 functions get slightly more complicated, but the only alternative was a *lot* of code duplication.
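For reference, the compress/compressToArray split described above looks roughly like this (a sketch; the actual signatures in the branch may differ):

compressToArray: function (uncompressed) {
  // the LZ core pushes one output character per token into an array
  return LZString._compressToArray(uncompressed, 16, function (a) { return String.fromCharCode(a); });
},
compress: function (uncompressed) {
  // thin wrapper: join once at the end instead of appending to a string
  // for every output token
  return LZString.compressToArray(uncompressed).join('');
},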
@JobLeonard
Collaborator Author

Some benchmarks:

plain compress/decompress:
http://jsbench.github.io/#2a07fd14b55d44291da4b06d3ba6e5c3

base64:
http://jsbench.github.io/#da32b5d1100c24c7744bb58ca3fff440

UTF16:
http://jsbench.github.io/#4029cae03b1e2fc06ad44e35f5bfca6b

Uri:
http://jsbench.github.io/#54c40822dabdbd46a93fb0b7ff6832d9

Uint8:
http://jsbench.github.io/#14395aeb1f452cefa52b5f86b0f644c5

I'd say performance is... probably a bit better, but the difference is almost negligible, within the margin of error.

Given the changes to the code, there must be less allocation going on, so that must have improved a bit at least.

I had them all in one big benchmark, but then I tried to profile them using DevTools. Those tools then suggested that compress and decompress were deoptimised! The reason, I think, is that I used all of the functions, so a different closure was passed in on every call, de-opting the compress function so often that V8 was like "screw this!". So I split it into different benchmarks hoping to mitigate that, since I figure real-life usage doesn't usually touch all of the functions, and because it's unclear when the de-opt hits, making the benchmark less reliable.

Please check whether the input data is relevant - I basically took the test string (the one about tattoos) and tripled it, causing a lot of redundancy.

@Rycochet
Collaborator

I was recently optimising some startup code in an app, and found that the GC can actually have a significant impact (up to 200ms for 16MB) - but as it's not something that can easily be forced, it's hard to get consistent profiling that includes it, unless you run a significantly longer test with setImmediate()-style callbacks and known, consistent memory allocation and freeing.

TL;DR: Reducing the need for GC is always good ;-)

@JobLeonard
Collaborator Author

The main issue is that it's not easy to see if it's significant in the larger scheme of things - perhaps all computation and memory allocation happens in the core algorithm that I didn't touch.

I think I should set up a plain website with this code, run it through both functions (let's just use the plain compress/decompress as a baseline) with a yuuuge string repeatedly, while running the browser performance tools to keep track of allocations and such. That might give a better overview.

@tophf

tophf commented Jun 27, 2017

Deoptimization is triggered by various patterns (some links: Deoptimization in V8, Optimization-killers). In your tests, compress invocations supposedly use the same parameter signature/types so the actual problem is within the function itself. In my experience, even if there are no obvious deopt triggers in the code, it may be caused by the sheer size of the function and splitting solved that for me. Preventing deoptimization is very important as its impact often/usually negates any perf gains, but unfortunately I'm not an expert either.

@JobLeonard
Collaborator Author

Hmm, time to dive into this a little deeper and see if we can fix that. Still on sick leave anyway so this is a fun side-track :)

@JobLeonard
Collaborator Author

The thing I was referring to is that compress is passed a function, which itself is usually a closure. I suspect that level of dynamism prevents some deeper optimisations.

This is pretty old, so probably somewhat outdated, but I'll look into it later:

http://mrale.ph/blog/2012/09/23/grokking-v8-closures-for-fun.html

@tophf

tophf commented Jun 27, 2017

Ah, indeed, no deopt on any of the functions if I test in devtools manually via copypaste, a primitive 100-rep loop, and console.time. The difference is negligible just the same. FWIW when switching to ES6 Map and Set in lz-string I clearly see the advertised 2x gain.

@JobLeonard
Collaborator Author

JobLeonard commented Jun 28, 2017

Related to that, I just realised there's another optimisation possible for Base64 and URI-safe compressors: don't use charAt, use charCodeAt. The reason is that looking up integers in hashmaps is much faster than strings. I have some test code below:

// SETUP
var base64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";

var charDict = (function (){
    var dict = {};
    for (var i = 0; i < base64.length; i++){
        dict[base64.charAt(i)] = i;
    }
    return dict;
})();

var charCodeDict = (function () {
    var dict = {};
    for (var i = 0; i < base64.length; i++){
        dict[base64.charCodeAt(i)] = i;
    }
    return dict;
})();

var randomString = (function (){
    var i = 100000, strArray = new Array(i);
    while(i--){
        strArray[i] = base64.charAt(Math.random() * base64.length | 0); // pick a random character from the alphabet
    }
    return strArray.join('');
})();

var converted = 0;

//BENCHMARKS

// charDict
for (var i = 0; i < randomString.length; i++){
    converted = charDict[randomString.charAt(i)];
}

// charCodeDict
for (var i = 0; i < randomString.length; i++){
    converted = charCodeDict[randomString.charCodeAt(i)];
}

http://jsbench.github.io/#a5c234621b81cd41b26e31d7f92f62d4

On my machines integer key lookups are almost 3x faster.

For Base64 and URI this is a drop-in replacement because charAt is only used as an intermediate dictionary lookup.
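A sketch of what that drop-in change could look like for the Base64 decoder (identifiers here are illustrative, not the branch's actual names):

var keyStrBase64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";

// Reverse dictionary keyed by char codes instead of one-character strings.
var reverseCodeDict = {};
for (var i = 0; i < keyStrBase64.length; i++) {
  reverseCodeDict[keyStrBase64.charCodeAt(i)] = i;
}

// decompressFromBase64's per-character callback then becomes:
//   function (index) { return reverseCodeDict[input.charCodeAt(index)]; }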

I'll look up the LZW algorithm, read the current implementation of it, and see if there are more places where we can use charCodeAt instead of charAt in this way.

@JobLeonard
Collaborator Author

@tophf: can you show me what the faster Map and Set code looks like? When I added Map to my previous benchmarks it turned out much slower than even plain object lookup there.

Anyway, small correction: for Base64 and URI decompression this is a drop-in replacement.

So looking into this a bit further, there are two different ways we can apply this change:

Returning an array of charCodes instead of chars

This might be an optimisation, it might not be. Essentially, all we need to know is whether String.fromCharCode.apply(null, charCodeArray) is significantly slower than charArray.join('').

If not, we can apply this optimisation safely, essentially replacing my compressToArray code with compressToCharCodeArray. Also, this would simplify compressToUint8Array even further:

  compressToUint8Array: function (uncompressed) {
    //var compressed = LZString.compressToArray(uncompressed);
    var compressed = LZString.compressToCharCodeArray(uncompressed);
    var buf=new Uint8Array(compressed.length*2); // 2 bytes per character

    for (var i=0, TotalLen=compressed.length; i<TotalLen; i++) {
      //var current_value = compressed[i].charCodeAt(0);
      var current_value = compressed[i];
      buf[i*2] = current_value >>> 8;
      buf[i*2+1] = current_value % 256;
    }
    return buf;
  },

I'll write a benchmark to test this later, and if it does not immediately make me think it's worse I'll implement it, and benchmark it again.

Replace the LZW dictionary with a trie

Oh goodness, if this works like I think it works, this could be big!

This applies to _compress, or _compressToArray in my branch.

So this requires a rudimentary understanding of the actual LZW algorithm. I'm using this pseudocode explanation as a guide:

https://www.cs.duke.edu/csed/curious/compression/lzw.html

Basically, LZW is about building a symbol table of prefixes (a JS object), and replacing those prefixes with smaller values in the output stream (an array of integers that then is converted into chars). In a "skipping the crucial-but-currently-not-relevant details" way:

  • scan through the input string
  • keep track of the current prefix through context_w, which starts as an empty string
  • store the current character as context_c
  • Object.prototype.hasOwnProperty.call(context_dictionary,context_w + context_c)?
    • true
      1. context_w = context_w + context_c
      2. continue
    • false
      1. write the value associated with key context_w to the output array
      2. add context_w + context_c as a new prefix to the symbol table
      3. context_w = context_c
      4. continue

I'm skipping a lot of stuff - there's a reason the _compress function is over 200 lines of code! But the parts I'm skipping aren't affected by this optimisation.

So charAt comes into play at the context_c level. Can we replace it with charCodeAt? Yes, I think we can! To do so, we replace the flat dictionary object with a trie which has nodes that use charCodes instead of chars as keys:

  • scan through the input string
  • keep track of the current prefix through context_node, which starts at the root of the dictionary, initialised as {}
  • store the current charCode as context_c
  • context_node[context_c] !== undefined?
    • true
      1. context_node = context_node[context_c]
      2. continue
    • false
      1. write context_node[-1] to the output array
      2. context_node[context_c] = { val: <new prefix val> }
      3. context_node = root
      4. continue

So each node will just consist of numerical charCode keys, plus one -1 key¹ to store the actual integer value of the prefix.
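A rough sketch of that matching loop, stripped of everything _compress does around it (bit packing, the three reserved tokens, lazy single-character entries), just to show the trie idea:

// A simplified LZW pass over `input`, with the prefix dictionary kept as a
// trie keyed by charCodes; each node stores its own prefix code under key -1.
function lzwCodes(input) {
  var dictSize = 0;
  var root = {};
  var output = [];
  function newNode() { var n = {}; n[-1] = dictSize++; return n; }

  if (input.length === 0) return output;

  // seed the trie with every distinct single character (classic LZW seeds
  // with a fixed alphabet; lz-string adds these lazily instead)
  for (var i = 0; i < input.length; i++) {
    var c = input.charCodeAt(i);
    if (root[c] === undefined) root[c] = newNode();
  }

  var node = root[input.charCodeAt(0)];
  for (var i = 1; i < input.length; i++) {
    var c = input.charCodeAt(i);
    if (node[c] !== undefined) {
      node = node[c];          // prefix + c is already known: keep extending
    } else {
      output.push(node[-1]);   // emit the code of the longest known prefix
      node[c] = newNode();     // register prefix + c as a new dictionary entry
      node = root[c];          // restart matching from the single character c
    }
  }
  output.push(node[-1]);       // flush the last prefix
  return output;
}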

The gain here consists of two things:

  • no string appending, so we should have no unnecessary object allocation.
  • integer keys instead of string keys, which as the benchmarks I linked above demonstrate are much faster.

And it gets better! Right now there are a lot of calls to Object.prototype.hasOwnProperty.call(). I presume this is to avoid the risk of accidentally matching one of the keys of Object.prototype:

[screenshot]

In the new version we'll never use strings for our keys, so there will never be any such collisions! Which means we can safely remove all of these calls and just use object[charCode] !== undefined instead, which, as you can see in the second benchmark I linked at the top, blows everything else out of the water!

¹ If we use a string key we lose all benefits of using only numerical lookups - I tested this in benchmarks. Since charCodeAt is guaranteed to return a number between 0 and 65535, that means we can use either -1 or 65536. Using 65536 makes the hashing very slow, for whatever reason, but -1 has no negative effect.

Anyway, just writing this down took quite a bit of energy, but now I have an idea of what to do. I'll start working on this tomorrow and see if it turns out as good as I hope it will!

@tophf

tophf commented Jun 29, 2017

can you show me what the faster Map and Set code looks like?

https://gist.github.com/tophf/e8962e43efe35233212cf04d8d7cd317

2x speedup compared to the nonminified original version as tested on compressToUTF16/decompressFromUTF16 in modern versions of Chrome. Didn't test other functions nor other browsers. The measurements were performed in real code that compressed a megabyte or so of HTML.
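Not the gist's actual code, but the flavour of that change is roughly the following sketch - the prefix dictionary becomes a Map, so the Object.prototype.hasOwnProperty.call checks disappear:

// Sketch only: the LZW phrase-building loop with a Map instead of a plain
// object. Map.has() handles any string key without prototype collisions.
function buildPhraseDictionary(input) {
  var dictionary = new Map(); // prefix string -> code
  var dictSize = 0;
  var w = "";
  for (var i = 0; i < input.length; i++) {
    var wc = w + input.charAt(i);
    if (dictionary.has(wc)) {
      w = wc;                         // extend the current prefix
    } else {
      dictionary.set(wc, dictSize++); // register the new prefix
      w = input.charAt(i);            // restart from the current character
    }
  }
  return dictionary;
}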

Essentially, all we need to know is whether String.fromCharCode.apply(null, charCodeArray)) is significantly slower than charArray.join('').

The stack is still used for the arguments passed via .apply, so anything larger than 32k is a risk depending on the browser. And even that is a risk, so I usually do it in 1k chunks, then join them.
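A minimal version of that chunking approach (a sketch; 1024 is just the chunk size mentioned above):

function charCodesToString(codes) {
  var parts = [];
  for (var i = 0; i < codes.length; i += 1024) {
    // .apply never sees more than 1024 arguments, so the stack stays safe
    parts.push(String.fromCharCode.apply(null, codes.slice(i, i + 1024)));
  }
  return parts.join('');
}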

@JobLeonard
Collaborator Author

Thanks, pretty straightforward.

I guess the reason they're not faster in my benchmarks is because I'm limiting myself to 64 single-character keys. Still, the charCodeAt() optimisations are kind of orthogonal to Map and Set so we could even apply both if necessary!

(I wonder if @pieroxy is on holiday or something, and will come back thinking "I look away for one second and then this happens?")

@JobLeonard
Collaborator Author

The stack is still used for the arguments passed via .apply, so anything larger than 32k is a risk depending on the browser. And even that is a risk, so I usually do it in 1k chunks, then join them.

Ooof... that sounds like performance would fluctuate wildly among browsers and hardware. I'll focus on the other possible optimisations first.

@JobLeonard
Collaborator Author

Some more thoughts on that de-opting:

let factory = function(){
    return function(){
        return 0;
    }
};
let a = factory();
let b = factory();
let c = a;

a === b; //false
a === c; //true

Right now we pass new functions on every call to any compressor/decompressor. If we hoist those we can maybe make things easier for the JIT compilers. However, the current set-up requires it, because the getCharFromInt and getNextValue callbacks close over the passed input or compressed strings:

  //decompress from uint8array (UCS-2 big endian format)
  decompressFromUint8Array:function (compressed) {
    if (compressed===null || compressed===undefined){
        return LZString.decompressFromArray(compressed);
    } else if (compressed.length == 0){
      return null;
    }
    return LZString._decompress(compressed.length, 128, function (index) { return compressed[index]; });
  },

That would require a bit of rewriting too.
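One possible shape for that rewrite (purely a sketch, not the branch's code): hoist the accessor to module level and pass the data alongside the index, so every call reuses the same function identity:

// Hoisted once; never recreated per decompress call.
function getUint8Value(data, index) {
  return data[index];
}

// _decompress would then need to accept the data and call
// getNextValue(data, index) instead of getNextValue(index), e.g.:
//   return LZString._decompress(compressed.length, 128, compressed, getUint8Value);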

@pieroxy
Owner

pieroxy commented Jun 30, 2017

First of all thanks for all the work here. I am eager to see the end result :-)

Just as a side note, be careful when you advertise a 2x increase. It may be so in your browser (let's say Chrome), but the same code might actually be slower in IE or Firefox. When I did a pass at perf optimisations back in the day I probably created 25 jsperf benchmarks and had many surprises on that front.

That said, all this looks promising.

@JobLeonard
Collaborator Author

Just as a side note, be careful when you advertise a 2x increase.

Right, I should have been clearer: when I talked about 2x/3x speed-ups I was only referring to the micro-benchmarks on (effectively) string vs integer lookup, which is known to be a relatively recent optimisation in browser engines. It doesn't say anything about the whole algorithm, or about performance across all browsers.

OTOH, removing string concatenation (context_wc = context_w + context_c) from the core algorithm is probably a bigger deal for older browsers, since engines didn't really optimise internal string representation until a few years back, so the reduced memory overhead should matter more there.

At the moment I'm still trying to decipher what you do in the algorithm - specifically lines 137 - 204. It's barely commented, so I only have your high-level description from the main page to go by:

  • I initialize the dictionary with three tokens:
    • An entry that produces a 16-bit token.
    • An entry that produces an 8-bit token, because most of what I will store is in the iso-latin-1 space, meaning tokens below 256.
    • An entry that marks the end of the stream.
  • The output is processed by a bit stream that stores effectively 16 bits per character in the output string.
  • Each token is stored with just as many bits as are needed given the current size of the dictionary. Hence, the first token takes 2 bits, the second to 7th take 3 bits, etc.

(once it clicks for me I'm going to add a modified version of that description as documentation comments in the code, for later maintainers)
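In the meantime, a toy version of the bit stream described in those bullet points, to have something concrete to refer to (names and details are illustrative, not the actual _compress internals):

function BitStream(bitsPerChar) {
  this.bitsPerChar = bitsPerChar; // 16 for the plain compress output
  this.val = 0;                   // bits accumulated so far
  this.position = 0;              // how many bits are currently in `val`
  this.out = [];                  // finished output characters
}
// Write `token` using exactly `numBits` bits, lowest bit first; every full
// group of bitsPerChar bits becomes one output character. A real
// implementation also flushes the last partial character and widens
// numBits as the dictionary grows.
BitStream.prototype.write = function (token, numBits) {
  for (var i = 0; i < numBits; i++) {
    this.val = (this.val << 1) | (token & 1);
    token >>= 1;
    if (++this.position === this.bitsPerChar) {
      this.out.push(String.fromCharCode(this.val));
      this.val = 0;
      this.position = 0;
    }
  }
};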

@JobLeonard
Collaborator Author

At least the bugs are funny:

[screenshot]

@JobLeonard
Collaborator Author

So I figured out a simple way to do in-browser line-by-line performance testing: inline lz-string.js into SpecRunner.html, open the latter, then use the browser dev tools to measure performance across x refreshes. Important: the numbers in the screenshots should only be compared relative to the other numbers in the same screenshot, because I'm not refreshing exactly the same number of times!

First bottleneck I found: -1 and -2 indexing is slow after all:

[screenshot]

Then I realised "Hey, why not just use a +2 offset on charCode for the lookup?"

[screenshot]

Again: I didn't refresh the same number of times, so you can't compare screenshots directly. Now, new_node[0] = dictSize++ might look like it's still pretty high, but I think that's the creation of the bucket for the hashtable, which explains why the line after is so fast in comparison.
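So the node layout ends up being roughly this (an illustrative sketch): slot 0 holds the node's own prefix code, and the child for charCode c lives at c + 2, keeping every key a small non-negative integer:

function newTrieNode(code) {
  var node = {};
  node[0] = code;            // the prefix code that used to live under key -1
  return node;
}

function childFor(node, charCode) {
  return node[charCode + 2]; // undefined means "prefix + this character" is unknown
}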

So then I tried the latest version vs the old version on JSBench, with the profiler on to measure memory use, using a tripled 'During tattooing...' test string from the tests. Can you spot the point in the graph where it goes from old to new?

[screenshot]

Anyway, moving on to the _decompress function next, and once that is done, I'll check whether we can simplify the code (I think going from a flat dictionary to a trie might lead to mismatches in "code quirkiness").

@JobLeonard
Collaborator Author

JobLeonard commented Jul 1, 2017

So just to add to pieroxy's remark that this is complicated, I did some in-between benchmarking on Chrome and Firefox. I'm using all the test strings except Hello World, so:

  • tattoo description (real world text)
  • 1000 random floating point numbers (long string with some repetition)
  • 'aaaaabaaaaacaaaaadaaaaaeaaaaa' (short string with lots of repetition)
  • 'aaaaabaaaaacaaaaadaaaaaeaaaaa' times 1024 (a long string with lots of repetition)
  • all printable UTF16 (represents a worst case: long string, no repetition)

http://jsbench.github.io/#e25c445d987295b0114407e457dde9ad

EDIT: Split out the short string case, because it was distorting the graph by being one to two orders of magnitude faster than the rest
http://jsbench.github.io/#4a2ca3e9e3e9ee544f5b76acc7699ef8

On Chrome, the picture is complicated: the new code is slower in quite a few situations, but for long strings with some repetition (1000 random floats) or lots of repetition ('aaa..') it's faster.

On Firefox the picture is a lot rosier, not to mention generally a lot more performant: it's always better, and gets better as strings have more repetition and get longer.

The UTF16 example is actually really revealing, since it is mostly about the overhead of filling up a dictionary.

I really hope I can optimise it further so it has either zero regressions in Chrome, or only in insignificant contexts (like really short strings).

[screenshots]

EDIT: For mobile, the improvement is a lot clearer:

Chrome Mobile for Android, everything is faster:
[screenshots]

(I'm using an autostitcher to put these screenshots together, hence some of the weirdness)

On Firefox Mobile for Android, short strings are slower, everything else is faster:
[screenshot]

Yes, this is pre- and post-fix increment/decrement abuse. But we're being low level enough here that I think we can handle it.
gloryknight and others added 3 commits July 11, 2018 00:15
Replace StringStream object with global variables. Optimize array generation. About 5% smaller minified file. About twice as fast when compressing large files.
@gloryknight

Indeed, as I have suggested. Minified version is much faster (at least in Chrome). Well done.

@franciscop

This progress is amazing! I'm using lz-string (actually lznext, which is just a thin wrapper for import) and the only small disadvantage of lz-string is that it's a bit slow. I've added a ~100kb limit for compressing text (around 60ms) in my library brownies to avoid blocking the main thread:

export const pack = str => {
  // ...

  // Compress it only for relatively small strings, since compression is O(N)
  //   so it takes too long for large strings
  if (str.length < 100 * 1000) {
    str = lz + LZString.compressToUTF16(str);
  }
  return str;
};

@JobLeonard
Collaborator Author

@franciscop: for the record, I wrote an async version - it's slower but non-blocking

@franciscop

That's pretty cool. Unfortunately that would imply a full rewrite of my library and changing the API radically, since this would no longer be possible:

import { local } from 'brownies';

local.id = 10;
console.log(local.id);

I'm using Proxy internally for that setter, and a setter cannot be awaited... (I could set a local cache and sync in the background, but that's a totally different can of worms).

@paultman

Any idea when this is going to be merged to master and a 2.0 release made available?
I'm currently using localStorage and would like to move to localForage, and maybe optimize the string values using this compression as well. I know it would help with localStorage, and I imagine it should be the same with the localForage wrapper.

@anonyco

anonyco commented Jul 5, 2019

Use ASM.JS integer optimizations to further increase speed by a few percentage points. Both Chrome and Firefox have considerations for integer optimizations. Observe the benchmark below.

requestIdleCallback(function(){
    for (var i=0; i<16777216; i=i+1|0) void 0; // warm up

    const console = window.console;

    function runWithoutSpeed(){
        console.time("SlowerVersion");
        for (var i=-4194304,j=4194304; i < j; i+=2, j-=2) void (i + j);
        console.timeEnd("SlowerVersion");
    }
    function runOptimizedTest() {
        console.time("OptimizedTest");
        for (var i=-4194304,j=4194304; i < j; i=i+2|0, j=j-2|0) void (i+j|0);
        console.timeEnd("OptimizedTest");
    }
    runOptimizedTest();
    runWithoutSpeed();
}, {timeout: 11});

To do integer optimizations, you must explicitly cast the number to an integer after every pass via function arguments, every addition/subtraction chain, every division. Also, avoid multiplication wherever possible. Use bit-shifts instead.
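Applying those rules to the kind of copy loop used in compressToUint8Array would look something like this (a sketch, not the PR's code): every arithmetic result is pinned back to an integer with |0, and the multiplication becomes a shift:

function copyToUint8(codes, buf) {
  for (var i = 0, len = codes.length | 0; (i | 0) < (len | 0); i = i + 1 | 0) {
    var value = codes[i] | 0;
    buf[(i << 1) | 0] = (value >>> 8) | 0;  // high byte
    buf[((i << 1) + 1) | 0] = value & 255;  // low byte (instead of % 256)
  }
  return buf;
}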

@Rycochet
Collaborator

Rycochet commented Jul 6, 2019

Output in my Chrome -

OptimizedTest: 2.701171875ms
SlowerVersion: 3.35595703125ms

@anonyco

anonyco commented Jul 7, 2019

Output in my Chrome -

OptimizedTest: 4.52978515625ms
SlowerVersion: 5.72216796875ms

Output in my Firefox:

OptimizedTest: 2ms
SlowerVersion: 3ms

Indeed, integer optimizations do make Javascript faster.

@JobLeonard
Collaborator Author

That's not really asm.js, but integer optimization (or SMI optimization if you're on 32-bit V8). Sadly, "proper" asm.js is a bit more involved to write. It's really hard to do right, and not really practical for our case: we'd need to set up our own heap, do our own memory management, and wait for the asm.js compiler to do its thing (which is pretty slow). The start-up time might be worse than the actual performance boost, and it is very likely that we'd end up making a mistake somewhere and the whole thing would silently turn back into regular JS.

I actually was trying to clean up the PR (locally, on my machine) a month ago, and finally got around to implementing an unsafe variant that made use of typed arrays to see if that was even faster. It was:

[screenshot]

However, in the process I discovered another bug in the new version, and I haven't managed to fix it in the meantime.

@anonyco

anonyco commented Jul 7, 2019

"Proper" asm.js code is impossible to write for these purposes because we are working with strings which are foreign to asm.js. 👍 Nevertheless, integer optimizations do exist. I am very glad to hear that the Uint32Array version is much faster, but I know that it could be a lot faster if we put integer optimizations into both the loops and variables surrounding the Uint32Arrays. Although integer optimizations have rather little consequence on loops, their effect is much more pronounced on typed arrays. For example, var val=int32ArrayInst[0] + 1; is horribly slow because the browser has to put in extra checks to ensure that int32ArrayInst is a typed array and checks to widen val's type to a double if int32ArrayInst[0] is over 2147483646. However, we (the programmers) know both that int32ArrayInst is a typed array and that 2147483646 will never be approached, so we can apply integer optimizations into var val=(int32ArrayInst[0]|0) + 1|0; and make the code run much faster. This sounds crazy and messed up, but that's Javascript for you.

If you could post your LZStringUnsafe32, I could take a look at it for bugs and optimizations. Please explain the bug that you are having, and at least give me a chance to take you on a tour of the dark side of JavaScript numbers.

@jthoward64

I take it that this PR isn't getting merged in anytime soon, but is the code in JobLeonard/lz-string safe to download and use directly? Or should I stick with the version on npm/main branch here?

@JobLeonard
Collaborator Author

Yeah, sorry, the truth is that I'm just terrified of breaking something because I never published a package to NPM before and this is a package that still gets used in tons of places.

But that isn't really about the code. In that regard it passes all existing tests, should be perfectly backwards compatible, and Stylus uses it in their code-base (or at least used to, I don't know if they still do) and afaik it hasn't given them any trouble.

@jthoward64

Yeah, sorry, the truth is that I'm just terrified of breaking something because I never published a package to NPM before and this is a package that still gets used in tons of places.

But that isn't really about the code. In that regard it passes all existing tests, should be perfectly backwards compatible, and Stylus uses it in their code-base (or at least used to, I don't know if they still do) and afaik it hasn't given them any trouble.

Maybe set it as version 2, and make it clear that it has the potential to be a breaking change? Or just publish it under @JobLeonard/lz-string?

@rquadling
Collaborator

Releasing it as a major version upgrade should be enough to let everyone know that there MAY be BC breaks. Semantic versioning and all that.

@rquadling
Collaborator

If everyone is happy for this to be released then I can do that as v2.0.0

@rquadling
Collaborator

With regard to the minification process... this doesn't seem to be documented anywhere. Should we not be doing that as part of the release process? Normally, accepting minified files without ensuring they are built from the uncompressed ones is a bad idea.

@JobLeonard
Collaborator Author

I think I just threw it through a minifier, but yeah, that probably should be standardized somehow

@rquadling
Collaborator

There is a PR on here that relates to more pipeline work. Ideally someone with JS skills should come and look at it. I'm not that person. I work as part of the team that uses this library.

@pieroxy
Owner

pieroxy commented May 16, 2022

Back in the day I threw the main JS file through uglifyjs by hand before doing the release. I agree it's not ideal and should be standardized.

@Rycochet
Collaborator

I feel that the release process should be a different PR from this one - also that this should be merged, then followed by an almost immediate conversion to TypeScript in another PR (a straight conversion, no code changes - so not updating #123) - using perhaps https://www.npmjs.com/package/microbundle (I used to use TSDX, but that's been abandoned for a couple of years) - then a further PR to add CI (Travis or GitHub Actions?) for test, build and release :-)

@Kikobeats

There is a lot of good work in this PR!

Merging this and releasing a v2 should be enough to not break anything 🙂
