
Updates will overwrite each other #2

Open
chasers opened this issue Feb 6, 2023 · 4 comments

Comments

@chasers

chasers commented Feb 6, 2023

With updates happening like this, doc updates will overwrite each other.

That table should be a log of updates; on connect, fetch all the updates, merge them, and go from there.

This will be a good start, but there should be another table that holds compacted updates. So on connect the client should really fetch the compacted doc plus all the latest updates from the log table, and then apply them. It would be even better if we could make a database view that did this (I'll see).
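A minimal sketch of that connect flow, assuming a supabase-js client and two hypothetical tables, compacted_docs and update_log, that store Yjs updates as byte arrays (all of those names are placeholders, not part of this repo):

import * as Y from "yjs";
import { createClient } from "@supabase/supabase-js";

// Hypothetical table and column names; adjust to the real schema.
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

async function loadDoc(docId: string): Promise<Y.Doc> {
  const doc = new Y.Doc();

  // 1. Load the latest compacted snapshot, if any.
  const { data: compacted } = await supabase
    .from("compacted_docs")
    .select("update")
    .eq("doc_id", docId)
    .maybeSingle();
  if (compacted?.update) {
    Y.applyUpdate(doc, new Uint8Array(compacted.update));
  }

  // 2. Replay every log entry written since the last compaction.
  //    Yjs updates are commutative and idempotent, so replay order is not critical.
  const { data: updates } = await supabase
    .from("update_log")
    .select("update")
    .eq("doc_id", docId)
    .order("id", { ascending: true });
  for (const row of updates ?? []) {
    Y.applyUpdate(doc, new Uint8Array(row.update));
  }

  return doc;
}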

I can try to work on some SQL to get people's DB set up, and maybe a way to do the compaction server-side.
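And a rough idea of that server-side compaction, using the same placeholder tables and client as the sketch above; Y.mergeUpdates can fold the previous snapshot and the pending log rows into a single update without instantiating a full Y.Doc:

import * as Y from "yjs";
// Reuses the supabase client and placeholder table names from the sketch above.

async function compactDoc(docId: string): Promise<void> {
  const { data: rows } = await supabase
    .from("update_log")
    .select("id, update")
    .eq("doc_id", docId)
    .order("id", { ascending: true });
  if (!rows || rows.length === 0) return;

  const { data: prev } = await supabase
    .from("compacted_docs")
    .select("update")
    .eq("doc_id", docId)
    .maybeSingle();

  // Merge the previous snapshot (if any) with all pending log entries.
  const updates = rows.map((r) => new Uint8Array(r.update));
  if (prev?.update) updates.unshift(new Uint8Array(prev.update));
  const merged = Y.mergeUpdates(updates);

  // Write the new snapshot (doc_id would need a unique constraint for upsert),
  // then drop the log rows that were folded into it.
  await supabase
    .from("compacted_docs")
    .upsert({ doc_id: docId, update: Array.from(merged) });
  await supabase
    .from("update_log")
    .delete()
    .eq("doc_id", docId)
    .lte("id", rows[rows.length - 1].id);
}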

@AlexDunmow
Owner

That is on my list to attempt next. I've been wondering about the right way to do it without being too opinionated, but I do want it to be as plug'n'play as possible.

I'm also not sure if using this would be ideal: https://supabase.com/blog/postgres-crdt

@sigma-andex

sigma-andex commented Mar 21, 2023

Hi,

I have forked your project and implemented the diff-based approach; you can find it here. My approach is to have a table diffs where I store all the diffs related to an entity. There are then basically two ways to load the data:

1. Load all the rows in the frontend.
2. Create a view that aggregates (json_agg) all diffs per entity.

I went with the second approach, because the first one requires pagination (the current max row limit is 1000) and extra error handling, which makes it more error-prone. The second approach requires the view, but IMHO that's the better trade-off. My table and view then look something like this:

CREATE TABLE "public"."diffs" (
    "id" bigserial primary key,
    "my_foreign_id" bigint,
    "diff" jsonb not null,
    "updated_at" timestamp with time zone DEFAULT "timezone"('utc'::"text", "now"()),
    "inserted_at" timestamp with time zone DEFAULT "timezone"('utc'::"text", "now"()) NOT NULL,
    CONSTRAINT "my_foreign_id_fkey" FOREIGN KEY ("my_foreign_id") REFERENCES "public"."my_foreign_table"("id") ON DELETE CASCADE
);

create view "public"."aggregated_diffs" with (security_invoker = true) as (
    select diffs.my_foreign_id,
        -- order inside the aggregate; ordering a subquery does not guarantee json_agg order
        json_agg(diffs.diff order by diffs.id desc) as diffs,
        count(diffs.diff) as num_diffs
    from diffs
    group by diffs.my_foreign_id
);
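For reference, reading that view from the client could look roughly like this. This is only a sketch: it assumes each diff value in the jsonb column is a Yjs update serialized as an array of bytes, written with the V2 encoding, and it uses a plain supabase-js client.

import * as Y from "yjs";
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

// Load the aggregated diffs for one entity and replay them into a fresh doc.
async function loadFromAggregatedDiffs(foreignId: number): Promise<Y.Doc> {
  const doc = new Y.Doc();

  const { data, error } = await supabase
    .from("aggregated_diffs")
    .select("diffs, num_diffs")
    .eq("my_foreign_id", foreignId)
    .maybeSingle();
  if (error) throw error;

  for (const diff of data?.diffs ?? []) {
    // Assumes the rows were written with the V2 update encoding.
    Y.applyUpdateV2(doc, new Uint8Array(diff));
  }
  return doc;
}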

Additionally, I have added a debouncer, so that document updates don't get persisted on every single document change, as well as the V2 methods for storing/restoring (apparently they are more compact 🤷🏼‍♂️).
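Condensed, the debounce part looks roughly like this (reusing the supabase client from the sketch above; the 1s window and the byte-array encoding of the diff column are assumptions, not the exact code in my fork):

import * as Y from "yjs";

// Buffer local updates and persist them at most once per debounce window.
function persistWithDebounce(doc: Y.Doc, foreignId: number, debounceMs = 1000) {
  let pending: Uint8Array[] = [];
  let timer: ReturnType<typeof setTimeout> | undefined;

  const flush = async () => {
    if (pending.length === 0) return;
    // Collapse everything buffered since the last flush into a single row.
    const merged = Y.mergeUpdatesV2(pending);
    pending = [];
    await supabase
      .from("diffs")
      .insert({ my_foreign_id: foreignId, diff: Array.from(merged) });
  };

  // 'updateV2' fires with the V2-encoded incremental update for every change.
  doc.on("updateV2", (update: Uint8Array) => {
    pending.push(update);
    if (timer) clearTimeout(timer);
    timer = setTimeout(flush, debounceMs);
  });
}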

If you like that approach, I'm happy to clean it up and create a PR.

PS: Thanks for the great work!

@kyeshmz

kyeshmz commented Dec 22, 2023

@sigma-andex Since this repo is unmaintained, would it be possible to make your fork available on npm?

@sigma-andex

Yes, I can do it. Give me a couple of days though.
