This repository has been archived by the owner on Apr 26, 2024. It is now read-only.

Setup extended Complement Testing #14202

Closed
wants to merge 19 commits into develop from setup-extended-complement

Conversation

@realtyem (Contributor) commented on Oct 17, 2022

Pull Request Checklist

  • Pull request is based on the develop branch
  • Pull request includes a changelog file. The entry should:
    • Be a short description of your change which makes sense to users. "Fixed a bug that prevented receiving messages from other servers." instead of "Moved X method from EventStore to EventWorkerStore.".
    • Use markdown where necessary, mostly for code blocks.
    • End with either a period (.) or an exclamation mark (!).
    • Start with a capital letter.
    • Feel free to credit yourself, by adding a sentence "Contributed by @github_username." or "Contributed by [Your Name]." to the end of the entry.
  • Pull request includes a sign off
  • Code style is correct
    (run the linters)

...for more varied testing of workers by type.

In order to fully make use of testing workers in the future, pre-set up the workflow for testing (a rough sketch of what these could look like follows the list):

  • single workers at a time
  • sharded workers in sets of 3
  • all stream writers as a collective group
  • a full-blown everything setup with a complete line-up
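
Just to sketch the idea (this is not code from this PR): one way those four line-ups could be written down and fed to the Complement worker image. The worker-type names and the use of the SYNAPSE_WORKER_TYPES environment variable here are illustrative assumptions and may not match what the workflow actually ends up doing.

```python
# Sketch only: candidate line-ups as comma-separated worker-type strings,
# of the kind the Complement worker image consumes via SYNAPSE_WORKER_TYPES.
# Worker-type names below are illustrative; the real list lives in
# docker/configure_workers_and_start.py.
WORKER_LINEUPS = {
    # a single worker of one type at a time
    "single": "federation_sender",
    # a sharded worker type, three instances of it
    "sharded": "event_persister,event_persister,event_persister",
    # all of the stream writers together as one group
    "stream_writers": "account_data,presence,receipts,to_device,typing",
    # the full-blown everything setup (only a few types shown here)
    "full": "synchrotron,client_reader,federation_reader,federation_sender,"
            "event_persister,background_worker,pusher",
}

# e.g. a CI job for the sharded case could then export
# SYNAPSE_WORKER_TYPES=WORKER_LINEUPS["sharded"]
```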

That last one we have to be careful with. Postgres is required for a worker setup at this time, and by default it has a connection limit of 100. Based on prior reading in closed (I think) issues, it appears that the number of workers times config.database.args.cp_max cannot exceed that 100. This limit can be adjusted in the database setup, at the time the Complement docker image is built, to allow more connections.

The default of 10 for cp_max means at most 9 workers (the master has to be included) for a grand total of 10 processes, and in practice you'll want to fudge an extra 10% for QoS, so realistically 8 workers at most. The default set of test workers is 14, and since the database doesn't get a stress test, each process probably only holds config.database.args.cp_min connections (which defaults to 5), which works out to about 20 processes (including the master) against the 100-connection limit. So, theoretically, 19 extra workers before things start acting wonky/janky and Twisted starts to barf on the floor.
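
To make that arithmetic explicit, here is the back-of-the-envelope version, just a sketch using the defaults mentioned above and the same ~10% fudge factor:

```python
# Connection budget sketch, using the defaults discussed above.
MAX_CONNECTIONS = 100  # Postgres default connection limit
CP_MAX = 10            # Synapse default for database.args.cp_max
CP_MIN = 5             # Synapse default for database.args.cp_min
HEADROOM = 0.10        # leave roughly 10% of connections spare

# Worst case: every process opens its full cp_max pool.
processes_at_cp_max = int(MAX_CONNECTIONS * (1 - HEADROOM)) // CP_MAX
workers_at_cp_max = processes_at_cp_max - 1  # minus the master -> 8 workers

# Idle case (no stress test): each process only holds cp_min connections.
processes_at_cp_min = MAX_CONNECTIONS // CP_MIN
workers_at_cp_min = processes_at_cp_min - 1  # minus the master -> 19 workers
```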

Other limitations to consider: GitHub CI gives each runner 2 vCPUs and 7 GB of RAM. My homeserver can use 14 GB of RAM without blinking, and I use 30 workers with a global cache factor of 40 and only 2 users, so that is just at startup and idle. When I tried to test my line-up of workers, GitHub caught fire (not really, but the test failed/timed out with no logs).

Keeping in mind that there are tests which override all of this (like here and here), for the most part this will work as expected.

An example of what this looks like from my repo: https://github.com/realtyem/synapse-unraid/actions/runs/3262641493
Signed-off-by: Jason Little [email protected]

@realtyem (Contributor, Author) commented:

Would you like something like #13981 added too? For dependencies and such?

@realtyem realtyem closed this Nov 1, 2022
@realtyem realtyem deleted the setup-extended-complement branch November 8, 2022 23:14