Update page about setting up a private Stratum 1 #157
Conversation
PR for changing the Stratum 0 URL to the sync server: EESSI/filesystem-layer#177.
This should be more or less ready now, except that one step of our
docs/filesystem_layer/stratum1.md
Outdated
The EESSI project provides a number of geographically distributed public Stratum 1 servers that you can use to make EESSI available on your machine(s). If you want to be better protected against network outages and increase the bandwidth between your cluster nodes and the Stratum 1 servers, you could consider setting up a local (private) Stratum 1 server that replicates the EESSI CVMFS repository.
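To make this concrete, below is a minimal sketch of what creating such a replica involves by hand; the EESSI Ansible playbooks automate these steps, and the repository name, key directory, and sync server URL here are assumptions based on this PR's context.

```bash
# Minimal sketch of creating and synchronizing a Stratum 1 replica by
# hand; the EESSI playbooks automate this. Repository name, key
# directory, and sync server URL are assumptions.
sudo cvmfs_server add-replica -o $USER \
    http://aws-eu-west-s1-sync.eessi.science/cvmfs/software.eessi.io \
    /etc/cvmfs/keys/eessi.io/
# Pull in the repository contents (rerun periodically, e.g. via cron):
sudo cvmfs_server snapshot software.eessi.io
```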
Should this say one or multiple (private) Stratum 1(s)? I don't remember what our recommendation was in the best practices training; I think it was about 1 per 500 clients or so. Or was the recommendation one Stratum 1 + one proxy per 500 clients?
Actually, maybe add that after this sentence, or even at the end of the paragraph: "For large systems, consider setting up multiple Stratum 1 servers. Approximately one Stratum 1 per 500 clients is recommended."
That was about proxies, indeed. I could still say multiple here, but I think the advantage of having multiple is really minimal.
You could have a callout at the end to mention these kinds of points; it doesn't need to be here already.
You may even want to know how to "upgrade" your own S1 to an S0 so you can sync within your network.
I don't think there is much benefit to having multiple private Stratum 1 servers, but multiple proxies are a good idea.
a4851b0 adds a sentence with a recommendation to at least have local proxies (a bit outside the scope of this page, but probably good to mention here as well). I left the recommendation for a Stratum 1 unchanged, as I don't see much value in having more than one either.
Agreed, good point by @rptaylor that multiple proxies are sufficient: the only reason to have a Stratum 1 is to be resilient against a network outage, and for that purpose one is enough.
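For illustration, the client-side proxy setting this thread refers to might look like the hypothetical sketch below (hostnames are placeholders); in `CVMFS_HTTP_PROXY`, `|` separates load-balanced alternatives and `;` separates failover groups.

```bash
# Hypothetical snippet for /etc/cvmfs/default.local on a client: two
# load-balanced local proxies, with a direct connection as last-resort
# fallback. Proxy hostnames are placeholders.
CVMFS_HTTP_PROXY="http://proxy1.example.org:3128|http://proxy2.example.org:3128;DIRECT"
```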
docs/filesystem_layer/stratum1.md
Outdated
In all cases this was due to an intrusion prevention system scanning the associated network, and hence scanning all files going in or out of the Stratum 1. Though it was a false-positive in all cases, this breaks the synchronization procedure of your Stratum 1. If this is the case, you can try switching to HTTPS by using `https://aws-eu-west-s1-sync.eessi.science` for synchronizing your Stratum 1. Even though there is no advantage for CVMFS itself in using HTTPS (it has built-in mechasnims for ensuring the integrity of the data),
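As a hedged illustration of the switch described above, assuming the standard CVMFS replica layout and the `software.eessi.io` repository:

```bash
# Sketch: point the replica's upstream at the HTTPS sync server by
# updating CVMFS_STRATUM0 in the repository's server.conf, then resync.
# Path and repository name follow standard CVMFS conventions (assumed).
sudo sed -i 's|^CVMFS_STRATUM0=.*|CVMFS_STRATUM0=https://aws-eu-west-s1-sync.eessi.science/cvmfs/software.eessi.io|' \
    /etc/cvmfs/repositories.d/software.eessi.io/server.conf
sudo cvmfs_server snapshot software.eessi.io
```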
Why aren't we always recommending running with HTTPS? Is there a downside? I'd say there might be a small speed penalty due to the encryption/decryption. Maybe mention that this is why plain HTTP is the default.
That's indeed the main downside, as far as I know. Added a sentence about it in f830adb.
Not only is there no advantage; it should be noted that this is a disadvantage, because it makes caching in forward proxies impossible (unless, hypothetically, you distribute the private TLS keys of the stratum servers to all the squids so they can do the TLS termination). I would not recommend it.
There is also a typo: mechasnims
docs/filesystem_layer/stratum1.md
Outdated
You can put your license key in the local configuration file `inventory/local_site_specific_vars.yml`.

!!! note "Squid reverse proxy"
    The Stratum 1 playbook also installs and configures a Squid reverse proxy on the server. The template configuration file for Squid can be found at `templates/eessi_stratum1_squid.conf.j2`.
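As a sketch of the license-key step: the variable name `cvmfs_geo_license_key` is an assumption based on the playbook's conventions, and the value is a placeholder for your own MaxMind license key.

```bash
# Hypothetical: add the Geo API license key to the local site-specific
# vars file; the variable name is an assumption, the value a placeholder.
echo 'cvmfs_geo_license_key: "YOUR_MAXMIND_LICENSE_KEY"' >> inventory/local_site_specific_vars.yml
```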
Wait, is that needed? What's the point of running a Squid next to a Stratum 1 on the same machine? The clients might as well connect directly to the Stratum 1 then, no? (Probably my limited knowledge, but this might be something to explain here as well.)
Good point. Caching of data is actually not needed, since the data is already on the same disk. I think the main use case of this Squid is then to cache Geo API lookups, but since we recommend disabling that on private Stratum 1s, this doesn't make sense. Let me check if I can easily introduce an option for this in the playbook, so that it won't set up Squid by default unless specifically requested (which we can then do in our playbooks for the public servers).
Actually it does do some in-memory caching as well:

```
cache_mem 128 MB
# CERN config examples use 128 KB for both local proxies and stratum 1, but
# data objects are larger than this
maximum_object_size_in_memory 4 MB
```

That could perhaps still be somewhat beneficial...
I agree with @casparvl. You could just leave the memory to the OS to help cache data for httpd. IIRC Dave recommends using a reverse proxy only for some monitoring capability that OSG uses. A priori I wouldn't expect much performance benefit; if anything, it could introduce a small latency.
With the new version of our playbooks the Squid installation has been made optional, and it's disabled by default. So, to avoid any confusion, I've removed this part in 7c6742c, as I don't think it should be part of this documentation.
In order to fix this (and some other issues), I've forked the repo and made the required changes. Now we just need to merge EESSI/filesystem-layer#179 to resolve the issue.
…efault in the playbook
As discussed, we should probably add the client-config part to https://www.eessi.io/docs/getting_access/native_installation/ (i.e. just setting `CVMFS_SERVER_URL`), under a separate header for HPC systems / larger deployments. That can then link to this page on how to set up the Stratum 1.
We might also need a page on how to set up a proxy, but we can do that in a separate PR.
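A hypothetical sketch of that client-config part (the Stratum 1 hostname is a placeholder; `@fqrn@` is the standard CVMFS placeholder for the repository name):

```bash
# Hypothetical /etc/cvmfs/domain.d/eessi.io.local on a client: try the
# private Stratum 1 first, then fall back to the public EESSI servers
# already listed in the base configuration.
CVMFS_SERVER_URL="http://your-stratum1.example.org/cvmfs/@fqrn@;$CVMFS_SERVER_URL"
```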
@@ -69,7 +69,49 @@ The good news is that all of this only requires a handful commands :astonished:
This is good enough for an individual client, or for testing purposes,
I would:
- Remove this whole note.
- Create a subtitle right below "Native installation" with `## Installation for single clients`.
- Then, at the end of the current docs, make a new subtitle `## Installation for HPC (or otherwise large) systems`. Under that header, first explain that such systems are advised to use (multiple) Squid(s) for load balancing (~1 per 500 nodes) and a full Stratum 1 for redundancy. Say that the docs here will explain how to configure the client for that, and refer to the other pages (e.g. ../filesystem_layer/stratum1.md) to learn how to set up a Stratum 1 or Squid.
- Then, all of your current level 2 headers can be level 3 headers under this `## Installation for HPC...` section.
Done in 9e97d4f.
I've followed these docs myself to set up a private Stratum 1 in a VM, and a client in another VM that connected to it. This was successful; the docs are clear to me :)
Still sort of WIP, as I need to test some commands, and need to update the filesystem-layer repo (it should use the sync server as Stratum 0 there).