discussion: redo 1200s https etc #50
Comments
The URLs aren't 3DES-specific; I'd move them and use the 2nd for the MITM introduction (/* 1204:), while the 3rd could go as a general intro (/*** 1200:) where the Drop wikipedia link is now. |
IMO the wiki-link is the only link that illustrates why we disable 3des, with only a few sentences. Thank you @Atavic |
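For context, the 1261 entry being discussed comes down to a single pref. A minimal sketch of what it would look like (the pref name is, to the best of my knowledge, Firefox's toggle for the 3DES-with-RSA cipher suite; the comment text is purely illustrative, not the project's final wording):

```javascript
/* 1261: disable the 3DES cipher suite (illustrative comment only) */
user_pref("security.ssl3.rsa_des_ede3_sha", false);
```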
I think we can just add a note to the link that |
I definitely agree with grouping the ciphers together. But for the rest - I'm also not exactly an expert - I don't know which ones make sense to group together. One group I could see is mixed-content.
1261+1262+1263 are all still enabled by default, so those would all need to be commented out to achieve |
sure, why not. The comic with the wrench comes to mind :)
I'm not exactly happy with the defaults but I understand why mozilla does it.
unless you changed something since your proposal - no.
HSTS and HPKP (no pref for HPKP, correct?) could/should go together because they're also stored in the same file at the moment. (it would only have 1 pref in that section atm, right? - but regardless, that grouping makes sense IMO) |
yes, that's the problem. They are a mixed-content "improvement". They don't fit under HSTS, because when you don't block mixed content those 2 are irrelevant - see what I mean?
I don't see any problems with the following groups, but the rest should probably remain un-sub-grouped, because
I think it would be best if we do these kinds of discussions in a pull request. We can do 100 commits in there if necessary and as soon as we agreed on something merge that one commit into the PR, etc. |
/*** 2698: FIRST PARTY ISOLATION (FPI) ***/ has only a resolved bugzilla reference about HPKP. |
What's the benefit in disabling this? Priming is an extra check which will increase the number of sites that are discovered to be HSTS-enabled. This seems like a clear security win, and is the reason why we are experimenting with this idea. |
So, as for the changes that @Atavic suggested - do we want to include something like this for the first link: We can remove the 3rd link in 1261, because it's very old, quite a mess, hard to understand, doesn't really add anything, and half the page is (OBSOLETE). The 1st and 2nd link should stay under 3DES however. Also, you didn't want to blame mozilla for dragging their feet anymore, but it's still there.
No, it also applies to other things AFAIK, e.g. disabling session-tickets. Maybe @fmarier knows more about this. |
With all due respect, if that's really the case then that's a terrible implementation. This is why we decided to enforce HSTS Priming off. Edit: it's entirely possible that my assessment was completely off, but it sounded more like a help for websites, with considerable downsides for end-users - not like a security feature. |
...kept it short, but https://www.owasp.org/index.php/Testing_for_weak_Cryptography should really be: and (Mozilla keeps some old cipher suites until they are expired) should really be: (Mozilla keeps some old cipher suites until the expiration date of the CAs containing them is reached) |
@Atavic, wait, you thought the Spain (.es) link was too technical but now you link us to this 10x longer article with nmap outputs and shit like that? How is that not waaaaay worse? |
@crssi, please don't be mad at me but I'm deleting your comments and my reply, to keep this as clean and relevant as possible. I'll also delete this very comment again as soon as you give me a thumbs-up to let me know that you're not mad at me :) |
What I was describing is not a new behavior. Whether you use an old browser or a new version of Firefox with that setting disabled, it's the same thing: the mixed content blocker kicks in to block a load that would be done over HTTPS.
The goal of the mixed content blocker (MCB) is to block HTTP requests from HTTPS pages. It's not there to reduce data usage or block trackers; that's the role of ublock, adblockplus, request policy, tracking protection, etc. The pre-flight request it adds (to a host it would already need to connect to if it displayed the content) is over HTTPS, so again, it's not violating what it's supposed to do. It's merely trying to make the MCB more useful, which could help us make it more aggressive in the future (for example, blocking passive mixed content by default). |
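The core rule described above can be sketched in a few lines (illustrative only; the real logic lives in Firefox's C++ and also distinguishes active from passive content):

```javascript
// Minimal sketch of the MCB's basic decision, per the description above.
// The priming pre-flight is an https→https request, so it passes this rule.
function mcbDecision(pageScheme, resourceScheme) {
  if (pageScheme === "https" && resourceScheme === "http") {
    return "block"; // HTTP subresource on an HTTPS page: mixed content
  }
  return "allow"; // anything else is not the MCB's concern
}

console.log(mcbDecision("https", "http"));  // "block"
console.log(mcbDecision("https", "https")); // "allow"
```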
I primarily use Firefox Nightly where they are both enabled by default :D |
What? Why does MCB block a load over HTTPS? The goal of MCB is to block HTTP requests, not HTTPS ones, as you said yourself in your next sentence. Please elaborate. Or just a typo?
so please elaborate either way.
I never said that. I'd appreciate if you could quote from my assessment and tell me where I'm wrong.
I disagree. If I block all mixed content, why would I want additional requests to detect if an HTTP resource is also available over HTTPS? For convenience only, right? To perhaps unblock a picture here or there that would otherwise be blocked. And that's exactly what I said in my assessment. Ok, maybe it's not violating what it's supposed to do, but it works around what it really is supposed to do - "it changes the order", to quote from the bugzilla and the patch author. And IMO it'll take years before mozilla will dare to enable passive mixed-content blocking by default, but I also like surprises, so... keep at it ;)
Exactly. It's a convenience feature.
I just really hope you keep the prefs available to disable HSTS priming. |
we originally added it with the following note, and if anything is incorrect about it in your opinion, please elaborate
@earthlng The long OWASP URL explains a lot in a better way than the page about Bro. |
Take this example:
That's active mixed content and it's blocked by the MCB because if you were to disable MCB, the browser would do an HTTP load from an HTTPS page and that's what the MCB is supposed to block. Now, let's look at this different example:
We still have something that looks like active mixed content and so the MCB kicks in and blocks the script load. However, if you were to turn off MCB, what you would get is
Therefore, MCB blocks an HTTPS to HTTPS request just because it runs before HSTS. The idea of the |
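The ordering problem described above can be sketched like this (illustrative only, not Firefox internals; host names are made up):

```javascript
// The MCB inspects the raw scheme of a subresource URL BEFORE any HSTS
// rewriting, so an http:// URL on an HSTS-enabled host gets blocked even
// though, without the MCB, HSTS would have loaded it over https://.
const hstsHosts = new Set(["hsts-enabled.example"]); // hosts with an HSTS entry

function withoutMcb(url) {
  const u = new URL(url);
  if (u.protocol === "http:" && hstsHosts.has(u.hostname)) {
    u.protocol = "https:"; // HSTS upgrade
  }
  return "loaded via " + u.protocol;
}

function withMcb(url) {
  const u = new URL(url);
  if (u.protocol === "http:") return "blocked"; // MCB runs before the upgrade
  return withoutMcb(u.href);
}

console.log(withoutMcb("http://hsts-enabled.example/a.js")); // "loaded via https:"
console.log(withMcb("http://hsts-enabled.example/a.js"));    // "blocked"
```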
Ok, thanks, I understand what you mean now.
I wouldn't say it's a bug. MCB works exactly as it's supposed to work IMHO, i.e. see http → discard. One could say you're encouraging lazy web-admins to never fix their sites if you make things work when they really shouldn't - and you're weakening MCB to do so. And while I totally agree that HEAD requests are way better (because faster), they are also fairly rare and
as I so eloquently put it :) edit: if I had anything to say in the matter - please don't "fix" it in the MCB directly, and instead keep the HSTS Priming prefs available and disable-able |
In the absence of priming,
Non-preloaded URLs are much less likely to be in the HSTS cache in that case because the HSTS cache is not shared across domains. |
Thanks, final questions if you don't mind ... Do you understand now why we disable it, and do my reasons make even the slightest of sense to you? |
@fmarier one way to make MCB better and something I wouldn't mind seeing "fixed" directly in the MCB itself, would be to upgrade HTTP to HTTPS for resources on domains that already have a HSTS entry in the cache or the preload list. Perhaps only if the domain has the |
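A minimal sketch of that "upgrade if already known" idea (hypothetical names and data structures, not a real Firefox API):

```javascript
// Rewrite http:// to https:// before the MCB check, but only for hosts
// already known to be HSTS-enabled (cache entry or preload list), so no
// extra network requests are needed.
const hstsPreload = new Set(["preloaded.example"]); // hypothetical preload list
const hstsCache = new Set(["cached.example"]);      // hypothetical cached entries

function maybeUpgrade(url) {
  const u = new URL(url);
  if (u.protocol === "http:" &&
      (hstsPreload.has(u.hostname) || hstsCache.has(u.hostname))) {
    u.protocol = "https:"; // known HSTS host: the MCB never sees http://
  }
  return u.href;
}

console.log(maybeUpgrade("http://preloaded.example/img.png")); // upgraded to https
console.log(maybeUpgrade("http://unknown.example/img.png"));   // left unchanged
```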
No, I'd leave it |
I'm certainly not an expert on TLS, but in my experience, OCSP (the non-stapled kind) can be quite annoying when OCSP servers are down or when they are blocked by a captive portal. So I personally just use the defaults there ( |
As I understand it, this is what
That's the part I don't understand. I'm not sure why these requests are seen as undesirable. |
@fmarier, thanks for your reply.
Yes, absolutely - I somehow assumed they only work in tandem. I've set up a testpage here. In FF51 with
The default settings in FF51 were pretty stupid. It sent priming requests but never used them.
Ok that's new to me. I thought it only sends the priming request when there's already a HSTS entry.
I mostly don't like it because of what I expect MCB to do. It should block HTTP resources and not try to help me by sending countless priming requests to all kinds of domains, regardless of whether they even support HTTPS. The new feature is a nice addition, but you got me slightly worried when you said you would like to "fix" it in the MCB directly - meaning there wouldn't be a way to disable the priming requests. Another thing I absolutely don't like is that the Network DevTools never shows the request for the 1st picture when |
@Thorin-Oakenpants: based on my previous comment we should change the |
That sounds like a bug. I would encourage you to file it :) |
Lol, I'm still waiting for my last bug-report to get some love, but yeah I'll test if it's already fixed in nightly and create a ticket if necessary. edit: reported it. Thanks for your encouragement :) |
No, activate both prefs and set:

/* 1242: allow Mixed-Content-Blocker to use the HSTS cache but disable the HSTS Priming requests (FF51+)
 * Allow resources from domains with an existing HSTS cache record or in the HSTS preload list to be
 * upgraded to HTTPS internally, but disable sending out HSTS Priming requests, because those may cause
 * noticeable delays (e.g. requests time out or are not handled well by servers) and there are possible
 * fingerprinting issues
 * [NOTE] if you want to use the priming requests, make sure 'use_hsts' is true as well
 * [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1246540#c145 ***/
user_pref("security.mixed_content.use_hsts", true);
user_pref("security.mixed_content.send_hsts_priming", false);

or we could split them up as 1242 + 1242b, idk, what do you think? |
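For clarity, a hedged summary of how the two prefs appear to interact, pieced together from this thread (not official documentation):

```javascript
// The four combinations of the two FF51+ prefs, as described in this thread.
function primingMode(useHsts, sendPriming) {
  if (useHsts && sendPriming)  return "probe unknown hosts and upgrade using the results";
  if (useHsts && !sendPriming) return "upgrade from existing HSTS cache/preload entries only";
  if (!useHsts && sendPriming) return "send priming requests but never use them";
  return "plain MCB, no HSTS awareness";
}

console.log(primingMode(false, true)); // the FF51 default behavior described above
console.log(primingMode(true, false)); // the proposed 1242 setting
```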
If you witness any of these things, a bug should be filed. The priming requests are meant to be unnoticeable so if they cause any issues, that's clearly a bug in the implementation.
Can you explain who would be doing the fingerprinting and how they would do it? I don't see it. |
Seems like a half-used feature: either use the full HSTS options or change both entries to false. |
@fmarier, sorry for the late response
You already had to decrease the timeout from, I think, 10 sec to 3 sec. Delays are not necessarily an implementation bug, e.g. sites that use lots of images from different, very slow servers. Also, if sites support HTTPS but not HSTS (e.g. the ghacks image in my testpage), there's the overhead of establishing the secure connection (right? - or is that not done with HEAD requests?). Your own test-server didn't handle HEAD requests all too well. Firewalls could drop requests to 443, resulting in the full 3 sec timeout. I could probably go on, but I think you see my point. Unless mozilla plans to sponsor faster servers in some of those cases, there's really nothing more you can do to improve the situation.
Users A don't block anything and request all resources (X) from a server.
Add the 24h cache to that and the server could even know if you closed FF in the meantime and/or deleted siteSettings. Clearly those groups of users are easily discernible, wouldn't you say? Isn't that what's considered one or more bits of fingerprinting? This could probably all be done from a single domain with a number of subdomains - so "who would be doing the fingerprinting"? Anyone who wants to. You're directly at the source. You could ask Arthur or whoever would be more knowledgeable about those things. If you do that, please let me know what they told you. Edit: let me add a big fat AFAIK and IMO here, because that's all it is |
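The partitioning concern described above can be illustrated from the server's point of view (a hypothetical sketch of the concern, not an actual attack implementation):

```javascript
// A server embeds several HTTP probe subresources on an HTTPS page. Which
// requests (loads or priming probes) actually arrive partitions visitors
// into distinguishable groups - roughly one bit per probe.
function visitorSignature(probesRequested, allProbes) {
  // "1" if the server saw a request for this probe, "0" otherwise
  return allProbes.map(p => (probesRequested.has(p) ? "1" : "0")).join("");
}

const allProbes = ["a.example", "b.example", "c.example"];
console.log(visitorSignature(new Set(["a.example", "c.example"]), allProbes)); // "101"
console.log(visitorSignature(new Set(), allProbes));                           // "000"
```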
We block most or all unsolicited requests with this user.js so our current settings are perfectly fine. You can set both prefs to either true or false in your own user.js |
In case you were wondering about these redo section issues (there are more coming):
Section 1200 needs some love. I know @earthlng has some ideas for this section. I definitely want ciphers grouped together in say a 1250s. This is all about the order, numbering, wording. Don't want to get into discussions on the merits of turning things on and off or technical discussions on cipher suites and how curves are better than squares xD.