
discussion: redo 1200s https etc #50

Closed
Thorin-Oakenpants opened this issue Mar 11, 2017 · 38 comments


Thorin-Oakenpants commented Mar 11, 2017

In case you were wondering about these redo section issues (there are more coming):

  • the user.js has been one person's vision of how to structure it, order it, word it, and where to put items, with some feedback in a WordPress forum; over two years of adding items (usually at the end, although I did try to leave gaps) I also did some revamps over the releases
  • GitHub affords us the chance to have really decent discussions (and arguments) in a proper collaborative environment, in order to whip this puppy into shape and to overhaul all the inconsistencies, logic, wording, and more. The sky's the limit. Synergy and all that stuff.
  • I want to revamp these sections (we don't need to do all of them) before I do a 52 release; that way, after the 52 release, the majority of changes will be minimal
  • and after the revamps, Martin will do an article, and people who come and fork will then have less upheaval, fewer commits, etc.
  • basically, this is two years' worth of everyone making my mess better :)

Section 1200 needs some love. I know @earthlng has some ideas for this section. I definitely want ciphers grouped together, say in a 1250s block. This is all about order, numbering, and wording. I don't want to get into discussions on the merits of turning things on and off, or technical discussions on cipher suites and how curves are better than squares xD.
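For illustration only, a grouped ciphers sub-section could look something like this. The 1250s numbering is hypothetical; the prefs are real Firefox cipher prefs from that era, but which ones to flip is exactly what would be up for discussion:

```js
/** [1250s]: CIPHERS (hypothetical numbering - sketch only) ***/
/* 1251: disable 3DES (effective key size < 128) ***/
user_pref("security.ssl3.rsa_des_ede3_sha", false);
/* 1252: disable RSA key-exchange AES ciphers (no forward secrecy) ***/
user_pref("security.ssl3.rsa_aes_128_sha", false);
user_pref("security.ssl3.rsa_aes_256_sha", false);
```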


Atavic commented Mar 12, 2017

/* 1213: disable 3DES (effective key size < 128)
* https://en.wikipedia.org/wiki/3des#Security
* http://en.citizendium.org/wiki/Meet-in-the-middle_attack
* http://www-archive.mozilla.org/projects/security/pki/nss/ssl/fips-ssl-ciphersuites.html ***/

The URLs aren't 3DES-specific; I'd move them and use the 2nd for a MITM introduction (/* 1204:), while the 3rd could go as a general intro (/*** 1200:), where now there is
https://www.securityartwork.es/2017/02/02/tls-client-fingerprinting-with-bro/
which is really technical and maybe a let-down as a 1st link.

Drop wikipedia.

@earthlng

/* 1213: disable 3DES (effective key size < 128)
* https://en.wikipedia.org/wiki/3des#Security
* http://en.citizendium.org/wiki/Meet-in-the-middle_attack
* http://www-archive.mozilla.org/projects/security/pki/nss/ssl/fips-ssl-ciphersuites.html ***

The URLs aren't 3DES-specific; I'd move them and use the 2nd for a MITM introduction (/* 1204:), while the 3rd could go as a general intro (/*** 1200:), where now there is
https://www.securityartwork.es/2017/02/02/tls-client-fingerprinting-with-bro/
which is really technical and maybe a let-down as a 1st link.

Drop wikipedia.

IMO the wiki-link is the only link that illustrates, in only a few sentences, why we disable 3DES.
The 3rd link is IMO way more confusing and hard to understand, and we can drop that one.
And the second is Meet-in-the-middle_attack, not Man-in-the-middle. That link should definitely stay, but we'll see where it fits best.
I agree the securityartwork.es article is quite technical once it gets to the "Enter Bro" part, but you can stop reading at that point anyway. Still, your point is taken - I'll create a wiki with quotes and the images from that article and we can link to our wiki instead. (Once done, I'll ask the author of the article if he minds that we quote him and use his images for our wiki.)

Thank you @Atavic


earthlng commented Mar 12, 2017

I'll create a wiki with quotes

I think we can just add a note to the link that the article is quite technical but the first part is easy to understand and you can stop reading at "Enter Bro" .
It's only the first like 10 sentences that matter and illustrate the problem.

@earthlng

outline for the re-ordering

I definitely agree with grouping the ciphers together. But for the rest (I'm also not exactly an expert) I don't know which ones make sense to group together. One group I could see is mixed content.
But grouping makes things more complicated when we never really know where something fits best. So I say we keep the rest un-grouped.

deciding on reversing those 2 modern ciphers (and that would then mean we are at all the same values as a vanilla FF re ciphers)

1261+1262+1263 are all still enabled by default, so those would all need to be commented out to achieve the same values as a vanilla FF re ciphers

@earthlng

Do we flick them all to commented out (except SHA-1, the default is 3 which is local certs, so no leakage there), and move the CA fingerprinting info down to the sub-header

sure, why not. The comic with the wrench comes to mind :)

I'm actually happy with Mozilla's defaults.

I'm not exactly happy with the defaults but I understand why mozilla does it.

PS: mixed content is already grouped

Unless you changed something since your proposal - no.
All the security.mixed_content.* prefs should be grouped together, because 1220 is first and foremost a mixed-content "improvement" (hence the pref branch name).
That would leave 1252+1253+1254 as UI prefs - I'd be happy with that.

/* SSL (Secure Sockets Layer) / TLS (Transport Layer Security) ***/
/* OCSP (Online Certificate Status Protocol) ***/
/* CERTS / PINNING ***/
/* HSTS (HTTP Strict Transport Security) / HPKP (HTTP Public Key Pinning) ***/
/* MIXED CONTENT ***/
/* CIPHERS ***/
/* UI (User Interface) ***/

HSTS and HPKP (no pref for HPKP, correct?) could/should go together because they're also stored in the same file at the moment. (That section would only have 1 pref atm, right? But regardless, that grouping makes sense IMO.)
UI last, because that's the least important.

@earthlng

security.mixed_content.send_hsts_priming and security.mixed_content.use_hsts are grouped under HSTS

Yes, that's the problem: they are a mixed-content "improvement". They don't fit under HSTS, because when you don't block mixed content those 2 prefs are irrelevant, see what I mean?

But it makes it more complicated when we never really know where something fits best. So I say we keep the rest un-grouped.

I don't see any problems with the following groups but the rest should maybe best remain un-sub-grouped, because

everything is interconnected

/* MIXED CONTENT ***/ -> for the 4 mixed-content prefs
/* CIPHERS ***/
/* UI (User Interface) ***/ -> 1252+1253+1254

I think it would be best if we had these kinds of discussions in a pull request. We can do 100 commits in there if necessary and, as soon as we agree on something, merge that one commit into the PR, etc.
For example, we both agree on the Ciphers sub-group.
When everything is done, just merge the agreed-upon PR into master and finito.


Atavic commented Mar 13, 2017

no pref for HPKP, correct?

/*** 2698: FIRST PARTY ISOLATION (FPI) ***/ has only a resolved bugzilla reference about HPKP.


fmarier commented Mar 13, 2017

/* 1242: disable HSTS Priming (FF51+) ***/
user_pref("security.mixed_content.send_hsts_priming", false);
user_pref("security.mixed_content.use_hsts", false);

What's the benefit in disabling this?

With use_hsts off, the browser enforces mixed content restrictions even though they wouldn't actually lead to mixed content (given the site is already known to be HSTS-enabled). With use_hsts on, the restrictions are removed in some of the cases where the browser already knows it's not going to be mixed content.

Priming is an extra check which will increase the number of sites that are discovered to be hsts-enabled. This seems like a clear security win, and is the reason why we are experimenting with this idea.

@earthlng

So, as for the changes that @Atavic suggested: do we want to include something like this for the first link?
the article is quite technical but the 1st part is easy to understand and you can stop reading at "Enter Bro"

We can remove the 3rd link in 1261, because it's very old, quite a mess, hard to understand, doesn't really add anything, and half the page is (OBSOLETE). The 1st and 2nd links should stay under 3DES however.

You also didn't want to blame mozilla for dragging their feet anymore, but it's still there.

and move the CA fingerprinting info down to the sub-header

No - it also applies to other things AFAIK, e.g. disabling session tickets. Maybe @fmarier knows more about this.


earthlng commented Mar 13, 2017

What's the benefit in disabling HSTS Priming?

With use_hsts off, the browser enforces mixed content restrictions even though they wouldn't actually lead to mixed content

With all due respect, if that's really the case then that's a terrible implementation.
If a new feature is turned off, it should have exactly zero effect IMHO.
And btw, false is the default value for both prefs in FF52.

This is why we decided to enforce HSTS Priming off.

edit: It's entirely possible that my assessment was completely off, but it sounded more like a help for websites, with considerable downsides for end-users, than like a security feature.


Atavic commented Mar 13, 2017

/*** 1200: HTTPS ( SSL/TLS / OCSP / CERTS / HSTS / HPKP / ENCRYPTION )
     Note that cipher suites can be used server side for fingerprinting, see:
     https://www.owasp.org/index.php/Testing_for_weak_Cryptography
     You can drop weak cryptographic primitives (security) or keep them
     at default (Mozilla keeps some old cipher suites until they are expired)
 ***/

...kept it short, but

https://www.owasp.org/index.php/Testing_for_weak_Cryptography

should really be:

https://www.owasp.org/index.php/Testing_for_Weak_SSL/TLS_Ciphers,_Insufficient_Transport_Layer_Protection_(OTG-CRYPST-001)

and

(Mozilla keeps some old cipher suites until they are expired)

should really be:

(Mozilla keeps some old cipher suites until the expiration date of the CAs containing them is reached)


earthlng commented Mar 13, 2017

@Atavic, wait - you thought the Spain (.es) link was too technical, but now you link us to this 10x longer article with nmap outputs and shit like that? How is that not waaaay worse?
Also, IMO there are no weak ciphers in FF atm. Yes, some are better than others, but they aren't weak. It'll take me a while to read through that link, but if you have already done so, please point me to where they say that one or more of the current ciphers in FF are considered weak.

@earthlng

@crssi, please don't be mad at me, but I'm deleting your comments and my reply to keep this as clean and relevant as possible. I'll also delete this very comment as soon as you give me a thumbs-up to let me know that you're not mad at me :)


fmarier commented Mar 13, 2017

With all due respect, if that's really the case then that's a terrible implementation. If a new feature is turned off, it should have exactly zero effect IMHO.

What I was describing is not a new behavior. Whether you use an old browser or a new version of Firefox with that setting disabled, it's the same thing: the mixed content blocker kicks in to block a load that would be done over HTTPS.

This is why we decided to turn HSTS Priming off.

The goal of the mixed content blocker (MCB) is to block HTTP requests from HTTPS pages. It's not there to reduce data usage or block trackers. That's the role of ublock, adblockplus, request policy, tracking protection, etc.

So what use_hsts does is reduce the number of false positives. It reduces the amount of unnecessary breakage that the MCB causes while still ensuring that no HTTP connections are made from HTTPS pages.

The pre-flight request it adds (to a host it would already need to connect to if it displayed the content) is over HTTPS, so again, it's not violating what it's supposed to do. It's merely trying to make the MCB more useful, which could help us make it more aggressive in the future (for example, blocking passive mixed content by default).


fmarier commented Mar 13, 2017

you do realize that they are both false by default, right?

I primarily use Firefox Nightly where they are both enabled by default :D


earthlng commented Mar 13, 2017

the mixed content blocker kicks in to block a load that would be done over HTTPS.

What? Why does the MCB block a load over HTTPS? The goal of the MCB is to block HTTP requests, not HTTPS ones, as you said yourself in your next sentence. Please elaborate. Or is it just a typo?
If it's a typo, I still don't understand when you said

the browser enforces mixed content restrictions even though they wouldn't actually lead to mixed content

so please elaborate either way.


It's not there to reduce data usage or block trackers

I never said that. I'd appreciate it if you could quote from my assessment and tell me where I'm wrong.

it's not violating what it's supposed to do

I disagree. If I block all mixed content, why would I want additional requests to detect whether an HTTP resource is also available over HTTPS? For convenience only, right? To perhaps unblock a picture here or there that would otherwise be blocked. And that's exactly what I said in my assessment.

Ok, maybe it's not violating what it's supposed to do, but it works around what it is really supposed to do - "It changes the order", to quote the bugzilla and the patch author.
Yes, I know the requests are still done over HTTPS, but still, how is that not a convenience feature?
I see what you mean: in a very broad sense it can be considered a security feature, because it helps you move more people to blocking mixed content. But until you enable passive mixed-content blocking by default, HSTS Priming works around the MCB and is just not a security feature.

And IMO it'll take years before mozilla will dare to enable passive mixed-content blocking by default, but I also like surprises, so... keep at it ;)

It's merely trying to make the MCB more useful

Exactly. It's a convenience feature.

Mixed-content blocking may prevent some sites from moving from HTTP to HTTPS. In order to help sites opportunistically move to HTTPS, we introduce the concept of HSTS Priming.

I just really hope you keep the prefs available to disable HSTS priming.

@earthlng

We originally added it with the following note; if anything about it is incorrect in your opinion, please elaborate:

// RISKS: formerly blocked mixed-content may load, may cause noticeable delays eg requests
// time out, requests may not be handled well by servers, possible fingerprinting


Atavic commented Mar 13, 2017

@earthlng The long OWASP URL explains a lot, in a better way than the page about Bro.
And 'weak' was just a general term: as CPUs get faster, old ciphers come to be considered weak.
Call them 'problematic' if you want.


fmarier commented Mar 13, 2017

the mixed content blocker kicks in to block a load that would be done over HTTPS.

What? Why does the MCB block a load over HTTPS? The goal of the MCB is to block HTTP requests, not HTTPS ones, as you said yourself in your next sentence. Please elaborate. Or is it just a typo?
If it's a typo, I still don't understand when you said

Take this example:

  • https://example.com is an HTML page which loads http://example.net/example.js

That's active mixed content and it's blocked by the MCB because if you were to disable MCB, the browser would do an HTTP load from an HTTPS page and that's what the MCB is supposed to block.

Now, let's look at this different example:

  • https://example.com is an HTML page which loads http://example.net/example.js
  • http://example.net/* actually redirects to https://example.net/*
  • https://example.net/* sets an HSTS header

We still have something that looks like active mixed content, so the MCB kicks in and blocks the script load. However, if you were to turn off the MCB, what you would get is https://example.com loading https://example.net/example.js directly (note the httpS), not http://example.net/example.js.

Therefore, the MCB blocks an HTTPS-to-HTTPS request just because it runs before HSTS. The idea of the use_hsts = true behavior is to check for HSTS before the MCB runs, to avoid unnecessary blocking. In a way, it's kind of a bug in the MCB specification and we are hoping to fix it there.
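The two examples above boil down to an ordering question. Here is a toy sketch (not Firefox's real code; the helper names and HSTS data are made up) of how consulting the HSTS cache before or after the mixed-content decision changes the outcome for a subresource on an HTTPS page:

```javascript
// Toy model of the ordering described above (not Firefox internals).
// With use_hsts=false the MCB decides before any HSTS upgrade is considered;
// with use_hsts=true the HSTS cache is consulted first.
const hstsCache = new Set(["example.net"]); // hosts known to be HSTS-enabled

function loadSubresource(url, { useHsts }) {
  const host = new URL(url).hostname;
  if (useHsts && url.startsWith("http://") && hstsCache.has(host)) {
    // consult HSTS before the MCB: upgrade, so the MCB never sees plain HTTP
    url = "https://" + url.slice("http://".length);
  }
  if (url.startsWith("http://")) {
    return "blocked-by-MCB"; // HTTP subresource on an HTTPS page: mixed content
  }
  return "loaded:" + url;
}
```

With identical HSTS knowledge, the only difference between the two settings is whether the upgrade happens before or after the MCB's block decision.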


earthlng commented Mar 13, 2017

Ok, thanks, I understand what you mean now.
But with the MCB enabled, it would only work that way if I had previously visited https://example.net/, so that it could have set an entry in my HSTS cache, correct?
If so, how does that work when the HSTS cache is isolated thanks to the Tor Uplift project?

In a way, it's kind of a bug in the MCB specification and we are hoping to fix it there.

I wouldn't say it's a bug. The MCB works exactly as it's supposed to work IMHO, i.e. see HTTP -> discard.

One could say you're encouraging lazy web admins to never fix their sites if you make things work when they really shouldn't. And you're weakening the MCB to do so.
I know you're the expert, but this is just how I, as an end-user, see things.

And while I totally agree that HEAD requests are way better (because faster), they are also fairly rare and

stand out in most servers' logfiles like a black guy at a KKK meeting

as I so eloquently put it :)
So they add a few more bits for fingerprinting. Add the 24h cache to that, and I'm sure the TBB guys are not amused.

edit: if I had anything to say in the matter - please don't "fix" it in the MCB directly, and instead keep the HSTS Priming prefs available and disable-able


fmarier commented Mar 13, 2017

But with MCB enabled it would only work that way if I had previously visited https://example.net/ so that it could have set an entry in my HSTS cache, correct?

In the absence of priming, use_hsts relies on:

  • a previous visit to the https://example.net/ homepage (like you said),
  • a previous image load (not blocked by MCB since it's passive) from http://example.net, which got redirected by the server to https://example.net,
  • or the example.net domain being in the HSTS preload list.

If so, how does that work when the HSTS cache is isolated thanks to the Tor Uplift project?

Non-preloaded URLs are much less likely to be in the HSTS cache in that case, because the HSTS cache is not shared across first-party domains.

@earthlng

Thanks, final questions if you don't mind ...

Do you understand now why we disable it, and do my reasons make even the slightest bit of sense to you?

@earthlng

@fmarier one way to make the MCB better, and something I wouldn't mind seeing "fixed" directly in the MCB itself, would be to upgrade HTTP to HTTPS for resources on domains that already have an HSTS entry in the cache or the preload list. Perhaps only if the domain has the includeSubDomains directive, which should guarantee that the resource is available over HTTPS. That way no additional requests would be necessary, and the MCB could redirect internally to the HTTPS resource. I just really don't like the additional "priming" requests. Since includeSubDomains is mandatory for domains on the preload list, maybe you could start with that.
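The includeSubDomains idea could be sketched roughly like this (a hypothetical helper, not a real Firefox API; the preload entries are illustrative only). The decision is made purely from local HSTS knowledge, so no network request is needed:

```javascript
// Sketch: decide from local HSTS preload data alone whether an http://
// subresource could be upgraded to https:// internally. Illustrative data.
const preload = new Map([
  ["example.net", { includeSubDomains: true }], // preload entries must include subdomains
]);

function canUpgradeInternally(host) {
  if (preload.has(host)) return true;
  // walk up the label chain looking for a parent domain with includeSubDomains
  const labels = host.split(".");
  for (let i = 1; i < labels.length - 1; i++) {
    const parent = labels.slice(i).join(".");
    const entry = preload.get(parent);
    if (entry && entry.includeSubDomains) return true;
  }
  return false;
}
```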


earthlng commented Mar 17, 2017

  • Do we need ENCRYPTION in the title?
  • ... see [1]. This article ...
see [1] (It's quite technical but the first part is easy to understand
and you can stop reading when you reach the second section titled "Enter Bro")
  • Maybe make the sub-titles standing out more visually? /** instead of /* ?
    => section titles with 3 *, subs with 2 and items with 1 star
  • 1203: I think with TLS1.3 and this, it guarantees perfect Forward-Secrecy, maybe worth a comment, idk
  • 1211+1212: if 1212 (.require) "leaks information about the sites you visit to the CA (cert authority)" wouldn't 1211 also? I think that sentence makes more sense under 1211. +the follow-up sentence too, I guess
  • 1242: We disable it because formerly blocked mixed-content may load
    -> that's the "benefit", not the risk. How about: ...because of the additional priming requests, may cause ...
  • 1221: we have a test-page for that, right? add a link?
  • 1223: and is used to always use HTTPS for the domains on that list
    => and used to always load those domains over HTTPS -> sexier and avoid the "is used to use"
  • 1241: leave it this way as too many sites break visually - i'm using it and am happy with the result. Remove the "leave it this way" and word it more neutrally and objectively, e.g. It breaks many sites visually.
  • 1272: i.e doesn't work for HSTS
    => i.e doesn't work for HSTS conflicts or discrepancies (or misconfiguration) - or something like that


earthlng commented Mar 17, 2017

  • 1203: I remember the talk about TLS1.3 at the CCCcon mentioned that the session ids/tickets still allow for cracking previously stored encrypted sessions, or something along those lines. But yeah, we can ignore it I guess.
  • 1211+1212: based on the names one enables the use of it and the other requires the use of it. But if 1211 is disabled I'd assume 1212 is irrelevant. Hence the sentences should be under 1211.
    Will take a look at the code and see if that's the case.
  • 1242: is this ok? I see you want to keep the benefit, which makes sense, but I'd still slightly change it:
    Allowing HSTS Priming may load formerly blocked mixed-content, but it does so by sending additional priming requests ...
  • 1223: I swear you wrote that when we talked - that may well be true, but at that point I didn't care about wording it nicely.
  • 1272: you wrote that man - again, at that point it was good enough, but re-reading it now, I felt it could use some improving, or it could be misunderstood as meaning it doesn't work when HSTS is enabled or something. i.e doesn't work for HSTS discrepancies seems fine imo. I just didn't know if discrepancies is the right word to use here, so I listed some other ones to choose from.

@earthlng

Should probably remove the comment

No, I'd leave it


fmarier commented Mar 20, 2017

Just waiting for Francois to maybe elaborate re 1211+1212.

I'm certainly not an expert on TLS, but in my experience, OCSP (the non-stapled kind) can be quite annoying when OCSP servers are down or when they are blocked by a captive portal. So I personally just use the defaults there (enabled = 1 but required = false).


fmarier commented Mar 20, 2017

something I wouldn't mind seeing "fixed" directly in the MCB itself, would be to upgrade HTTP to HTTPS for resources on domains that already have a HSTS entry in the cache or the preload list. [...] That way no additional requests would be necessary and MCB could redirect internally to the HTTPS resource.

As I understand it, this is what use_hsts does. It uses the existing HSTS information to automatically upgrade what would otherwise be blocked, without generating extra requests. send_hsts_priming is what sends the extra requests to try and discover new HSTS domains. It sounds like you'd be happy with use_hsts = true and send_hsts_priming = false?

I just really don't like the additional "priming" requests.

That's the part I don't understand. I'm not sure why these requests are seen as undesirable.


earthlng commented Mar 20, 2017

@fmarier, thanks for your reply.

It sounds like you'd be happy with use_hsts = true and send_hsts_priming = false?

Yes, absolutely - I somehow assumed they only worked in tandem.
But I originally tested this in an older FF version, and it didn't quite work that way.

I've set up a test page here

In FF51 with use_hsts=true + send_hsts_priming=false it never loads the last picture from the domain on the preload-list. FF51 seems to only look in the cache file and ignore the preload list.

The default settings in FF51 were pretty stupid: it sent priming requests but never used them.

send_hsts_priming is what sends the extra requests to try and discover new HSTS domains.

Ok, that's new to me. I thought it only sent the priming request when there's already an HSTS entry.
So it's even worse, then.

That's the part I don't understand. I'm not sure why these requests are seen as undesirable.

I mostly don't like it because of what I expect the MCB to do. It should block HTTP resources, not try to help me by sending countless priming requests to all kinds of domains, regardless of whether they even support HTTPS.

The new feature is a nice addition but you got me slightly worried when you said you would like to "fix" it in the MCB directly - meaning there wouldn't be a way to disable the priming requests.

Another thing I absolutely don't like is that the Network DevTools never shows the request for the 1st picture when send_hsts_priming=true, but that's for another department to fix I guess.
But can you gently kick their asses for me, please ;)
(IMHO the network tab should show EVERY request, always!)


earthlng commented Mar 20, 2017

@Thorin-Oakenpants: based on my previous comment, we should change use_hsts to true.
It can still be used for fingerprinting, but blocking mixed content already allows for that anyway.
There's no other downside to using this, IMO.


fmarier commented Mar 20, 2017

Another thing I absolutely don't like is that the Network DevTools never shows the request for the 1st picture when send_hsts_priming=true.

That sounds like a bug. I would encourage you to file it :)


earthlng commented Mar 20, 2017

That sounds like a bug. I would encourage you to file it :)

Lol, I'm still waiting for my last bug report to get some love, but yeah, I'll test whether it's already fixed in Nightly and create a ticket if necessary.

edit: reported it. Thanks for your encouragement :)


earthlng commented Mar 21, 2017

No, activate both prefs and set use_hsts=true.
use_hsts is like the master switch, and send_hsts_priming is another feature on top of that, to discover new HSTS-supporting sites for otherwise blocked requests.
It's not really a master switch, because you can use send_hsts_priming=true + use_hsts=false, but that's pretty stupid because it sends the priming requests but never uses them.

/* 1242: allow Mixed-Content-Blocker to use the HSTS cache but disable the HSTS Priming requests (FF51+)
 * Allows resources from domains with an existing HSTS cache record or in the HSTS preload list to be
 * upgraded to HTTPS internally, but disables sending out HSTS Priming requests, because those may cause
 * noticeable delays, e.g. requests time out or are not handled well by servers, and there are possible
 * fingerprinting issues
 * [NOTE] if you want to use the priming requests, make sure 'use_hsts' is true as well
 * [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1246540#c145 ***/
user_pref("security.mixed_content.use_hsts", true);
user_pref("security.mixed_content.send_hsts_priming", false);

or we could split them up as 1242 + 1242b, idk, what do you think?
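If split, the two entries might read something like this (a sketch only; the wording and the 1242b number are hypothetical):

```js
/* 1242: allow the Mixed-Content-Blocker to use the HSTS cache (FF51+)
 * Resources from domains with an HSTS cache record or on the HSTS preload list
 * can be upgraded to HTTPS internally, with no extra requests ***/
user_pref("security.mixed_content.use_hsts", true);
/* 1242b: disable HSTS Priming requests (FF51+)
 * Priming requests may cause noticeable delays (e.g. time out or be mishandled
 * by servers) and are a possible fingerprinting vector ***/
user_pref("security.mixed_content.send_hsts_priming", false);
```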


fmarier commented Mar 21, 2017

because those may cause noticeable delays eg requests time out or are not handled well by servers

If you witness any of these things, a bug should be filed. The priming requests are meant to be unnoticeable so if they cause any issues, that's clearly a bug in the implementation.

and there are possible fingerprinting issues

Can you explain who would be doing the fingerprinting and how they would do it? I don't see it.


Atavic commented Mar 23, 2017

user_pref("security.mixed_content.use_hsts", true);
user_pref("security.mixed_content.send_hsts_priming", false);

Seems like a half-used feature; either use the full HSTS options or change both entries to false.


earthlng commented Mar 24, 2017

@fmarier, sorry for the late response.

because those may cause noticeable delays eg requests time out or are not handled well by servers

If you witness any of these things, a bug should be filed. The priming requests are meant to be unnoticeable so if they cause any issues, that's clearly a bug in the implementation.

You already had to decrease the timeout, from I think 10sec to 3sec. Delays are not necessarily an implementation bug, e.g. sites that use lots of images from different, very slow servers. Also, if sites support HTTPS but not HSTS (e.g. the ghacks image in my test page), there's the overhead of establishing the secure connection (right? or is that not done with HEAD requests?). Your own test server didn't handle HEAD requests all too well. Firewalls could drop requests to 443, resulting in the full 3sec timeout. I could probably go on, but I think you see my point. Unless mozilla plans to sponsor faster servers in some of those cases, there's really nothing more you can do to improve the situation.

and there are possible fingerprinting issues

Can you explain who would be doing the fingerprinting and how they would do it? I don't see it.

Users A don't block anything and request all resources (X) from a server.
Users B block mixed content (passive and/or active) and request resources X-Y.
Users C also send some HEAD requests.

Add the 24h cache to that, and the server could even know whether you closed FF in the meantime and/or deleted siteSettings.

Clearly those groups of users are easily discernible, wouldn't you say? Isn't that what's considered one or more bits of fingerprinting?

This could probably all be done from a single domain with a number of subdomains - so "who would be doing the fingerprinting" - anyone who wants to.

You're directly at the source. You could ask Arthur or whoever would be more knowledgeable about those things. If you do that please let me know what they told you.

edit: let me add a big fat AFAIK and IMO here because that's all it is
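The A/B/C grouping above could be exploited server-side with nothing more than request-log bookkeeping. A toy sketch (all names and resource lists are hypothetical, purely to illustrate the idea):

```javascript
// Toy server-side classifier for the A/B/C user groups above, based on which
// of a page's embedded resources a visitor actually fetched. Illustrative data.
const embedded = { active: ["script.js"], passive: ["image.png"] };

function classify(visitorLog) {
  const got = new Set(visitorLog.requests);
  if (visitorLog.headRequests > 0) {
    return "C"; // sends HSTS priming HEAD requests
  }
  if (embedded.active.every(r => got.has(r)) &&
      embedded.passive.every(r => got.has(r))) {
    return "A"; // blocks nothing
  }
  return "B"; // blocks some mixed content
}
```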

@earthlng

@Atavic

Seems like a half-used feature; either use the full HSTS options or change both entries to false.

We block most or all unsolicited requests with this user.js, so our current settings are perfectly fine. You can set both prefs to either true or false in your own user.js.
