SSL cert failure on buckets with a dot (.) #437
Please try with github.com/mdomsch/s3cmd. Assuming it still fails, please try with --ca-certs=/dev/null and see if that helps. Thanks.
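For reference, the suggested invocation looks roughly like this (the bucket name here is just an example):

    s3cmd --ca-certs=/dev/null ls s3://buck.et/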
I installed a VM with Arch Linux and am trying. I see the failure, and it …
I merged my bug/426 branch into upstream master now. I have one more patch which I think fixes your problem, at … Thanks.
http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html Ugh. So, maybe we have to also disable hostname checking if a bucket contains a '.' and we are using SSL. Automatically...
This is merged into the upstream master branch now. Please pull and try with that.
I made this to test: https://aur.archlinux.org/packages/s3cmd-git. It seems to work. Thanks!
Any chance you'll release a new version with this fixed?
Releases are few and far between right now. I recommend using the master branch.
Requiring hand installation is seriously limiting your userbase. It's a lot easier to convince downstreams to release a new packaged version if there is a new upstream release. What prevents making a release? Certainly not known bugs - every release of every application ever has shipped with known bugs :)
The big thing right now, since signature v4 is in (so Frankfurt works), is …
I'll stop telling you your business once I click "Comment", but neither of those sounds like a showstopper to a release (in fact, at this point it's virtually impossible to install Python-2.4 on a system, so IMHO that's not even a worthwhile project).
I'd like to echo the comments from @djmitche about how a release would be much appreciated. If there have been breaking changes on master, is there any possibility of a point release of 1.0.x please? It'd be significantly helpful and appreciated. Thanks!
Ah, I've just seen that the current fix is conditionally disabling SSL hostname checking (i.e. effectively reverting to earlier Python behaviour) - I can totally see why you wouldn't want to backport that.
The problem is, there isn't a better option. The wildcard cert that AWS S3 … Another option would be to somehow look up the appropriate S3 endpoint …
Could we invent a syntax for users to supply that form? Then, rather …
To be clear, we're not disabling SSL, we're disabling hostname validation.
Sorry, yes, I misstated that.
Thanks @yardenac for packaging that up.
Hi: If I understand correctly, someone who has a cert for evil.com will be able to take over any connection your product makes to S3 buckets. I think this is a major security flaw. The "right fix" would be to special-case the handling of only the relevant S3 domain names.
This is in fact exactly what it does. The hostname being connected to must be in s3.amazonaws.com (the default). So the attacker has to both compromise the target's DNS lookup method, and be able to respond as if they were s3.amazonaws.com.

If the Python SSL hostname validation code would treat bucket.example.com.s3.amazonaws.com as matching *.s3.amazonaws.com, then we would not need to disable this test. However, that would then violate RFC 6125, section 6.4.3, point 2. But this is exactly the format that Amazon expects the tools to use, and they like bucket names to be formatted like domain names (it helps provide universal uniqueness of names, and makes their CNAME resolution trivial when serving a bucket as a website).
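For reference, here is a minimal sketch of the wildcard rule in question, using the (since-deprecated) ssl.match_hostname helper; the certificate dict is a hypothetical, trimmed-down version of what ssl.SSLSocket.getpeercert() returns:

    import ssl

    # Hypothetical, trimmed-down cert dict in the format getpeercert() returns;
    # the real AWS certificate carries more fields.
    cert = {'subjectAltName': (('DNS', '*.s3.amazonaws.com'),
                               ('DNS', 's3.amazonaws.com'))}

    # A bucket without dots occupies a single DNS label, so the wildcard matches:
    ssl.match_hostname(cert, 'mybucket.s3.amazonaws.com')   # passes

    # A dotted bucket spans several labels; per RFC 6125 section 6.4.3 the
    # wildcard must not match, so this raises ssl.CertificateError:
    ssl.match_hostname(cert, 'buck.et.s3.amazonaws.com')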
No, it disables hostname verification entirely when connecting to S3. This means anyone else can pretend to be S3, if I understand what the code is doing correctly, and offer any cert they have lying around. You should check the presented cert is in fact a wildcard cert with the offending domain name.
By the time the connection returns to me, the check has already been …
Yeah, I think we were just talking past each other about what's going on. It seems that the ssl library in Python will parse and return the cert the other side provided, so implementing a special case for S3 hostnames doesn't look that bad. https://docs.python.org/2/library/ssl.html#ssl.SSLSocket.getpeercert seems to be the key function involved.
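A rough sketch of that kind of special case (not s3cmd's actual code, and the helper name is illustrative): verify the chain as usual, skip the built-in hostname check, then accept the connection only if the certificate presented really is Amazon's *.s3.amazonaws.com wildcard cert.

    import socket
    import ssl

    def looks_like_amazon_s3_cert(host, port=443):
        ctx = ssl.create_default_context()    # chain verification stays on
        ctx.check_hostname = False            # we check the hostname ourselves below
        sock = socket.create_connection((host, port))
        try:
            ssock = ctx.wrap_socket(sock, server_hostname=host)
            cert = ssock.getpeercert()         # parsed dict; chain already verified
            ssock.close()
        finally:
            sock.close()
        dns_names = [v for (k, v) in cert.get('subjectAltName', ()) if k == 'DNS']
        return (host.endswith('.s3.amazonaws.com')
                and '*.s3.amazonaws.com' in dns_names)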
urllib3 would allow you to validate the certificate against a different hostname (as soon as urllib3/urllib3#526 is fixed).
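In urllib3 that idea can be expressed roughly as below; assert_hostname is a real urllib3 parameter, while the bucket name and CA bundle path are only illustrative:

    import urllib3

    # Connect to the dotted-bucket hostname, but validate the certificate
    # against the endpoint name that Amazon's wildcard cert actually covers.
    pool = urllib3.HTTPSConnectionPool(
        'buck.et.s3.amazonaws.com',
        cert_reqs='CERT_REQUIRED',
        ca_certs='/etc/ssl/certs/ca-certificates.crt',  # path varies by distro
        assert_hostname='s3.amazonaws.com',
    )
    resp = pool.request('GET', '/')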
#458 has what I believe is a sane fix. Thanks.
Hi, recently I updated python to 2.7.5 and now I am experiencing this issue (using s3cmd release 1.6.0). Is there something I can do to rectify this? Any help would be greatly appreciated!
Not sure, @timcreatewell, but I haven't had problems since I started using a ~/.s3cfg like below, with check_ssl_certificate=False. Also try setting host_bucket the same as host_base -- I think this forces s3cmd to use "bucket in path" rather than "bucket in Host:" style access.
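The config that originally followed this comment isn't preserved here; a minimal sketch of that kind of ~/.s3cfg, with placeholder credentials, might look like the lines below. Setting host_bucket equal to host_base is what forces the bucket-in-path style, and check_ssl_certificate = False is purely a workaround that skips certificate verification.

    [default]
    access_key = YOUR_ACCESS_KEY
    secret_key = YOUR_SECRET_KEY
    use_https = True
    host_base = s3.amazonaws.com
    host_bucket = s3.amazonaws.com
    check_ssl_certificate = False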
Thanks @jamshid, will give that a go.
Try the master branch please. There was a fix a few weeks ago needed due to a …
Thanks @mdomsch!
Original report: Since updating Arch Linux, s3cmd fails to connect to any bucket with a dot (.) in its name. This is because Python 2.7.9 validates SSL certs by default. It exposes a general problem with Amazon's wildcard cert. Note the certificate failure by visiting anything like this in your browser: https://buck.et.s3.amazonaws.com/
The solution may be to access things using this endpoint instead: https://s3.amazonaws.com/buck.et/