SSL cert failure on buckets with a dot (.) #437

Closed
yardenac opened this issue Dec 13, 2014 · 31 comments · Fixed by #438

Comments

@yardenac

Since updating Arch Linux, s3cmd fails to connect to any bucket with a dot (.) in its name:

$ s3cmd info s3://buck.et
WARNING: Retrying failed request: /?location (hostname 'buck.et.s3.amazonaws.com' doesn't match either of '*.s3.amazonaws.com', 's3.amazonaws.com')
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /?location (hostname 'buck.et.s3.amazonaws.com' doesn't match either of '*.s3.amazonaws.com', 's3.amazonaws.com')
WARNING: Waiting 6 sec...

This is because python 2.7.9 validates SSL certs by default. It exposes a general problem with Amazon's wildcard cert. Note the certificate failure by visiting anything like this in your browser: https://buck.et.s3.amazonaws.com/

The solution may be to access things using this endpoint instead: https://s3.amazonaws.com/buck.et/
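The difference between the two addressing styles can be sketched as below; this is an illustrative helper, not s3cmd code. Virtual-hosted style puts the bucket name into the TLS hostname, so a dotted bucket breaks wildcard certificate matching; path-style keeps the hostname fixed at s3.amazonaws.com.

```python
# Illustrative sketch (not s3cmd's implementation) of the two S3 URL styles.
def virtual_hosted_url(bucket, key):
    # Hostname becomes e.g. 'buck.et.s3.amazonaws.com' -> cert mismatch
    # when the bucket name contains a dot.
    return 'https://%s.s3.amazonaws.com/%s' % (bucket, key)

def path_style_url(bucket, key):
    # Hostname stays 's3.amazonaws.com' -> the cert always matches.
    return 'https://s3.amazonaws.com/%s/%s' % (bucket, key)

print(path_style_url('buck.et', 'file.txt'))
# https://s3.amazonaws.com/buck.et/file.txt
```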

@mdomsch
Contributor

mdomsch commented Dec 13, 2014

Please try with github.com/mdomsch/s3cmd
branch bug/426-specify-SSL-CA-file. This is where we've started handling
the python 2.7.9 behavior change. I expect it'll behave similarly, but
would appreciate the confirmation.

Assuming it still fails, please try with --ca-certs=/dev/null and see if
that succeeds.

Thanks,
Matt


@mdomsch
Contributor

mdomsch commented Dec 13, 2014

I installed a VM with Arch Linux and am trying. I see the failure, and it
is indeed because of the SSL certificate check. Working on it...


@mdomsch
Contributor

mdomsch commented Dec 14, 2014

I merged my bug/426 branch into upstream master now.

I have one more patch which I think fixes your problem, in the
bug/check-certificate branch at https://github.com/mdomsch/s3cmd. It fixes
the --no-check-certificate flag so buckets like s3://bucket.example.com/
now work again. Please try with this branch and report success/failure.

Thanks,
Matt


@mdomsch
Contributor

mdomsch commented Dec 14, 2014

http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
"When using virtual hosted–style buckets with SSL, the SSL wild card certificate only matches buckets that do not contain periods. To work around this, use HTTP or write your own certificate verification logic."

Ugh. So, maybe we have to also disable hostname checking if a bucket contains a '.' and we are using SSL. Automatically...

@mdomsch
Contributor

mdomsch commented Dec 14, 2014

This is merged into the upstream master branch now. Please pull and try with that.

@yardenac
Author

I made this to test: https://aur.archlinux.org/packages/s3cmd-git

It seems to work. Thanks!

@djmitche

Any chance you'll release a new version with this fixed?

@mdomsch
Contributor

mdomsch commented Dec 24, 2014

Releases are few and far between right now. I recommend using master branch
instead. Some day I may get enough known bugs fixed to do a release, but
not anytime soon.

@djmitche

Requiring hand installation is seriously limiting your userbase. It's a lot easier to convince downstreams to release a new packaged version if there is a new upstream release.

What prevents making a release? Certainly not known bugs - every release of every application ever has shipped with known bugs :)

@mdomsch
Contributor

mdomsch commented Dec 24, 2014

The big thing right now, since signature v4 is in (so Frankfurt works), is
cleaning that up to not do 3x local disk I/O, and restoring python 2.4
compatibility. I have started both but with holidays, won't fix it in the
next couple weeks.

@djmitche

I'll stop telling you your business once I click "Comment", but neither of those sounds like a showstopper to a release (in fact, at this point it's virtually impossible to install Python-2.4 on a system, so IMHO that's not even a worthwhile project).

@doismellburning

I'd like to echo the comments from @djmitche about how a release would be much appreciated please.

If there have been breaking changes on master, is there any possibility of a point release of 1.0.x please? It'd be significantly helpful and appreciated. Thanks!

@doismellburning

Ah, I've just seen that the current fix is conditionally disabling SSL hostname checking (i.e. effectively reverting to earlier Python behaviour) - I can totally see why you wouldn't want to backport that

@mdomsch
Contributor

mdomsch commented Jan 5, 2015

The problem is, there isn't a better option. The wildcard cert that AWS S3
presents us allows for *.s3.amazonaws.com hostnames, not *.example.com or
even my.bucket.s3.amazonaws.com.

Another option would be to somehow look up the appropriate S3 endpoint
(e.g. eu-west-1.s3.amazonaws.com) for a given bucket, rewrite all
s3://mybucket/path/to/file references into
s3://eu-west-1.s3.amazonaws.com/mybucket/path/to/file, and then use that
endpoint. That fixes a couple of issues: SSL cert checking, and hitting the
region-specific endpoint rather than the generic (s3.amazonaws.com ==
us-east-1) endpoint like we currently do. The challenge with such rewrites
is that they break all non-AWS-S3 instances in use (we can't blindly rewrite
the destination host; it could very well be a Walrus, Swift, or even fakes3
instance, whose host names we don't know).
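The rewrite idea could be sketched roughly as follows. Everything here is hypothetical (the helper name and the endpoint table are illustrative, not s3cmd code), and a real version would first have to discover the bucket's region, e.g. via a GET ?location request:

```python
# Hypothetical sketch of the endpoint-rewrite approach described above.
REGION_ENDPOINTS = {
    'us-east-1': 's3.amazonaws.com',
    'eu-west-1': 's3-eu-west-1.amazonaws.com',
}

def rewrite_to_regional_path_style(bucket, key, region):
    endpoint = REGION_ENDPOINTS.get(region, 's3.amazonaws.com')
    # Path-style: the bucket moves out of the hostname, so the endpoint's
    # own certificate always matches, dots in the bucket name or not.
    return 'https://%s/%s/%s' % (endpoint, bucket, key)

print(rewrite_to_regional_path_style('mybucket', 'path/to/file', 'eu-west-1'))
# https://s3-eu-west-1.amazonaws.com/mybucket/path/to/file
```

As the comment above notes, this only works when the destination really is AWS S3; a Walrus, Swift, or fakes3 host can't be rewritten this way.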


@djmitche

djmitche commented Jan 5, 2015

Could we invent a syntax for users to supply that form? Then, rather
than disabling SSL, just fail with a pointer to the documentation for that
syntax.

@mdomsch
Contributor

mdomsch commented Jan 5, 2015

To be clear, we're not disabling SSL, we're disabling hostname validation
for SSL: we'll accept any SSL certificate and use SSL encryption, just
without validating that the hostname we used to contact the server matches
the hostname the server's certificate claims to support.
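That distinction can be shown with Python's ssl module (a minimal sketch, not the s3cmd code path): the certificate chain is still verified against the CA bundle via verify_mode; only the name comparison is switched off.

```python
import ssl

# Build a default (verifying) client context, then turn off only the
# hostname-vs-certificate comparison.
ctx = ssl.create_default_context()
ctx.check_hostname = False           # skip hostname comparison
# verify_mode stays CERT_REQUIRED: a valid CA-signed cert is still required.
print(ctx.check_hostname, ctx.verify_mode == ssl.CERT_REQUIRED)
# False True
```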


@djmitche

djmitche commented Jan 5, 2015

Sorry, yes, I misstated that.

@muff1nman

Thanks @yardenac for packaging that up.

@wbl

wbl commented Jan 15, 2015

Hi: If I understand correctly, someone who has a cert for evil.com will be able to take over any connection your product makes to s3 buckets. I think this is a major security flaw. The "right fix" would be to special-case the handling of only the relevant s3 domain names.

@mdomsch
Contributor

mdomsch commented Jan 15, 2015

This is in fact exactly what it does.
# S3's wildcard certificate doesn't work with DNS-style named buckets.
if 's3.amazonaws.com' in hostname and http_connection.context:
    http_connection.context.check_hostname = False

The hostname being connected to must be in s3.amazonaws.com (the default). So the attacker has to both compromise the target's DNS lookup method, and be able to respond as if they were s3.amazonaws.com.

If the Python SSL hostname validation code treated
bucket.example.com.s3.amazonaws.com as matching *.s3.amazonaws.com, then we
would not need to disable this check. However, that would violate RFC 6125
section 6.4.3, point 2:

    2. If the wildcard character is the only character of the left-most
       label in the presented identifier, the client SHOULD NOT compare
       against anything but the left-most label of the reference
       identifier (e.g., *.example.com would match foo.example.com but
       not bar.foo.example.com or example.com).

But this is exactly the format that Amazon expects the tools to use, and they like bucket names to be formatted like domain names (it helps provide universal uniqueness of names, and makes their CNAME resolution trivial when serving a bucket as a website).
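The RFC 6125 rule quoted above can be demonstrated with a toy matcher (illustrative only, not the stdlib's implementation): the wildcard label matches exactly one DNS label, never a dotted run of labels.

```python
# Toy sketch of the RFC 6125 left-most-label wildcard rule.
def wildcard_matches(pattern, hostname):
    p = pattern.lower().split('.')
    h = hostname.lower().split('.')
    # A '*' label matches exactly one hostname label, so the label
    # counts must be equal; remaining labels must match literally.
    if len(p) != len(h):
        return False
    first_ok = (p[0] == '*') or (p[0] == h[0])
    return first_ok and p[1:] == h[1:]

assert wildcard_matches('*.s3.amazonaws.com', 'mybucket.s3.amazonaws.com')
# A dotted bucket adds extra labels, so the wildcard no longer matches:
assert not wildcard_matches('*.s3.amazonaws.com',
                            'bucket.example.com.s3.amazonaws.com')
```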

@wbl

wbl commented Jan 15, 2015

No, it disables hostname verification entirely when connecting to S3. This means anyone else can pretend to be S3, if I understand what the code is doing correctly, and offer any cert they have lying around. You should check the presented cert is in fact a wildcard cert with the offending domain name.

@mdomsch
Contributor

mdomsch commented Jan 15, 2015

By the time the connection returns to me, the check has already been
completed in the SSL layer, and failed (if checking is enabled), or passed
(if it was disabled). There may be a way to add our own validation routine
at this point though.


@wbl

wbl commented Jan 15, 2015

Yeah, I think we were just talking past each other about what's going on. It seems that the ssl library in python will parse and return the cert the other side provided, so implementing a special case for S3 hostnames doesn't look that bad. https://docs.python.org/2/library/ssl.html#ssl.SSLSocket.getpeercert seems to be the key function involved.
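The special case suggested here might look something like the sketch below. The helper name and accepted names are assumptions, not s3cmd code; `cert` is the dict shape documented for ssl.SSLSocket.getpeercert().

```python
# Hypothetical post-handshake check: after connecting with hostname
# checking disabled, accept the peer only if its certificate is the
# expected Amazon S3 wildcard cert.
def looks_like_amazon_s3_cert(cert):
    # getpeercert() returns subjectAltName as a tuple of (field, value)
    # pairs, e.g. ('DNS', '*.s3.amazonaws.com').
    sans = [value for field, value in cert.get('subjectAltName', ())
            if field == 'DNS']
    return '*.s3.amazonaws.com' in sans or 's3.amazonaws.com' in sans

fake_cert = {'subjectAltName': (('DNS', '*.s3.amazonaws.com'),
                                ('DNS', 's3.amazonaws.com'))}
print(looks_like_amazon_s3_cert(fake_cert))
# True
```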

@t-8ch

t-8ch commented Jan 15, 2015

urllib3 would allow you to validate the certificate against a different hostname. (as soon as urllib3/urllib3#526 is fixed)

@mdomsch
Contributor

mdomsch commented Jan 15, 2015

#458 has what I believe is a sane fix
for the problem. Reviews appreciated.

Thanks,
Matt


@timcreatewell

Hi, recently I updated python to 2.7.5 and now I am experiencing this issue (using s3cmd release 1.6.0). Is there something I can do to rectify this? Any help would be greatly appreciated!

@jamshid

jamshid commented Jan 7, 2016

Not sure, @timcreatewell, but I haven't had problems since I started using a ~/.s3cfg like below, with check_ssl_certificate=False. Also try setting host_bucket the same as host_base -- I think this forces s3cmd to use "bucket in path" rather than "bucket in Host:" style access.

$ cat ~/.s3cfg
[default]
access_key = XXX
host_base = mybucket.cloud.example.com
host_bucket = mybucket.cloud.example.com
secret_key = secret
signature_v2 = True
check_ssl_certificate = False
use_https = True

@timcreatewell

Thanks @jamshid , will give that a go.

@mdomsch
Contributor

mdomsch commented Jan 7, 2016

Try master branch please. There was a fix a few weeks ago needed due to a
2nd python SSL library change in the 2.7.x series. Ugh.


@timcreatewell

Thanks @mdomsch !

@ksingh7

ksingh7 commented May 15, 2017

With check_ssl_certificate = True, the following settings work too (note host_base and host_bucket are the same). Thanks @jamshid

$ cat ~/.s3cfg
[default]
access_key = XXX
host_base = ceph-s3.ml
host_bucket = ceph-s3.ml
secret_key = secret
check_ssl_certificate = True
use_https = True
