
I have 2 files compiled by django-pipeline along with s3boto: master.css and master.js. They are set to "Public" in my buckets. However, when I access them, sometimes master.css is served, sometimes it errs with SignatureDoesNotMatch. The same with master.js. This doesn't happen on Chrome. What could I be missing?

EDIT: It now happens on Chrome too.

yretuta

5 Answers


Happened to me too... Took a few hours to find, but I figured it out eventually. Turns out that if the right signature is:

ssCNsAOxLf5vA80ldAI3M0CU2%2Bw=

Then AWS will NOT accept:

ssCNsAOxLf5vA80ldAI3M0CU2+w=

Where the only difference is the translation of %2B to '+'.
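To see the mismatch concretely, here is a minimal Python 3 sketch (the answers here predate Python 3; `urllib.parse` replaces the old `urllib`/`urlparse` modules):

```python
from urllib.parse import unquote

# The signed URL carries the signature percent-encoded...
signed = "ssCNsAOxLf5vA80ldAI3M0CU2%2Bw="

# ...but a stray unquote() turns %2B into a literal '+', which S3
# then interprets as an encoded space -> SignatureDoesNotMatch.
print(unquote(signed))  # ssCNsAOxLf5vA80ldAI3M0CU2+w=
```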

S3BotoStorage actually yields it correctly, but the re-encoding happens in CachedFilesMixin, in the final line of its url method (return unquote(final_url)). To fix it, I derived a new CachedFilesMixin to undo the "damage". (I should mention that I don't know why this unquote exists in the first place, so undoing it might cause other problems.)

import urllib
import urlparse

class MyCachedFilesMixin(CachedFilesMixin):
    def url(self, *a, **kw):
        s = super(MyCachedFilesMixin, self).url(*a, **kw)
        if isinstance(s, unicode):
            s = s.encode('utf-8', 'ignore')
        # Re-quote the path and query string so characters such as '+'
        # go back to their percent-encoded form (%2B)
        scheme, netloc, path, qs, anchor = urlparse.urlsplit(s)
        path = urllib.quote(path, '/%')
        qs = urllib.quote_plus(qs, ':&=')
        return urlparse.urlunsplit((scheme, netloc, path, qs, anchor))

Where I used the code I found here.

Hope this helps...
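For reference, the same re-quoting idea as a standalone Python 3 helper (a sketch; `requote_url` is an illustrative name, not part of Django or boto):

```python
from urllib.parse import quote, quote_plus, urlsplit, urlunsplit

def requote_url(s):
    # Split the URL, then re-quote the path and query string so a raw
    # '+' in the signature becomes %2B again before the URL is emitted.
    scheme, netloc, path, qs, anchor = urlsplit(s)
    path = quote(path, safe='/%')
    qs = quote_plus(qs, safe=':&=')
    return urlunsplit((scheme, netloc, path, qs, anchor))

print(requote_url("https://bucket.s3.amazonaws.com/master.css?Signature=abc+w="))
# https://bucket.s3.amazonaws.com/master.css?Signature=abc%2Bw=
```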

idanzalz
  • I'll try applying it, but can you verify this modification against this issue: http://stackoverflow.com/questions/12006894/amazon-s3-python-s3boto-403-forbidden-when-signature-has-sign – yretuta Sep 04 '12 at 22:27
  • it worked. However, as linked above, there may or may not be an issue with boto, and I certainly would not like to apply this patch every time I want to use boto for a project. Could you look at the issue and see what you come up with? Thanks! – yretuta Sep 05 '12 at 02:09
  • the issue with boto is that it's not supposed to generate signatures with "spaces" and encode it as "+" signs. – yretuta Sep 05 '12 at 02:13
  • I don't think the problem is in boto, it seems to yield a correct signature. the problem is in django's CachedFilesMixin that converts %2B to '+'. I have a pull request in Django to remove the unquote call in the end of CachedFilesMixin.url – idanzalz Sep 05 '12 at 08:36
  • @idanzalz care to link to the pull request? I'd quite like to know if this makes it in. – Stuart Axon Sep 16 '13 at 10:37
  • unfortunately the pull request was not accepted. see here: https://code.djangoproject.com/ticket/18929 – idanzalz Sep 23 '13 at 22:08

I had a similar issue causing SignatureDoesNotMatch errors when downloading files using an S3 signed URL and the python requests HTTP library.

My problem ended up being a bad content-type. The documentation at AWS on Authenticating REST Requests helped me figure it out, and has examples in Python.
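The reason a wrong Content-Type causes this: with AWS signature version 2, Content-Type is part of the string-to-sign, so the header you actually send must match it byte for byte. A minimal sketch (illustrative values, not real credentials; `sign_v2` is a hypothetical helper name):

```python
import base64
import hmac
from hashlib import sha1

def sign_v2(secret, method, content_md5, content_type, date, resource):
    # AWS v2 string-to-sign: Content-Type is baked into the signature,
    # so sending a different Content-Type header invalidates it.
    string_to_sign = "\n".join([method, content_md5, content_type, date, resource])
    digest = hmac.new(secret.encode(), string_to_sign.encode(), sha1).digest()
    return base64.b64encode(digest).decode()

a = sign_v2("SECRET", "PUT", "", "image/jpeg", "Thu, 17 Nov 2005 18:49:58 GMT", "/bucket/key")
b = sign_v2("SECRET", "PUT", "", "application/octet-stream", "Thu, 17 Nov 2005 18:49:58 GMT", "/bucket/key")
print(a != b)  # True: a different Content-Type yields a different signature
```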

cbare
    I was sending an `InMemoryUploadedFile` file and forgot to set the content-type. Everything is working fine once I've correctly set the content-type! Thanks for the tip. – dulaccc Oct 14 '14 at 10:15

I was struggling with this for a while, and I didn't like the idea of messing with CachedFilesMixin (it seemed like overkill to me).

Until a proper fix lands in Django, I've found that quoting the signature twice is a good option. I know it's not pretty, but it works and it's simple.

So you'll just have to do something like this:

# Quote twice: the later unquote in CachedFilesMixin.url strips one level,
# leaving the single level of encoding that S3 expects
signature = urllib.quote_plus(signature.strip())
signature = urllib.quote_plus(signature.strip())

Hope it helps!
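Why quoting twice works, as a Python 3 sketch (the answer above uses the Python 2 `urllib` module): the final `unquote` in CachedFilesMixin strips exactly one level of percent-encoding, so pre-applying two levels leaves the one level S3 expects.

```python
from urllib.parse import quote_plus, unquote

sig = "ssCNsAOxLf5vA80ldAI3M0CU2+w="
once = quote_plus(sig)      # '+' -> %2B, '=' -> %3D
twice = quote_plus(once)    # '%' -> %25 (second level of encoding)

# One unquote (what CachedFilesMixin does) undoes exactly one level:
print(unquote(twice) == once)  # True
```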


This Heroku article on uploading to S3 from Flask is a good resource for getting your signatures right: https://devcenter.heroku.com/articles/s3-upload-python

import base64
import hmac
import json
import os
import time
import urllib
from hashlib import sha1 as sha

from flask import request

@app.route('/sign_s3/')
def sign_s3():
    AWS_ACCESS_KEY = os.environ.get('AWS_ACCESS_KEY_ID')
    AWS_SECRET_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
    S3_BUCKET = os.environ.get('S3_BUCKET')

    object_name = request.args.get('s3_object_name')
    mime_type = request.args.get('s3_object_type')

    # Signature is only valid for 10 seconds
    expires = int(time.time() + 10)
    amz_headers = "x-amz-acl:public-read"

    # Canonical string-to-sign for a v2-signed PUT
    put_request = "PUT\n\n%s\n%d\n%s\n/%s/%s" % (mime_type, expires, amz_headers, S3_BUCKET, object_name)

    signature = base64.encodestring(hmac.new(AWS_SECRET_KEY, put_request, sha).digest())
    # quote_plus keeps '+' in the signature percent-encoded in the URL
    signature = urllib.quote_plus(signature.strip())

    url = 'https://%s.s3.amazonaws.com/%s' % (S3_BUCKET, object_name)

    return json.dumps({
        'signed_request': '%s?AWSAccessKeyId=%s&Expires=%d&Signature=%s' % (url, AWS_ACCESS_KEY, expires, signature),
        'url': url
    })
Philip Nuzhnyy

A simple workaround for me was to generate a new key with only alphanumeric characters (i.e. no special characters such as "/" or "+", which AWS sometimes includes in keys).
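A quick check for whether a key is affected (an illustrative snippet, not an AWS API; the key shown is a made-up example):

```python
import string

def has_url_unsafe_chars(key):
    # Flags keys containing '/', '+', or anything else outside [A-Za-z0-9]
    safe = set(string.ascii_letters + string.digits)
    return any(c not in safe for c in key)

print(has_url_unsafe_chars("wJalrXUtnFEMI/K7MDENG+bPxRfiCY"))  # True -> consider rotating
```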