.cache() doesn't take header changes into account #73

Open
JamesMcMahon opened this issue Jun 19, 2015 · 5 comments

Comments

@JamesMcMahon
Contributor

When changing headers on objects, the files are not republished if they are passed through cache(). For example, changing the Cache-Control header on an object results in the object being marked as "skipped" during the publish() call.

I might be doing something wrong, but this seems like an omission in the current behavior. Thoughts?
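For reference, here is a minimal sketch of the kind of pipeline that hits this, loosely following the gulp-awspublish README; the bucket name and header values are placeholders, and the create() options may differ between plugin versions:

var gulp = require('gulp');
var awspublish = require('gulp-awspublish');

// Placeholder bucket; adjust the create() options to your gulp-awspublish version.
var publisher = awspublish.create({ params: { Bucket: 'my-bucket' } });

gulp.task('publish', function () {
  // Changing only this header does not trigger a republish: the cache/ETag
  // comparison sees identical file contents and reports the file as "skipped".
  var headers = { 'Cache-Control': 'max-age=60, public' };

  return gulp.src('./public/**/*')
    .pipe(publisher.publish(headers))
    .pipe(publisher.cache())
    .pipe(awspublish.reporter());
});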

@pgherveou
Owner

Yes, you are right. For now you will have to flush the cache file. We could create a hash of the headers and append it to the filename to make it more bulletproof. If you have some time, feel free to hack on it and send a PR.
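As a stopgap, flushing the cache can be scripted like this; it assumes the default cache file sits in the project root under a .awspublish-&lt;bucket&gt; name (double-check what your version actually writes, the name here is a placeholder):

var fs = require('fs');
var gulp = require('gulp');

gulp.task('clear-publish-cache', function (done) {
  // Placeholder name; the cache file your setup writes may differ.
  var cacheFile = '.awspublish-my-bucket';
  fs.unlink(cacheFile, function (err) {
    // A missing cache file is fine; anything else is a real error.
    if (err && err.code !== 'ENOENT') return done(err);
    done();
  });
});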

@JamesMcMahon
Contributor Author

Thanks for the answer. If I get a chance I will take a look; for now I will just disable cache() in my personal project to avoid this issue.

In the meantime, consider this an open feature request.

@TheCloudlessSky

👍 I ran into this today. It appears not to work regardless of whether cache() is used, since the S3 ETag and the local ETag are compared, and if they are equal the file's state is set to "skip". My only workaround is to use the force option.
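For anyone else hitting this, the workaround looks roughly like the sketch below; the publisher setup and headers are placeholders, and force is the publish() option that re-uploads everything:

var gulp = require('gulp');
var awspublish = require('gulp-awspublish');

var publisher = awspublish.create({ params: { Bucket: 'my-bucket' } }); // placeholder bucket
var headers = { 'Cache-Control': 'max-age=60, public' };

gulp.src('./public/**/*')
  // force re-uploads every file, so header changes always reach S3,
  // at the cost of skipping the ETag comparison entirely.
  .pipe(publisher.publish(headers, { force: true }))
  .pipe(awspublish.reporter());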

From a quick look, it seems the following needs to be done:

@pgherveou Can you confirm this? I may be able to hack a PR together... but my experience with gulp plugins is limited 😄.

@pgherveou
Owner

Yes, I think you got it all. You could add an option on the cache stream that specifies which headers to check (or just a boolean, depending on your needs).

The cache file could look like this, and you would compare all of its keys with the file in the stream:

{
  "<file.s3.path>": {
    "etag": "<file.s3.etag>",
    "header1": "header1 value",
    "header2": "header2 value"
  }
}
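Purely as an illustration (none of this is existing plugin code), the skip check against such a cache file could look like:

// Hypothetical helper: "cache" is the parsed cache file shaped like the JSON
// above, "file" is a vinyl file whose s3 metadata (path, etag, headers) was
// set by publish(), and "headersToCheck" is the option mentioned above.
function shouldSkip(cache, file, headersToCheck) {
  var entry = cache[file.s3.path];
  if (!entry || entry.etag !== file.s3.etag) return false;

  // Only skip when every tracked header still matches its cached value.
  return headersToCheck.every(function (name) {
    return entry[name] === file.s3.headers[name];
  });
}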

@yvele

yvele commented Apr 27, 2016

Hi, why don't you hash all of the headers into a single cache key?

I ran into the same issue and, because I needed a more modular approach, I created https://github.com/yvele/poosh. It works nicely, and you may be able to look at it for ideas to fix this issue.

The Poosh cache file stores 3 separate hash keys for each processed local file:

  • File content hash key (MD5 + format suffix)
  • HTTP headers hash key (CRC32)
  • Remote options hash key (CRC32)

Each cache file line looks like this:

{
  "id":"url",
  "c":"c5766b5257229169dfd571088341a817GZ", // content + gzip suffix
  "h":"645dac9a", // HTTP headers
  "r":"68227929" // specific remote options
}

This is the Poosh function I use to hash headers and remote options: https://github.com/yvele/poosh/blob/master/packages/poosh-core/src/helpers/options/hash.js

  • The hash is computed regardless of key order.
  • The hash ignores undefined and null values.
  • The hash is NOT computed deeply.
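Roughly, such a hash can be sketched like this (the real Poosh helper is linked above; MD5 stands in for CRC32 here so the sketch only needs Node's built-in crypto module):

var crypto = require('crypto');

function hashOptions(obj) {
  var normalized = Object.keys(obj)
    .sort() // key order does not affect the hash
    .filter(function (key) {
      return obj[key] !== undefined && obj[key] !== null; // drop empty values
    })
    .map(function (key) {
      return key + '=' + String(obj[key]); // shallow: nested objects are not traversed
    })
    .join('&');

  return crypto.createHash('md5').update(normalized).digest('hex');
}

// e.g. hashOptions({ "cache-control": "max-age=60", "content-type": "text/html" })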
