Lambda Layers - New Version Every Run #25647
Remove the `source_code_hash`; then a new version is only published when the package has actually changed. Note: if you remove `source_code_hash`, I expect the first apply will still publish a new version, but the second (and subsequent) applies will not, unless your loading process overwrites the existing S3 object (whether or not its contents changed). Generally I recommend getting rid of the external asynchronous process that puts the package on S3 and doing it all through Terraform. An external process can introduce all sorts of problems (e.g. updates where there are no updates, or it being unclear whether security patches have been deployed).
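The all-through-Terraform approach suggested here might look like the following sketch. This is untested, and the bucket name, paths, and runtime are illustrative, not taken from the issue:

```hcl
# Sketch: upload the layer package from the same Terraform configuration
# that creates the layer, so change detection is driven by the local
# file's hash rather than an externally-overwritten S3 object.
resource "aws_s3_object" "layer_package" {
  bucket      = "my-layer-bucket"            # hypothetical bucket
  key         = "layers/my-layer.zip"
  source      = "build/my-layer.zip"         # produced by your build step
  source_hash = filemd5("build/my-layer.zip")
}

resource "aws_lambda_layer_version" "this" {
  layer_name          = "my-layer"
  s3_bucket           = aws_s3_object.layer_package.bucket
  s3_key              = aws_s3_object.layer_package.key
  source_code_hash    = filebase64sha256("build/my-layer.zip")
  compatible_runtimes = ["python3.12"]
}
```

With this arrangement a second `terraform apply` against an unchanged `build/my-layer.zip` should report no changes, since both the object and the layer hash are derived from the same local file.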
@jornfranke If you simply remove the `source_code_hash`: I'm aware that it would be beneficial to have the process that generates the layer source code be part of the main Terraform configuration here, but that's just not possible with this particular application. The underlying method in use here does not require a source hash for comparison.
I cannot confirm the behaviour you describe. With which version did you test?
I'm testing with version 1.2.4. Your comment was:
If this is the very first apply, then sure. My whole point with this issue is that subsequent applies also report a change. Clearly I have a workaround; I'm using it in production, and it's fine. But I'd like to see this fixed properly.
Have you tested this? If you put the object on S3 via the `s3_bucket_object` resource, it all works as it should: there is no need for `source_code_hash`, and Terraform does not apply a change if the object has not changed, only when it changes. If you put the object there outside Terraform (which in any case is a very bad idea: non-reproducible pipelines, unclear what is deployed, etc.), you could try the `s3_bucket_object` data source. However, I would advise against changing deployments outside the control of the pipeline.
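The data-source suggestion above might be sketched as follows. This is an untested illustration with hypothetical bucket and key names, and it assumes the bucket has versioning enabled:

```hcl
# Sketch: when the package is uploaded outside Terraform, read the
# current object via a data source and pin the layer to its version,
# so a new layer version is created only when the object changes.
data "aws_s3_object" "layer_package" {
  bucket = "my-layer-bucket"        # hypothetical bucket
  key    = "layers/my-layer.zip"
}

resource "aws_lambda_layer_version" "this" {
  layer_name = "my-layer"
  s3_bucket  = data.aws_s3_object.layer_package.bucket
  s3_key     = data.aws_s3_object.layer_package.key
  # version_id changes each time the external process overwrites the
  # object (requires bucket versioning), forcing a new layer version.
  s3_object_version   = data.aws_s3_object.layer_package.version_id
  compatible_runtimes = ["python3.12"]
}
```

Note that this still ties deployments to an out-of-band upload process, which is what the comment above advises against.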
I feel like we're drifting off topic here. My whole point is that a new layer version is created on every run, even when nothing has changed.
I assume the line should be removed.
It has been there since the initial implementation. Please prioritize.
@justinretzolk Can you assign someone on your team to review this little PR (#32535)? Thanks!
This functionality has been released in v5.13.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. |
Community Note
Terraform CLI and Terraform AWS Provider Version
Affected Resource(s)
Terraform Configuration Files
Debug Output
Debug output can be provided upon request, but since I have no idea how to scrub all the various tokens and other sensitive info, I don't want to just broadcast it to the world.
Panic Output
None
Expected Behavior
After the first `apply`, a second `apply` should generate no changes.
Actual Behavior
Each `apply` creates a new version, reporting that the `source_code_hash` has changed, even though it has not.
Steps to Reproduce
`terraform apply`
Important Factoids
We have external processes that update the ZIP package on S3 asynchronously, and we rely on the S3 object ETag to tell us when that package has been updated. My assumption is that AWS decompresses the archive file and computes a SHA256 checksum of the resulting image/fileset, which will never match a checksum of the compressed archive itself.
Either way, I can't seem to find a way to make the checksums match. I'm hoping there's a way to store the `source_code_hash` provided to the `aws_lambda_layer_version` in the project state, and compare that each time (rather than relying on the AWS-provided checksum).
As a workaround, I've been using the S3 object ETag hash to populate the layer description, which accomplishes the desired functionality. It would be ideal, though, to be able to rely on `source_code_hash`.
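The ETag-in-description workaround described in this issue might look roughly like the following. This is a hedged sketch with hypothetical bucket and key names, not the reporter's exact configuration:

```hcl
# Sketch: embed the S3 object's ETag in the layer description so that a
# new layer version is published only when the externally-uploaded
# package actually changes; source_code_hash is intentionally omitted.
data "aws_s3_object" "layer_package" {
  bucket = "my-layer-bucket"        # hypothetical bucket
  key    = "layers/my-layer.zip"
}

resource "aws_lambda_layer_version" "this" {
  layer_name = "my-layer"
  s3_bucket  = data.aws_s3_object.layer_package.bucket
  s3_key     = data.aws_s3_object.layer_package.key
  # Any change to the ETag changes the description, which forces a new
  # layer version; an unchanged object produces no diff.
  description = "etag:${data.aws_s3_object.layer_package.etag}"
}
```

One caveat with this approach: for multipart uploads the ETag is not a plain MD5 of the object, but it still changes when the object is overwritten, which is all this workaround relies on.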
References