
Assetlibrary History Lambda Cost Increase #191

Open
aaronatbissell opened this issue Feb 22, 2024 · 3 comments

Comments

@aaronatbissell (Contributor) commented Feb 22, 2024

AWS Connected Device Framework Affected Module(s):

assetlibrary-history

I'm submitting a ...

  • [x] bug report
  • [ ] feature request

Description:

It appears that the recent update to assetlibrary history that increased the lambda function size from 128MB to 512MB has increased our lambda cost by about $80/day.

We are using about 12,400,000 seconds of Lambda execution per day. At a cost of $0.0000166667 per GB-second, the costs are as follows for each memory size (a quick calculation sketch follows the list):

  • 128MB
    • 12,400,000 * 128/1024 * 0.0000166667 = ~$25 per day
  • 512MB
    • 12,400,000 * 512/1024 * 0.0000166667 = ~$103 per day
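
For reference, here's a minimal sketch of that arithmetic in TypeScript, assuming the billed duration stays at ~12.4M seconds/day regardless of the memory setting (i.e. the larger allocation doesn't meaningfully speed up the handler):

```typescript
// Rough daily Lambda compute cost = billed seconds * (memory in GB) * price per GB-second.
// The $0.0000166667 rate is the published per-GB-second price; the 12.4M seconds/day
// figure is the one reported in this issue, not an official number.
const PRICE_PER_GB_SECOND = 0.0000166667;
const BILLED_SECONDS_PER_DAY = 12_400_000;

function dailyComputeCost(memoryMb: number): number {
  const gbSeconds = BILLED_SECONDS_PER_DAY * (memoryMb / 1024);
  return gbSeconds * PRICE_PER_GB_SECOND;
}

console.log(dailyComputeCost(128).toFixed(2)); // ~25.83
console.log(dailyComputeCost(512).toFixed(2)); // ~103.33
```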

It appears as though the increase in lambda memory size hasn't reduced the runtime of the lambda enough to bring the cost back down to previous levels. I think this is because the history lambda processes single records, and 90+% of the lambda runtime is just loading Node, dependencies, etc. Very little time is spent actually processing the request.
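
One way to sanity-check how much of the billed time is cold-start overhead versus actual work is to compare `@initDuration` against `@duration` in the function's REPORT log lines. A rough CloudWatch Logs Insights sketch (the log group name below is a placeholder, not necessarily what the module deploys as):

```typescript
import {
  CloudWatchLogsClient,
  StartQueryCommand,
  GetQueryResultsCommand,
} from "@aws-sdk/client-cloudwatch-logs";

const logs = new CloudWatchLogsClient({});

// Compare average cold-start init time with average handler time over the last 24h.
// "/aws/lambda/cdf-assetlibrary-history" is a placeholder log group name.
async function coldStartBreakdown(): Promise<void> {
  const now = Math.floor(Date.now() / 1000);
  const { queryId } = await logs.send(
    new StartQueryCommand({
      logGroupName: "/aws/lambda/cdf-assetlibrary-history",
      startTime: now - 24 * 3600,
      endTime: now,
      queryString: `filter @type = "REPORT"
        | stats avg(@initDuration) as avgInitMs,
                avg(@duration) as avgHandlerMs,
                count(*) as invocations`,
    })
  );

  // Poll until the query completes (simplified; real code should use backoff/timeouts).
  let results = await logs.send(new GetQueryResultsCommand({ queryId }));
  while (results.status === "Running" || results.status === "Scheduled") {
    await new Promise((resolve) => setTimeout(resolve, 1000));
    results = await logs.send(new GetQueryResultsCommand({ queryId }));
  }
  console.log(results.results);
}

coldStartBreakdown().catch(console.error);
```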

Current behavior:

Lambda costs increased

Expected behavior:

Lambda costs shouldn't increase roughly fourfold just because the configured memory did.

Steps to reproduce:

Additional Information:
The lambda is using ~240MB of memory per invocation, so bringing the memory back down to 128MB is not an option, and 256MB seems too close to the limit for comfort. I believe this is very closely tied to #87, which describes a similar problem with device monitoring; #88 would greatly reduce the effect of this problem.

@aaronatbissell (Contributor, Author)

Wanted to keep this thread up to date. It looks like provisioned capacity is another big reason why our costs are so high. The read/write capacity is too low (currently 5). Increasing it to 10 helped significantly, but this has to be done manually because the value isn't exposed through the config; #192 should take care of that.
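
For anyone hitting the same issue before that change lands, the manual bump can be done through the console, or roughly like this (a sketch assuming the history events table is a provisioned-capacity DynamoDB table; the table name is a placeholder for whatever your deployment created):

```typescript
import { DynamoDBClient, UpdateTableCommand } from "@aws-sdk/client-dynamodb";

const ddb = new DynamoDBClient({});

// Bump provisioned read/write capacity from the default of 5 to 10.
// "cdf-assetlibrary-history-events" is a placeholder table name.
async function bumpCapacity(): Promise<void> {
  await ddb.send(
    new UpdateTableCommand({
      TableName: "cdf-assetlibrary-history-events",
      ProvisionedThroughput: {
        ReadCapacityUnits: 10,
        WriteCapacityUnits: 10,
      },
    })
  );
}

bumpCapacity().catch(console.error);
```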

@anish-kunduru (Contributor)

@aaronatbissell Out of curiosity: have you tried testing with 768MB or even 1024MB?

If the problem is that cold starts are taking so long that subsequent requests can cause additional lambdas to spin up, that might help reduce costs. If this isn't the case, and you're blocked by I/O, it'll actually cost more.

@aaronatbissell (Contributor, Author)

About 80% of this problem was due to the provisioned capacity issue I mentioned above. It appears that when we run into provisioned capacity limits, the reads/writes take a long time to fail, which increases the duration of the lambda. That increase in duration causes a major lambda cost increase when the function is running millions of times per day.

The other 20%, I think, is due to cold starts, which I'm hoping to take care of with #88.
