Documentation: Running locally #77
Hi @Walther, you have a few options depending on what kind of testing you want to perform.

For local integration testing, I've added some examples of running instances locally inside a docker container under the docker section, an emulator for the Lambda runtime you'd be deploying to. This leverages the lambci project; you can find more examples here. I'm working on a (not yet announced) way to make these easy for gateway functions here. The limitations I've bumped up against in serverless framework have mainly been that local invocation is limited to the node/python runtimes. Since this runtime uses the new custom `provided` runtime as its execution env, there are fewer options close at hand. One of the motivating factors is to enable this for serverless framework. Again, lambci to the rescue! It was very easy to extend serverless framework to add this.

For unit testing, there's nothing that different from unit testing any other Rust code. You can find a small example of that here. The key to enabling this is to separate your handler function from the call that wires it into the runtime.
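As a minimal sketch of that separation (the `greet` function and its test are hypothetical, not from the linked example): keep the business logic in a plain function, and unit test it directly with no Lambda runtime involved.

```rust
use serde_json::{json, Value};

// Plain business logic: the Lambda entry point in main would simply
// delegate to this function, so tests never touch the runtime.
pub fn greet(event: &Value) -> Value {
    let name = event["name"].as_str().unwrap_or("world");
    json!({ "message": format!("Hello, {}!", name) })
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn greets_by_name() {
        let out = greet(&json!({ "name": "Walther" }));
        assert_eq!(out["message"], "Hello, Walther!");
    }
}
```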
@Walther this is an example of how I run a lambda locally:

```sh
cat test-data/input.json | sudo docker run -i --rm \
-v ${PWD}/target/x86_64-unknown-linux-musl/debug:/var/task \
-e AWS_DEFAULT_REGION=${AWS_REGION_DEV} \
-e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID_DEV} \
-e AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY_DEV} \
-e region="ap-south-1" \
-e DOCKER_LAMBDA_USE_STDIN=1 \
  lambci/lambda:provided handler
```

I use a command string like the above to test a Lambda that runs from a Cognito PreSignUp trigger, and so it requires a set of env vars to get the right credentials etc. into it. Also worth noting: I use a docker image for the Rust musl target (search for "ekidd rust musl"), which makes it much easier to deploy to Lambda without worrying about incorrect GLIBC versions. Or you can use the docker image that @softprops has provided 😺
This fails for large POST request bodies. We need to be able to develop locally without the pain of setting up docker.
Just a quick heads up: I've updated the readme of the lambda-rust project with details on how to run your lambda in a "stay open" mode in a container: https://github.com/softprops/lambda-rust/blob/master/README.md#-local-testing. A key advantage of that approach is that you aren't spinning up one-off docker containers for every invocation, and you can use curl or the aws cli to invoke your lambda.

I'm curious to learn what you meant when you mentioned failing for large payloads. Can you describe your example in more detail? I've never had that happen, but I personally haven't had many use cases where payloads were unusually large. I'd like to take note of how to reproduce that case to see if there's anything that can be fixed.

I'm also curious about the pain you're feeling with setting up docker. Here is a resource for a relatively painless setup: https://docs.docker.com/engine/install/. I'm interested in helping to add documentation to make that process easier and reduce that friction.

There is an alternative here: you can also leverage the idea that a lambda is just a fn which can be called as regular Rust code from a main that isn't based on the lambda runtime. In most cases you could have a separate bin whose main just provides your function a serde_json value directly, for lower-friction integration testing. The caveat is that it's even more distanced from the runtime the lambda will be executing under in production.
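A minimal sketch of that separate-bin idea, assuming a `src/bin/local.rs` file and an inlined stand-in for the real handler (both names are made up for illustration):

```rust
// src/bin/local.rs: call the handler as regular Rust, no Lambda runtime.
use serde_json::{json, Value};
use std::error::Error;

// Stand-in for your actual handler logic.
fn handler_logic(event: &Value) -> Value {
    json!({ "echo": event })
}

fn main() -> Result<(), Box<dyn Error>> {
    // Hand the function a serde_json::Value directly.
    let raw = std::fs::read_to_string("test-data/input.json")?;
    let event: Value = serde_json::from_str(&raw)?;
    println!("{}", serde_json::to_string_pretty(&handler_logic(&event))?);
    Ok(())
}
```

Running `cargo run --bin local` then gives a quick feedback loop, at the cost of being further from the production environment, as noted above.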
Did not mean to sound too harsh; I am just describing the existential crisis I am going through. Apart from toy examples where you can call it manually through docker using small JSON payloads, there are other use cases where you need good integration testing that mimics production usage as it is: calling an HTTP endpoint with GET parameters and POST payloads. What I have tried:
To give you some background, I am trying to port a few Python lambdas to Rust. I have an Android app running locally that calls the locally deployed Python function at localhost:3000/endpoint. I've tried to set this up for about two weeks, and I thought I was doing something wrong or my knowledge was too limited, but I don't think it should be this hard. For Python and JavaScript the experience of running this locally is seamless. I don't mind the learning curve that comes with learning about all of these things, but I would rather spend my time learning Rust than setting up an extremely customised local development environment.

Another thing about docker is that it needs to communicate with other docker containers through a network bridge, which is again extra overhead to take care of. The current solution for … Another issue with docker in the …

At this point I gave up on docker and created a basic server using hyper which converts the input to an enum, had the lambda function convert its input to the same enum, and wrote a common handler to handle that input. It seems I can develop locally fine for now, but I am still nervous about deploying to production (which takes a while to be packaged and uploaded, making the feedback loop long) because I am not sure of all the input types that Lambda can take and whether my converters handle them.

I can see that AWS Lambda in Rust is at the beginning of the road, given that there are virtually no examples online of running this locally with a painless setup. If you think this can be improved I am happy to help, but if I am doing something fundamentally wrong, then I am happy to learn.
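For what it's worth, here is a rough sketch of that shared-enum pattern (the `Input` shape and `common_handler` are a hypothetical reconstruction, not the actual code described above):

```rust
use serde_json::{json, Value};

// A shared input type that both entry points normalize into.
enum Input {
    // From the local hyper server: an HTTP request.
    Http { path: String, query: String, body: Value },
    // From the Lambda runtime: the raw event payload.
    Direct(Value),
}

// One handler used by both the local server and the Lambda entry
// point, so the same logic is exercised in both environments.
fn common_handler(input: Input) -> Value {
    match input {
        Input::Http { body, .. } => body,
        Input::Direct(event) => event,
    }
}

fn main() {
    let out = common_handler(Input::Direct(json!({ "ping": true })));
    println!("{}", out);
}
```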
Thanks for the feedback. No harshness taken! A bit of context on my part: serverless-localhost was a PoC plugin I made. I also couldn't get serverless-offline to work, because it didn't support lambdas using the provided runtime. This might have changed, but I have been more invested in the changes in the runtime itself than in the tooling around it. The next release represents a big but very useful set of changes; I plan on refocusing some energy back to tooling soon. The big change I'd make to serverless-localhost is to spin up one container for multiple invocations rather than one per invocation. I believe the one-off container is the approach of serverless framework and SAM; I think there's a better way.

Related: I started looking into AWS SAM support, which is the AWS "official" tooling for this kind of thing. I'm honestly not a SAM user myself, but am dedicating some attention to it to broaden my perspective and to increase the reach of Rust to a wider audience.

One challenge with SAM is cross compilation on the local host for non-Linux users. This is largely the biggest driver for the docker approach I mentioned above, and it has less to do with this Lambda project than with the state of cross-compilation ease in Rust. The reason this is needed is that Rust binaries target specific platforms: a binary you compile on macOS or Windows won't work on Linux. Since Lambda runs Linux, Rust binaries need to be compiled with Linux as the target. I believe this is the root of all of the current friction, but until there's an easier path in Rust we'll have to find some ways to accommodate it!

Your project sounds neat btw! You're helping to innovate in the solution space. Stick with it! Serverless framework and SAM don't need to be the only tools in the lambda shed.
@softprops FWIW, I've been deploying with sam and building with musl-cross on a mac like this:

```sh
export TARGET_CC=x86_64-linux-musl-gcc
export CC_x86_64_unknown_linux_musl=x86_64-linux-musl-gcc
export RUSTFLAGS="$RUSTFLAGS -Clinker=x86_64-linux-musl-gcc"
cargo build --target x86_64-unknown-linux-musl --release -p $THE_LAMBDA_FN
cp target/x86_64-unknown-linux-musl/release/$THE_LAMBDA_FN target/lambda/$THE_LAMBDA_FN/bootstrap
```

and then pointing … Putting that in a script, then running …
Yep, I've become familiar with the recipe. My goal is to get …
I opened a pull request to get this started with the SAM folks, but they might be less interested in officially supporting Rust at this time: aws/aws-lambda-builders#174 (comment)
I managed to run it locally with a few workarounds on https://github.com/umccr/s3-rust-noodles-bam, namely:
That being said, there are drawbacks being worked on right now (aws-samples/serverless-rust-demo#4), and some complications with newer computers such as the Apple Silicon M1; see briansmith/ring#1332. The approach above is slow (about a 6 minute build) but immune to that issue for now, so it's usable.
Trying to run locally:
It would be useful to have some documentation on how to run Rust-based Lambda functions locally, for development & testing purposes.