
Documentation: Running locally #77

Closed · Walther opened this issue Jan 26, 2019 · 10 comments · Fixed by #460
Labels
enhancement New feature or request

Comments

Walther commented Jan 26, 2019

Trying to run locally:

cargo run --example=basic
  Downloaded simple-error v0.1.13
   Compiling simple-error v0.1.13
   Compiling simple_logger v1.0.1
   Compiling lambda_runtime v0.2.0 (/Users/walther/git/aws-lambda-rust-runtime/lambda-runtime)
    Finished dev [unoptimized + debuginfo] target(s) in 3.87s
     Running `/Users/walther/git/aws-lambda-rust-runtime/target/debug/examples/basic`
thread 'main' panicked at 'Could not find runtime API env var: environment variable not found', lambda-runtime-core/src/runtime.rs:79:13
note: Run with `RUST_BACKTRACE=1` for a backtrace.

It would be useful to have some documentation on how to run Rust-based Lambda functions locally, for development & testing purposes.

softprops (Contributor) commented:

Hi @Walther,

You have a few options depending on what kind of testing you want to perform.

For local integration testing, I've added some examples under the docker section of running instances locally inside a Docker container that emulates the Lambda runtime you'd be deploying to. This leverages the lambci project. You can find more examples here.

I'm working on a (not yet announced) way to make these kinds of gateway functions easy here. The limitations I've bumped up against in the serverless framework have mainly been that local invocation is limited to the Node/Python runtimes. Since this runtime uses the new custom "provided" runtime as its execution environment, there are fewer options close at hand. One of the motivating factors is to enable this for the serverless framework. Again, lambci to the rescue! It was very easy to extend the serverless framework to add this.

For unit testing, there's nothing very different from unit testing any other Rust code. You can find a small example of that here. The key to enabling this is to separate your handler function from the call to lambda!
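
A minimal sketch of that separation, assuming the 0.2-era lambda! macro API (the Request/Response types and the Context::default() call are illustrative, not the linked example verbatim):

use lambda_runtime::{error::HandlerError, lambda, Context};
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct Request {
    name: String,
}

#[derive(Serialize)]
struct Response {
    message: String,
}

fn main() {
    // only the entry point touches the Lambda runtime
    lambda!(handler);
}

// the handler stays a plain function, so tests can call it directly
fn handler(event: Request, _ctx: Context) -> Result<Response, HandlerError> {
    Ok(Response {
        message: format!("Hello, {}!", event.name),
    })
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn greets_by_name() {
        // assumes Context implements Default; otherwise build one by hand
        let resp = handler(Request { name: "Walther".into() }, Context::default()).unwrap();
        assert_eq!(resp.message, "Hello, Walther!");
    }
}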

flukejones commented:

@Walther this is an example of how I run a lambda locally:

cat test-data/input.json | sudo docker run -i --rm \
-v ${PWD}/target/x86_64-unknown-linux-musl/debug:/var/task \
-e AWS_DEFAULT_REGION=${AWS_REGION_DEV} \
-e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID_DEV} \
-e AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY_DEV} \
-e region="ap-south-1" \
-e DOCKER_LAMBDA_USE_STDIN=1 \
lambci/lambda:provided handler

Note that AWS_REGION_DEV and the other *_DEV values are local environment variables that I need; replace them with whatever you need. The example is good for piping in a JSON file to test things quickly. The -e flags set environment variables inside the container, in case that wasn't obvious; this is where you would add your own API variables.

I use a command string like the above to test a Lambda that runs from a Cognito PreSignUp trigger, and so requires a set of env vars to get the right credentials etc. into it.

Also worth noting: I use a Docker image for the Rust musl target (ekidd/rust-musl-builder), which makes it much easier to deploy to Lambda without worrying about incorrect glibc versions. Or you can use the Docker image that @softprops has provided 😺
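
For reference, the basic ekidd/rust-musl-builder invocation from that image's README looks roughly like this (a sketch; mount your project root at /home/rust/src):

docker run --rm -it -v "$(pwd)":/home/rust/src ekidd/rust-musl-builder \
  cargo build --release

The resulting binary lands under target/x86_64-unknown-linux-musl/release/, which matches the path mounted into /var/task in the docker run example above (debug vs release as appropriate).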

y2kappa commented May 24, 2020

(quoting @flukejones's docker example above in full)

This fails for large POST request bodies. We need to be able to develop locally without the pain of setting up Docker.

softprops (Contributor) commented:

Just a quick heads up: I've updated the readme of the lambda-rust project with details on how to run your lambda in a "stay open" mode in a container: https://github.com/softprops/lambda-rust/blob/master/README.md#-local-testing

A key advantage of that approach is that you aren't spinning up one-off Docker containers for every invocation, and you can use curl or the AWS CLI to invoke your lambda.
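
For concreteness, a sketch of that workflow with lambci's docker-lambda (DOCKER_LAMBDA_STAY_OPEN=1 and port 9001 are docker-lambda conventions; the mount path, function name, and payload here are placeholders):

docker run --rm \
  -e DOCKER_LAMBDA_STAY_OPEN=1 -p 9001:9001 \
  -v ${PWD}/target/x86_64-unknown-linux-musl/release:/var/task \
  lambci/lambda:provided handler

# then, from another terminal, invoke it as many times as you like:
curl -sd '{"name":"world"}' \
  http://localhost:9001/2015-03-31/functions/myfunction/invocations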

I'm curious to learn what you meant when you mentioned failing for large payloads. Can you describe your example in more detail? I've never had that happen, but I personally haven't had many use cases where payloads were unusually large. I'd like to take note of how to reproduce that case to see if there's anything that can be fixed.

I'm also curious about the pain you're feeling with setting up Docker. Here is a resource for a relatively painless setup: https://docs.docker.com/engine/install/. I'm interested in helping to add documentation that makes the process easier and reduces that friction.

There is an alternative here: you can also leverage the idea that a lambda is just a fn, which can be called as regular Rust code from a main that isn't based on the Lambda runtime.

In most cases you could have a separate bin whose main just provides your function a serde_json value directly, for lower-friction integration testing (see the sketch below). The caveat is that it's even more distanced from the runtime the lambda will be executing under in production.
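
A sketch of that idea; the crate name my_lambda, the handler signature, and Context::default() are all assumptions for illustration:

// hypothetical src/bin/local.rs
use lambda_runtime::Context;
use my_lambda::handler; // assumes your handler lives in a library crate
use serde_json::json;

fn main() {
    let event = json!({ "name": "world" });
    // call the handler as a plain function; no runtime API env vars needed
    match handler(event, Context::default()) {
        Ok(resp) => println!("{:?}", resp),
        Err(e) => eprintln!("handler error: {}", e),
    }
}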

y2kappa commented May 25, 2020

I did not mean to sound too harsh; I am just describing the existential crisis I am going through.

Apart from toy examples where you can invoke Docker manually with small JSON payloads, there are use cases where you need good integration that mimics production usage as it is: calling the function at an HTTP endpoint with GET parameters and POST payloads.

What I have tried:

  1. The serverless-offline plugin does not work with the provided runtime.

  2. Your serverless-localhost plugin only works with small JSON payloads; it fails with "argument list too long". See a more detailed example of the error that I am getting here: lambci/docker-lambda#64 (comment).

  3. I guess one option, which I have not tried, would be to use DOCKER_LAMBDA_USE_STDIN and pipe the input in on the command line. But that's not useful when you need it to work as in production, behind an HTTP endpoint. I don't need it to read from a hard-coded JSON file; I need it to respond to dynamic requests so I can develop against it.

To give you some background, I am trying to port a few python lambdas to rust. I have an android app running locally that calls the locally deployed python function at localhost:3000/endpoint.

I've tried to set this up for about two weeks, thinking I was doing something wrong or my knowledge was too limited, but I don't think it should be this hard. For Python and JavaScript the experience of running locally is seamless. I don't mind the learning curve that comes with learning about all of these things, but I would rather spend my time learning Rust than setting up an extremely customised local development environment.

Another thing about Docker is that it needs to communicate with other Docker containers through a network bridge, which is again extra overhead to take care of. The current solution for serverless DynamoDB just works out of the box; for it to work with a Lambda in Docker, it needs to be in a Docker container as well.

Another issue with Docker in the serverless-localhost case is that if the Lambda panics and crashes, there are no logs; it won't pipe them to stdout/stderr, and you need to stop the container manually to realise that it happened.

At this point I gave up on Docker and created a basic server using hyper that converts the input to an enum, had the lambda function convert its input to the same enum, and wrote a common handler for that input (roughly the shape sketched below). It seems I can develop locally fine for now, but I am still nervous about deploying to production (which takes a while to package and upload, making the feedback loop long) because I am not sure of all the input types that Lambda can send and whether my converters handle them.
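
For what it's worth, that setup looks roughly like this (a sketch only; Input, common_handler, and the port are hypothetical names, and the hyper 0.13-era API is assumed):

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};
use serde::Deserialize;
use std::convert::Infallible;
use std::net::SocketAddr;

// the common input enum that both the local server and the lambda convert into
#[derive(Deserialize)]
#[serde(untagged)]
enum Input {
    Named { name: String },
    Raw(serde_json::Value),
}

// one handler shared by the hyper front end and the lambda entry point
fn common_handler(input: Input) -> String {
    match input {
        Input::Named { name } => format!("hello, {}", name),
        Input::Raw(v) => format!("raw event: {}", v),
    }
}

async fn route(req: Request<Body>) -> Result<Response<Body>, Infallible> {
    let bytes = hyper::body::to_bytes(req.into_body()).await.unwrap_or_default();
    let input = serde_json::from_slice(&bytes).unwrap_or(Input::Raw(serde_json::Value::Null));
    Ok(Response::new(Body::from(common_handler(input))))
}

#[tokio::main]
async fn main() {
    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    let make_svc = make_service_fn(|_conn| async { Ok::<_, Infallible>(service_fn(route)) });
    Server::bind(&addr).serve(make_svc).await.expect("server error");
}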

I can see that AWS Lambda in Rust is at the beginning of the road, given that there are virtually no examples online of running it locally with a painless setup. If you think this can be improved I am happy to help, but if I am doing something fundamentally wrong, then I am happy to learn.

softprops (Contributor) commented:

Thanks for the feedback. No harshness taken!

A bit of context on my part

serverless-localhost was a PoC plugin I made. I also couldn't get serverless-offline to work, because it didn't support lambdas using the provided runtime. This might have changed, but I have been more invested in the changes in the runtime itself than in the tooling around it. The next release represents a big but very useful set of changes. I plan on refocusing some energy back on tooling soon. The big change I'd make to serverless-localhost is to spin up one container for multiple invocations rather than one per invocation. I believe the one-off container is the approach of the serverless framework and SAM. I think there's a better way.

Related: I started looking into AWS SAM support, which is the AWS "official" tooling for this kind of thing. I'm honestly not a SAM user myself, but I am dedicating some attention to it to broaden my perspective and to increase the reach of Rust to a wider audience. One challenge with SAM is cross compilation on the local host for non-Linux users. This is largely the biggest driver for the Docker approach I mentioned above, and it has less to do with this Lambda project than with the state of cross-compilation ease in Rust. The reason this is needed is that Rust binaries are platform-specific: a binary you compile on macOS or Windows won't work on Linux. Since Lambda runs Linux, Rust binaries need to be compiled for that target. I believe this is the root of all the current friction, but until there's an easy path in Rust we'll have to find ways to accommodate it!

Your project sounds neat, by the way! You're helping to innovate in the solution space. Stick with it! The serverless framework and SAM don't need to be the only tools in the Lambda shed.

dsilva commented May 26, 2020

@softprops FWIW, I've been deploying with sam and building with musl-cross on a Mac like this:

export TARGET_CC=x86_64-linux-musl-gcc
export CC_x86_64_unknown_linux_musl=x86_64-linux-musl-gcc
export RUSTFLAGS="$RUSTFLAGS -Clinker=x86_64-linux-musl-gcc"
cargo build --target x86_64-unknown-linux-musl --release -p $THE_LAMBDA_FN
cp target/x86_64-unknown-linux-musl/release/$THE_LAMBDA_FN target/lambda/$THE_LAMBDA_FN/bootstrap

and then pointing to target/lambda/$THE_LAMBDA_FN as the CodeUri property in the sam template.yaml.

Putting that in a script (a sketch follows), then running cargo watch -s ./build.sh in one terminal and sam local start-api --skip-pull-image in another works pretty well. Cargo rebuilds on any source change, and sam-local picks up the new build outputs.
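
A hypothetical build.sh collecting those steps (THE_LAMBDA_FN is a placeholder for your binary crate's name):

#!/usr/bin/env bash
set -euo pipefail

THE_LAMBDA_FN=my-function   # placeholder

export TARGET_CC=x86_64-linux-musl-gcc
export CC_x86_64_unknown_linux_musl=x86_64-linux-musl-gcc
export RUSTFLAGS="${RUSTFLAGS:-} -Clinker=x86_64-linux-musl-gcc"

cargo build --target x86_64-unknown-linux-musl --release -p "$THE_LAMBDA_FN"
mkdir -p "target/lambda/$THE_LAMBDA_FN"
cp "target/x86_64-unknown-linux-musl/release/$THE_LAMBDA_FN" \
   "target/lambda/$THE_LAMBDA_FN/bootstrap"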

softprops (Contributor) commented:

Yep, I've become familiar with the recipe. My goal is to get sam deploy to work with as little incantation magic as possible, just as you get with other runtimes. Rust simply has more OS-dependent assumptions at the moment. I'm not sure what this looks like on Windows, for example.

softprops (Contributor) commented:

I opened a pull request to get this started with the SAM folks, but they might be less interested in officially supporting Rust at this time: aws/aws-lambda-builders#174 (comment)

jkshtj added the enhancement label Jan 21, 2021
brainstorm commented Oct 25, 2021

I managed to run it locally with a few workarounds on https://github.com/umccr/s3-rust-noodles-bam, namely:

$ docker build -t provided.al2-rust . -f Dockerfile-provided.al2
$ sam build -c -u --skip-pull-image -bi provided.al2-rust
$ sam deploy

That being said, there are drawbacks being worked on right now: aws-samples/serverless-rust-demo#4

And there are some complications with newer machines, such as the Apple Silicon M1; see briansmith/ring#1332. (The approach above is slow, a 6-minute build, but immune for now, so it's usable.)
