Precompiled packages for AWS Lambda
- Go to https://aws.amazon.com/lambda/ and sign in to (or create) an account
- Lambda > Functions > Create a Lambda function
- Blank function
- Configure triggers - Next
- Configure function
- Runtime - Python 2.7
- Lambda function handler and role
- Handler - service.handler
- Role - Create new role from template(s)
- Role name - test
- Policy templates - Simple Microservice Permissions
- Advanced settings
- Memory (MB) 128
- Timeout 1 min 0 sec
- Code entry type - Upload a .ZIP file - choose Pack.zip from the repository
- Test -> Save and test
- Modify the service.py file in the sources folder
- When compressing, select all the files inside the sources folder directly; don't wrap them in a parent folder (the files must sit at the root of the archive)
- Upload the zip file on the function page
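The handler setting above ("service.handler") means Lambda imports service.py and calls its handler function. A minimal sketch of such a file (the event fields and response shape here are illustrative, not the pack's actual code):

```python
# service.py - minimal module matching the "service.handler" setting.
import json


def handler(event, context):
    # Lambda passes the invocation payload as `event` (a dict for JSON
    # payloads) and runtime metadata as `context`.
    name = event.get("name", "world") if isinstance(event, dict) else "world"
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, %s!" % name}),
    }
```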
Selenium on PhantomJS - in effect, a ready-made tool for web scraping. PhantomJS disguises itself as a normal browser: it can log in, click, and fill out forms. The requests library is also bundled, so you can call external APIs to fetch or submit data.
Useful for web testing and scraping.
The current demo opens a random Wikipedia page (https://en.wikipedia.org/wiki/Special:Random) and prints its title.
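Inside the Lambda, the title extraction itself is simple. Here is a stdlib-only sketch of that step (the deployed demo gets the page source by driving PhantomJS through selenium, which this snippet does not attempt; the class and function names are illustrative):

```python
from html.parser import HTMLParser


class TitleParser(HTMLParser):
    """Collects the text inside the first <title> element."""

    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data


def extract_title(html):
    parser = TitleParser()
    parser.feed(html)
    return parser.title.strip()
```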
git clone https://github.com/ryfeus/lambda-packs.git
cd lambda-packs/Selenium_PhantomJS/source/
serverless deploy
serverless invoke --function main --log
You can also see the results from the API Gateway endpoint in a web browser.
https://selenium-python.readthedocs.io/
What does Lambda have to do with load testing? In short: in a single AWS region you can run 200 Lambdas concurrently (more if you ask support to raise the limit), and Lambda is available in 11 regions, so you can run more than 2,000 Lambdas in parallel, each one load-testing your service. Five minutes of such testing costs about one dollar.
The demo in this package sends requests to github.com for 5 seconds over 1 connection with wrk, and also runs a dummy pyresttest test.
- wrk (https://github.com/wg/wrk) - the main load-testing tool. It is multithreaded, and you can set the number of connections and the duration of the run. For finer control you can use LuaJIT scripts (https://www.lua.org/).
- pyresttest (https://github.com/svanoort/pyresttest) - a handy tool for testing a full API pipeline: for example, a user registers, then uses an API key to create tasks / make notes / download files, then reads them, then deletes them.
https://github.com/wg/wrk
https://github.com/svanoort/pyresttest
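For reference, a pyresttest suite is plain YAML. A minimal sketch (the file name and test name here are illustrative, not the pack's bundled test):

```yaml
# dummy_test.yaml - minimal pyresttest suite (illustrative)
- config:
    - testset: "Smoke test"
- test:
    - name: "Front page returns 200"
    - url: "/"
```

Run it as `pyresttest https://github.com dummy_test.yaml`; pyresttest joins the base URL given on the command line with each test's url.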
Package for parsing static HTML pages. It is faster and uses less memory than PhantomJS, but is more limited in which websites it can parse and in other features.
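A sketch of the kind of parsing this pack enables (assumes lxml is importable; the HTML snippet is illustrative):

```python
from lxml import html

# Parse a static HTML document and extract pieces of it - no browser,
# so it is fast and memory-light, but JavaScript-rendered content is
# invisible to it.
page = html.fromstring(
    "<html><head><title>Example</title></head>"
    "<body><p class='greeting'>hi</p></body></html>"
)
title = page.findtext(".//title")
greeting = page.xpath("//p[@class='greeting']/text()")
```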
serverless install -u https://github.com/ryfeus/lambda-packs/tree/master/Lxml_requests/source -n lxml-requests
cd lxml-requests
serverless deploy
serverless invoke --function main --log
wget https://raw.githubusercontent.com/ryfeus/lambda-packs/master/Lxml_requests/buildPack.sh
docker pull amazonlinux:latest
docker run -v $(pwd):/outputs --name lambdapackgen -d amazonlinux:latest tail -f /dev/null
docker exec -i -t lambdapackgen /bin/bash /outputs/buildPack.sh
Lxml 3.7.1
Open-source library for machine intelligence. It has arguably revolutionized AI and made it far more accessible. Running TensorFlow on Lambda is not as odd as it may sound: for some simple models it is the simplest and cheapest way to deploy.
As the "hello world" code I used image recognition with a model trained on ImageNet (https://www.tensorflow.org/tutorials/image_recognition). Given Lambda's pricing, one run (recognizing one picture) costs about $0.00005, so for a dollar you can recognize roughly 20,000 images. That is much cheaper than almost any alternative, yet fully scalable (200 functions can run in parallel) and easy to integrate into cloud infrastructure. The demo downloads the image from the 'imagelink' key of the event source (if empty, it falls back to https://s3.amazonaws.com/ryfeuslambda/tensorflow/imagenet/cropped_panda.jpg).
Tensorflow 1.4.0
https://www.tensorflow.org/tutorials/image_recognition
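Since the demo reads the image URL from the 'imagelink' key of the event, you can point it at any image via the invocation payload. An example event (the URL shown is just the default from above):

```json
{
  "imagelink": "https://s3.amazonaws.com/ryfeuslambda/tensorflow/imagenet/cropped_panda.jpg"
}
```

Pass it with `serverless invoke --function main --log --data '{"imagelink": "..."}'` or paste it as a test event in the Lambda console.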
The nightly version archive is more than 50 MB, but it can still be used with AWS Lambda (though you need to upload the pack through S3). For more, read here:
https://hackernoon.com/exploring-the-aws-lambda-deployment-limits-9a8384b0bec3
serverless install -u https://github.com/ryfeus/lambda-packs/tree/master/tensorflow/source -n tensorflow
cd tensorflow
serverless deploy
serverless invoke --function main --log
for Python2:
wget https://raw.githubusercontent.com/ryfeus/lambda-packs/master/Tensorflow/buildPack.sh
wget https://raw.githubusercontent.com/ryfeus/lambda-packs/master/Tensorflow/index.py
docker pull amazonlinux:latest
docker run -v $(pwd):/outputs --name lambdapackgen -d amazonlinux:latest tail -f /dev/null
docker exec -i -t lambdapackgen /bin/bash /outputs/buildPack.sh
for Python3:
wget https://raw.githubusercontent.com/ryfeus/lambda-packs/master/Tensorflow/buildPack_py3.sh
wget https://raw.githubusercontent.com/ryfeus/lambda-packs/master/Tensorflow/index_py3.py
docker pull amazonlinux:latest
docker run -v $(pwd):/outputs --name lambdapackgen -d amazonlinux:latest tail -f /dev/null
docker exec -i -t lambdapackgen /bin/bash /outputs/buildPack_py3.sh
Note: remember to set the runtime to python3.6 in the AWS Lambda function configuration.
arn:aws:lambda:us-east-1:339543757547:layer:tensorflow-pack
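Instead of bundling TensorFlow into the deployment zip, the published layer above can be attached in serverless.yml. A sketch (the handler name is illustrative, and a layer ARN must end with a version number when attached - the `:1` below is an assumption, check the current version):

```yaml
functions:
  main:
    handler: index.handler
    runtime: python3.6
    layers:
      - arn:aws:lambda:us-east-1:339543757547:layer:tensorflow-pack:1
```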
Package for fans of machine learning, building models, and the like. I doubt there is a more convenient way to deploy a model to the real world.
- Scikit-learn 0.17.1
- Scipy 0.17.0
A package of image-processing tools: not just image manipulation, but a large set of computer-vision algorithms.
There are currently two zipped packs available, Pack.zip and Pack_nomatplotlib.zip, you probably want to use Pack_nomatplotlib.zip. See ryfeus#5 for more information.
Scikit-image 0.12.3
Another package of image-processing tools: again, not just image manipulation, but a large set of computer-vision algorithms.
- OpenCV 3.1.0
- PIL 4.0.0
https://pillow.readthedocs.io/
http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_tutorials.html
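A tiny Pillow sketch of the sort of transform that runs comfortably within Lambda limits (assumes Pillow is importable; the sizes and colors are arbitrary):

```python
from PIL import Image

# Build a solid-red 200x100 image in memory and downscale it to a
# thumbnail - no disk I/O needed, which suits Lambda, where only /tmp
# is writable.
img = Image.new("RGB", (200, 100), color=(255, 0, 0))
thumb = img.resize((20, 10))
```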
Package for fans of statistics, data scientists, and data engineers. A Lambda gets up to 1.5 GB of RAM and a maximum run time of 5 minutes. I am sure that is enough for most tasks.
Pandas 0.19.0
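A sketch of the kind of handler this pack makes possible (assumes pandas is importable; the inline data and event shape are illustrative - real input would come from S3 or the event payload):

```python
import pandas as pd


def handler(event, context):
    # Aggregate a small inline dataset; in practice the frame would be
    # loaded from S3 or built from the event payload.
    df = pd.DataFrame({"city": ["NYC", "NYC", "SF"], "sales": [10, 20, 5]})
    totals = df.groupby("city")["sales"].sum()
    return totals.to_dict()
```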
Open-source library for natural language processing in Python.
- Spacy 2.0.11
The example code loads a language model from S3 and uses it to analyze a sentence.
OCR (optical character recognition) library for recognizing text in images.
https://github.com/tesseract-ocr/tesseract
PDF generator + Microsoft office file generator (docx, xlsx, pptx) + image generator (jpg, png) + book generator (epub)
The "hello world" code in the package creates an example of every document type. These libraries need little memory (less than 128 MB) and run fast (under 0.5 seconds), so at AWS Lambda prices that works out to roughly 1M documents generated per $1.
- docx (python-docx - https://pypi.python.org/pypi/python-docx)
- xlsx (XlsxWriter - https://pypi.python.org/pypi/XlsxWriter)
- pptx (python-pptx - https://pypi.python.org/pypi/python-pptx)
- pdf (Reportlab - https://pypi.python.org/pypi/reportlab)
- epub (EbookLib - https://pypi.python.org/pypi/EbookLib)
- png/jpg/... (Pillow - https://pypi.python.org/pypi/Pillow)
AWS Lambda pack in Python for processing satellite imagery. It lets you deploy Python code cheaply and easily for processing satellite imagery or polygons. In the “hello world” code of the pack I download the red, green, and blue Landsat 8 bands from AWS, compose a true-color image from them, and upload it to S3. This takes 35 seconds and 824 MB of RAM, so roughly 2,500 scenes can be processed for $1.
- Rasterio (https://github.com/mapbox/rasterio 0.36)
- OSGEO (https://trac.osgeo.org/gdal/wiki/GdalOgrInPython)
- Pyproj (https://github.com/jswhit/pyproj)
- Shapely (https://github.com/Toblerity/Shapely)
- PIL (https://pillow.readthedocs.io/)
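The compositing step in the “hello world” description can be sketched with numpy alone (rasterio would supply the actual band arrays; this per-band scaling is a simplification of real Landsat processing, which would also apply a contrast stretch):

```python
import numpy as np


def true_color(red, green, blue):
    """Stack three single-band arrays into an HxWx3 8-bit image array."""

    def to_uint8(band):
        # Scale each band independently to the 0-255 range.
        band = band.astype("float64")
        span = band.max() - band.min()
        if span == 0:
            return np.zeros_like(band, dtype="uint8")
        return ((band - band.min()) / span * 255).astype("uint8")

    return np.dstack([to_uint8(b) for b in (red, green, blue)])
```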
Python 3.6 based PyTorch
- PyTorch 1.0.1 (CPU)
- torchvision 0.2.1
- numpy 1.16.1
- pillow 5.4.1
- six 1.12.0
# You need `docker` installed before running this
./build-with-docker.sh