AWS Lambda is a serverless computing platform provided by Amazon as part of Amazon Web Services. It is an event-driven computing service, which means it runs code in response to events and automatically manages the computing resources that code requires. Serverless has become one of the most widely used deployment patterns for cloud applications, and AWS Lambda is a very well-known player in this field. Perfomatix is a top Cloud Software Development company and a pioneer in cutting-edge cloud solutions.
Think of it as software development in a fully managed environment: we just upload the code, and Lambda takes care of everything required to run and scale it with high availability. The code can be triggered automatically by other AWS services, or invoked as custom functionality from any web or mobile application.
Lambda Core Features
- Scalability – Lambda scales automatically, and users pay only for the number of requests and the compute time their code consumes.
- Short computational code – run code snippets in the cloud for serverless computing, with continuous auto-scaling.
- Development environment creation and replication.
- Provides monitoring and notification service.
- Provides recovery service.
- Lambda integrates effectively with other AWS services.
AWS Lambda Best Practices
High Availability
When we run a Lambda function, it runs in a Lambda-managed VPC by default, which has internet access and can therefore reach services such as S3 and DynamoDB. However, it will not have access to resources inside your own private VPCs (RDS, Elasticsearch, etc.), since those run in another VPC. When a function runs in the Lambda-managed VPC, Lambda takes care of its availability by running it across multiple AZs in that region.
In most enterprise applications of Lambda, we would need access to RDS and other VPC resources. In those cases, make sure the following things are applied:
- Design the Lambda function for high availability by selecting multiple subnets in different AZs.
- Make sure each selected subnet has sufficient IP addresses allocated for HA.
If an Availability Zone (AZ) goes down, the remaining AZs need enough allocated IP addresses to handle the concurrent Lambda requests. Keep in mind that each concurrent Lambda execution requires one private IP address to handle a request.
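As a rough sketch (the function name, subnet IDs, and security group ID below are placeholders), attaching a function to subnets in two different AZs with boto3 could look like this:

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach the function to private subnets in two different AZs so that an AZ
# outage does not take away the function's access to VPC resources.
# The function name, subnet IDs, and security group ID are placeholders.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    VpcConfig={
        "SubnetIds": [
            "subnet-0aaaa1111bbbb2222",  # private subnet in AZ a
            "subnet-0cccc3333dddd4444",  # private subnet in AZ b
        ],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```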
Concurrency Execution Limits
Lambda does have concurrency execution limits, even though it handles scalability on its own.
- Account level — By default, it is 1000 per region across all the functions.
- Function level — By default, a function draws from the “Unreserved Account Concurrency” pool, but that is not a good practice: if one function exhausts the account-level concurrency limit, it throttles all the other functions.
To avoid that, always reserve the concurrency limit for each function so the impact is isolated to only that function if the number of events surges for any reason.
AWS always keeps an unreserved concurrency pool with a minimum of 100 concurrent executions to process requests from functions that did not set a specific limit, so you can reserve up to 900 in total.
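For illustration (the function name and limit are placeholders), reserving a per-function concurrency limit with boto3 might look like this:

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserve 50 concurrent executions for this function. A surge in its events
# then throttles only this function, and the reservation is deducted from
# the account-level pool (at most 900 can be reserved in total, since AWS
# keeps 100 unreserved).
lambda_client.put_function_concurrency(
    FunctionName="my-function",  # placeholder name
    ReservedConcurrentExecutions=50,
)
```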
While running AWS Lambda in a dedicated VPC, make sure you have sufficient IP addresses for the ENI scalability requirements of the function. Estimate the approximate ENI capacity with the following formula (a worked example follows the list):
- Concurrent executions * (Memory in GB / 3 GB)
- Concurrent executions is the projected concurrency of the workload (invocations per second * average execution duration in seconds).
- Memory in GB is the amount of memory configured for the Lambda function.
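As a quick worked example of the formula (the numbers are purely illustrative):

```python
def estimate_eni_capacity(invocations_per_second, avg_duration_s, memory_gb):
    """Approximate ENI capacity: concurrent executions * (memory in GB / 3 GB)."""
    concurrent_executions = invocations_per_second * avg_duration_s
    return concurrent_executions * (memory_gb / 3.0)

# 50 invocations/s * 2 s average duration = 100 concurrent executions;
# with 1.5 GB of memory: 100 * (1.5 / 3) = 50 ENIs, so the selected subnets
# should have at least 50 spare private IP addresses.
print(estimate_eni_capacity(50, 2, 1.5))  # 50.0
```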
Throttling in AWS Lambda
Any surge in events for a function can cause throttling if it exceeds the concurrency limit, which means the function will not process any new requests.
Throttling is handled differently depending on how the function is invoked:
- Synchronous Lambda invocation – the caller starts receiving a 429 error code, along with information about whether the throttle is at the function level or the account level. Based on that, the invoking service (e.g. API Gateway) needs to handle the retry.
- Asynchronous Lambda invocation – Lambda retries the event twice before discarding it. If the function cannot process the event, define a DLQ (Dead Letter Queue) using SQS or SNS so the event can be debugged and processed later. Make sure to define a DLQ to save all the messages, or they will be lost when discarded (a configuration sketch follows this list).
- Poll-based Lambda invocation –
- Stream-based (Kinesis) – Lambda keeps retrying until the data expires (up to 7 days).
- Non-stream-based (SQS) – the message goes back to the queue and is retried only after the visibility timeout expires. Lambda keeps doing this until the message is processed successfully or the retention period expires.
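A minimal sketch of attaching a DLQ to a function with boto3, assuming an existing SQS queue (the function name and queue ARN are placeholders):

```python
import boto3

lambda_client = boto3.client("lambda")

# Route events that still fail after the two asynchronous retries to an SQS
# dead-letter queue, so they can be inspected and replayed later instead of
# being discarded. The function name and queue ARN are placeholders.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    DeadLetterConfig={
        "TargetArn": "arn:aws:sqs:us-east-1:123456789012:my-function-dlq",
    },
)
```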
Tips to improve AWS Lambda Performance
- Choose interpreted languages like Node.js or Python over compiled languages like Java or C++ to reduce the cold start time. (When Lambda is invoked for the first time, it downloads the code and all its dependencies from S3, creates a container, and starts the application before it executes the code. This whole duration, excluding the code execution itself, is known as the cold start time.)
- If you use Java with Spring, go for Spring Cloud Function rather than the Spring Boot web framework.
- Use the default network environment unless you need a VPC resource with a private IP, because setting up an ENI is time-consuming and adds to the cold start time.
- Keep only the necessary dependencies that are required at runtime.
- Use global/static variables and singleton objects – these remain alive until the container goes down, so subsequent calls do not need to reinitialize them.
- Define your database connections at the global level so they can be reused across invocations (see the sketch after this list).
- If you are using Java, use simpler IoC dependency injections like Dagger and Guice rather than a Spring framework.
- If you are using Java, separate your dependency .jar files from the function code to speed up the package-unpacking process.
- If you are using Node.js, try to keep each function body under 600 characters and use the V8 runtime; the V8 optimizer inlines functions whose body size is less than 600 characters (including comments).
- If you are using Node.js, you can minify and/or uglify the code to reduce the package size, which significantly reduces the time to download the package. In some cases, I have seen the package size drop from 10 MB to 1 MB.
- Minification in Node.js – removes all spaces, newline characters, and comments.
- Uglification in Node.js – takes all the variables and obfuscates/simplifies them.
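To illustrate the global connection tip above, here is a minimal Python sketch (PyMySQL, the environment variable names, and the query are assumptions used only for illustration):

```python
import os

import pymysql  # assumed dependency, packaged with the function

# Created once per container, outside the handler, so warm invocations reuse
# the same connection instead of paying the connection cost on every call.
# The environment variable names are hypothetical.
connection = pymysql.connect(
    host=os.environ["DB_HOST"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database=os.environ["DB_NAME"],
)

def handler(event, context):
    # The handler only uses the already-open global connection.
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")
        return cursor.fetchone()
```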
Improve AWS Lambda Security
- Make sure that only one IAM role is mapped to one function, even though multiple functions might need the same IAM policies. This preserves least-privilege policies when the security policies for a specific function are later enhanced.
- As Lambda runs on a shared, Lambda-managed VPC, it is not good practice to keep AWS credentials in code. The IAM execution role is sufficient to connect to AWS services by just using the AWS SDK.
- When a function needs to call cross-account services, it might need credentials. In those scenarios, use the AssumeRole API of AWS Security Token Service to retrieve temporary credentials (see the first sketch after this list).
- When a function needs long-lived credentials such as DB credentials or access keys, either use environment variables with encryption helpers or AWS Systems Manager Parameter Store (see the second sketch after this list).
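For the cross-account case, a hedged sketch of retrieving temporary credentials with the STS AssumeRole API via boto3 (the role ARN, session name, and downstream S3 client are placeholders):

```python
import boto3

sts = boto3.client("sts")

# Assume a role in the other account and receive short-lived credentials
# instead of storing long-lived keys in the function.
response = sts.assume_role(
    RoleArn="arn:aws:iam::210987654321:role/cross-account-access",  # placeholder
    RoleSessionName="lambda-cross-account-session",
)
credentials = response["Credentials"]

# Use the temporary credentials with any AWS SDK client for the other account.
s3_other_account = boto3.client(
    "s3",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)
```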
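And for long-lived secrets, a minimal sketch of reading an encrypted value from AWS Systems Manager Parameter Store at runtime (the parameter name is a placeholder):

```python
import boto3

ssm = boto3.client("ssm")

# Fetch a SecureString parameter and let SSM decrypt it with KMS, so the
# secret never has to live in the function's code or deployment package.
db_password = ssm.get_parameter(
    Name="/myapp/prod/db-password",  # placeholder parameter name
    WithDecryption=True,
)["Parameter"]["Value"]
```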
AWS Lambda is highly flexible, and there are even more use cases and best practices that we could discuss in the future. How far Lambda performance can be improved depends on the application.
Perfomatix | Cloud Software Development
We are Perfomatix, one of the top Cloud Software Development companies. We provide cloud app development services for building highly scalable cloud applications.
Do you want to upgrade your business to the cloud?
Talk to our experts now and let us assist you!