#serverless #Function_as_a_Service #FaaS #trace_analysis #reduce_cold_start_invocations
Serverless in the wild: Characterizing and optimizing the serverless workload at a large cloud provider
Presented in .
Trace:
This paper characterizes the entire production FaaS workload of Azure Functions. It also proposes a policy for reducing the number of cold start function executions.
Function as a Service (FaaS): A cloud computing paradigm in which:
Users simply upload the code of their functions to the cloud.
Functions get executed when “triggered” or “invoked” by events.
The cloud provider is responsible for provisioning the needed resources, providing high function performance, and billing users for their actual function executions.
Function execution: Requires the code (e.g., user code, language runtime, libraries) to be in memory.
Warm start: The code is already in memory, so the function can be started quickly.
Cold start: The code first has to be brought in from persistent storage, making the start much slower.
Most functions are invoked very infrequently.
The most popular functions are invoked 8 orders of magnitude more frequently than the least popular ones.
Functions exhibit a variety of triggers, producing invocation patterns that are often difficult to predict.
Function memory usage spans a 4x range, and 50% of functions run for less than 1 second.
AWS and Azure use fixed “keep-alive” policies that retain a function’s resources in memory for 10 and 20 minutes after an execution, respectively.
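A fixed keep-alive policy is easy to replay over an invocation trace. The sketch below (my own minimal model, not the paper's simulator) counts warm vs. cold starts for a given keep-alive value:

```python
def simulate_keep_alive(invocations, keep_alive):
    """Count (warm, cold) starts under a fixed keep-alive policy.

    invocations: list of (timestamp, function_id) pairs, sorted by time.
    keep_alive: how long a function's resources stay in memory after its
    last execution (e.g., 10 or 20 minutes, in the trace's time unit).
    """
    last_seen = {}  # function_id -> time of last invocation
    warm = cold = 0
    for t, fn in invocations:
        if fn in last_seen and t - last_seen[fn] <= keep_alive:
            warm += 1   # code still resident: warm start
        else:
            cold += 1   # first call, or evicted after keep-alive: cold start
        last_seen[fn] = t
    return warm, cold

# Example: function "f" invoked at minutes 0, 5, and 40, 10-minute keep-alive.
trace = [(0, "f"), (5, "f"), (40, "f")]
print(simulate_keep_alive(trace, keep_alive=10))  # -> (1, 2)
```

Sweeping `keep_alive` over a real trace shows the memory-vs-cold-start trade-off that motivates the paper's adaptive policy.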
Goal: reduce the number of cold-start invocations.
Use a different keep-alive value for each user’s workload, according to its actual invocation frequency and pattern.
Enable the provider to pre-warm a function execution before its invocation happens (making it a warm start).
Implemented both in simulation and in the Apache OpenWhisk FaaS platform.
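The per-workload policy above can be sketched as two windows derived from an app's observed idle times (the gaps between its executions). The percentile choices and the nearest-rank estimator below are illustrative assumptions, not the paper's exact parameters:

```python
import math

def hybrid_windows(idle_times, head=0.05, tail=0.99):
    """Derive per-app (pre-warm, keep-alive) windows from idle times.

    In the spirit of the paper's histogram-based policy:
      - pre-warm window: unload after execution, reload at `prewarm`,
        just before the next invocation is likely to arrive;
      - keep-alive window: keep the app loaded until prewarm + keep_alive,
        by which point almost all next invocations have arrived.
    """
    vals = sorted(idle_times)
    rank = lambda q: vals[math.ceil(q * len(vals)) - 1]  # nearest-rank percentile
    prewarm = rank(head)       # almost all idle times exceed this
    keep_until = rank(tail)    # almost all idle times end by this
    return prewarm, keep_until - prewarm

# Example: idle times (minutes) between one app's executions.
idle = [8, 9, 10, 10, 11, 12, 30, 60, 120, 240]
print(hybrid_windows(idle))  # -> (8, 232)
```

Compared with a fixed keep-alive, this spends memory only during the window when the next invocation is actually likely, and pre-warming turns many would-be cold starts into warm ones.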