Source: OpenFaaS Blog

# How to scale OpenFaaS Functions with Custom Metrics

In the first part of our mini-series on autoscaling, we looked at how autoscaling has evolved in OpenFaaS going all the way back to 2017, and how today you can use Requests Per Second (RPS), Capacity (inflight requests), and CPU utilization to scale functions horizontally. Today, we are going to show you what you can do with custom Prometheus metrics, for when you need to scale based upon work being done by your function's Pod, or some other metric that you emit to Prometheus, like the number of pending requests, latency, or the number of items in a queue or event-stream.

## Why do we need custom metrics?

Customers often ask: what is the right scaling approach for our functions? The question presupposes that there is one ideal, optimal way to scale every kind of workload you'll ever have in your system.

Suppose you target a 99.5th percentile request latency. If you have a stateless function with no external dependencies, then a fair assumption is that additional replicas will decrease latency during congestion. However, if that function depends on a database or a remote API, adding new replicas may increase, rather than decrease, latency.

One approach such as RPS or Capacity may yield good results for many types of functions, but you cannot beat observing your functions in production and tuning them according to your needs - whether that's the type of scaling, the minimum/maximum replica count, or adding a readiness check. Look out for how latency is affected during peak times and which HTTP status codes you receive, and don't rule out invoking functions asynchronously, to defer and buffer the work.

To learn more about the built-in scaling options, see: Docs: Auto-scaling your functions

## Overview - How scaling on custom metrics works

With the current design of the OpenFaaS Autoscaler, you need to do three things:

1. Find an existing Prometheus metric, or emit a new one
2. Configure the recording rules in the Prometheus configuration that OpenFaaS ships, so that they emit a new scaling type
3. Set that scaling type on your function using the `com.openfaas.scale.type` label, just like with the built-in metrics

## 1. Pick a metric

You have five options for metrics:

1. Use one of the built-in metrics from various OpenFaaS components, ranging from the Gateway, to the queue-worker, to the Kafka connector, etc. These are documented here: Monitoring Functions
2. Use the CPU or RAM metrics already scraped from each node in the cluster, i.e. `pod_cpu_usage_seconds_total` or `pod_memory_working_set_bytes`
3. Use one of the built-in metrics emitted by the OpenFaaS watchdog - these include things like inflight requests, the number of requests, and latency
4. Emit a new metric from your function's handler. In this case, you'll import the Prometheus SDK for your language of choice, such as Python or Go, register a metrics endpoint, and add a couple of annotations so Prometheus knows to scrape it
5. Have your own control plane emit a new metric centrally. This is where you may be able to expose a queue depth, the number of pending requests, or some kind of internal business metric like a Service Level Objective (SLO) or Key Performance Indicator (KPI)

## 2. Set up a new recording rule in Prometheus

The autoscaler uses a recording rule named `job:function_current_load:sum` to understand the total load for a given function. It then makes a simple calculation: the total load is divided by the function's target figure to determine the ideal number of Pods.
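To make that calculation concrete, here is a minimal sketch in Go of the arithmetic just described: the recorded load divided by the function's target, rounded up, and clamped to the `com.openfaas.scale.min` and `com.openfaas.scale.max` bounds. It is an illustration only, not the autoscaler's source code, and it leaves out the `com.openfaas.scale.proportion` tuning described below.

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas illustrates the calculation described above: the summed
// load from job:function_current_load:sum is divided by the function's
// com.openfaas.scale.target value, rounded up, then clamped to the
// configured minimum and maximum replica counts.
// This is a simplified sketch, not the autoscaler's implementation.
func desiredReplicas(load, target float64, min, max int) int {
	replicas := int(math.Ceil(load / target))
	if replicas < min {
		return min
	}
	if replicas > max {
		return max
	}
	return replicas
}

func main() {
	// e.g. a total CPU load of 1800 (milli-vCPU) with a target of 500 per
	// replica suggests ceil(1800/500) = 4 Pods, within a 1-10 range.
	fmt.Println(desiredReplicas(1800, 500, 1, 10))
}
```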
Here is the rule we are using for CPU-based scaling:

```yaml
- record: job:function_current_load:sum
  expr: |
    ceil(sum(irate(pod_cpu_usage_seconds_total{}[1m])*1000) by (function_name)
    * on (function_name) avg by (function_name)
    (gateway_service_target_load{scaling_type="cpu"} > bool 1))
  labels:
    scaling_type: cpu
```

The `expr` field shows the Prometheus query that will be evaluated, and the `labels` show the scaling type for which the data will be emitted.

The first half, `ceil(sum(irate(pod_cpu_usage_seconds_total{}[1m])*1000) by (function_name)`, is the basic query, which resembles what you'll see from `kubectl top pod -n openfaas-fn`. The second half, `* on (function_name) avg by (function_name) (gateway_service_target_load{scaling_type="cpu"} > bool 1)`, is an optimisation which means the rule is only evaluated for functions that have the label `com.openfaas.scale.type: cpu` set.

You can set recording rules permanently by editing values.yaml, or insert them temporarily by running `kubectl edit -n openfaas configmap/prometheus-config` followed by `kubectl rollout restart -n openfaas deploy/prometheus`. Bear in mind that if you're port-forwarding Prometheus to your local machine, you will need to terminate the command and run it again once the new version of the Pod is available.

## 3. Set the scaling type on your function

Here is a redacted example of how a function can target the cpu recording rule:

```yaml
functions:
  bcrypt:
    labels:
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "10"
      com.openfaas.scale.target: "500"
      com.openfaas.scale.proportion: "0.9"
      com.openfaas.scale.type: "cpu"
```

The minimum and maximum number of replicas are configured, followed by the target amount per Pod - in this case 500m, or half a vCPU. The proportion is used to tune how close to the target the function should be before scaling. Finally, `com.openfaas.scale.type` is set to cpu.

We'll now look at some concrete examples of custom rules.

## Example 1: Scaling on the request latency

In this example we'll scale based upon the average request latency of the bcrypt function. The rough target we've seen is 60ms, or 0.06s, so we'll set that as the target; if the latency goes above that figure, additional Pods will be added.

Here's what we'll put in our function's configuration. We can omit `com.openfaas.scale.min` and `com.openfaas.scale.max` as they already have defaults.

```yaml
functions:
  bcrypt:
    labels:
      com.openfaas.scale.target: "0.06"
      com.openfaas.scale.proportion: "0.9"
      com.openfaas.scale.type: "latency"
```

Next let's add a recording rule, using the latency from a metric emitted by the gateway:

```yaml
- record: job:function_current_load:sum
  expr: |
    sum by (function_name) (rate(gateway_functions_seconds_sum{}[30s]))
    / sum by (function_name) (rate(gateway_functions_seconds_count{}[30s]))
    and on (function_name)
    avg by (function_name) (gateway_service_target_load{scaling_type="latency"}) > bool 1
  labels:
    scaling_type: latency
```

### Contrasting latency with RPS

We adapted the e2e tests for the openfaas autoscaler to generate load on the bcrypt function using the ramp test, which goes up to 150 RPS over 2 minutes, then sustains it for a further two minutes. The results are from a single-CPU, single-node Kubernetes cluster with each Pod's CPU limited to 1000m, or around 1x vCPU per Pod. With a maximum of 10 Pods, this is similar to 10 vCPUs generating bcrypt hashes at once. The initial latency starts off around the best case for this processor, near 0.05-0.15s per hash, then the thundering herd causes congestion.
The system starts to add additional replicas, and the latency stabilises for the rest of the test. We also set a stable scale-down window of 2m30s in order to prevent variable latency from causing the function to scale down too quickly.

We'd typically suggest that something CPU-bound like bcrypt is scaled with the CPU approach, however we tried the same test with RPS, with a target of 10 RPS per Pod. When you compare the two "Replicas per function" graphs, you can see that the custom latency-based approach is more responsive and gives a more stable result in the "Average duration by status" graph.

## Example 2: Scaling based upon a built-in watchdog metric

The watchdog itself emits several metrics which can be used for scaling. Most of them are already available in aggregate from the gateway, but we wanted to show you this option. Here's how you can scale based upon the number of inflight requests:

```yaml
sleep:
  image: ghcr.io/openfaas/sleep:latest
  skip_build: true
  environment:
    write_timeout: 60s
  labels:
    com.openfaas.scale.min: 1
    com.openfaas.scale.max: 10
    com.openfaas.scale.target: 5
    com.openfaas.scale.type: inflight
  annotations:
    prometheus.io.scrape: true
    prometheus.io.path: "/metrics"
    prometheus.io.port: "8081"
```

In this case we've used the type of inflight, and added three extra annotations:

- `prometheus.io.scrape: true` - tells Prometheus to scrape the metrics from the Pod directly
- `prometheus.io.path: "/metrics"` - the path where the metrics are exposed
- `prometheus.io.port: "8081"` - the port where the metrics are exposed. In this case it's not the default of 8080, which is the proxy used to access the function, but 8081, a separate HTTP server that only exposes Prometheus metrics.

For the recording rule:

```yaml
- record: job:function_current_load:sum
  expr: |
    ceil(sum by (function_name) (max_over_time(http_requests_in_flight[45s:5s]))
    and on (function_name)
    avg by (function_name) (gateway_service_target_load{scaling_type="inflight"}) > bool 1)
  labels:
    scaling_type: inflight
```

This rule is very similar to the built-in capacity scaling mode, however the data comes directly from the function's Pods instead of being measured at the gateway.

## Example 3: Scaling on a metric emitted by the function

If you include the Prometheus SDK in your function, then you can emit metrics quite simply. We've written an example for a Go function which scales based upon the number of items it receives in a JSON payload. You could imagine this being a function connected to AWS SNS, to which variable-sized batches are sent depending on congestion.

The below is a fictitious, but realistic example of the payload the function could receive from SNS:

```json
{ "items": [ { "arn"
```
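The article's own Go example is truncated above, so here is only a rough, hedged sketch of the general approach: a handler written against the conventions of the golang-middleware template's `Handle` signature, which counts the items in each payload with the official Prometheus Go client and exposes them on a separate metrics port. The metric name `function_batch_items_total` and port 8082 are our own placeholders, and the prometheus.io.* annotations from Example 2 would need to point at whichever path and port you choose.

```go
package function

import (
	"encoding/json"
	"io"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// batchItems counts how many items each invocation received, so that a
// recording rule can derive a load figure from its rate.
// The metric name is a placeholder for this sketch.
var batchItems = promauto.NewCounter(prometheus.CounterOpts{
	Name: "function_batch_items_total",
	Help: "Total number of items received in JSON payloads.",
})

func init() {
	// Expose /metrics on a separate port (8082 is an assumption) so that
	// Prometheus can scrape the Pod via the prometheus.io.* annotations.
	go func() {
		mux := http.NewServeMux()
		mux.Handle("/metrics", promhttp.Handler())
		_ = http.ListenAndServe(":8082", mux)
	}()
}

type payload struct {
	Items []json.RawMessage `json:"items"`
}

// Handle is the function's entrypoint, following the golang-middleware
// template's convention.
func Handle(w http.ResponseWriter, r *http.Request) {
	defer r.Body.Close()

	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "could not read body", http.StatusBadRequest)
		return
	}

	var p payload
	if err := json.Unmarshal(body, &p); err != nil {
		http.Error(w, "invalid JSON", http.StatusBadRequest)
		return
	}

	// Record the batch size, then process the items as needed.
	batchItems.Add(float64(len(p.Items)))
	w.WriteHeader(http.StatusAccepted)
}
```

A recording rule analogous to the ones above could then sum `rate(function_batch_items_total[1m])` by function_name, as with the watchdog example, gated on a custom scaling_type, and the function would set `com.openfaas.scale.type` to that type along with a target number of items per Pod.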
