Run Kubernetes workloads on demand and reduce your cloud costs.

Scale your Kubernetes workloads to zero by default and reduce your cloud provider costs. Why pay for non-utilised compute?

Reduce your Kubernetes cluster costs with kscaler.

Features


Start workloads on-demand

Why run workloads if they are not being utilised?

Scheduling out of hours

Being able to start workloads on-demand means there is no out-of-hours scaling down to manage. Globally distributed teams can run their workloads at any time and across team boundaries.

Cost reduction

Reduce your overall cloud cost

Start dependent applications

Scale all microservices and their dependencies up from zero on the first request.

Reduce licensing costs

Per-CPU licensing costs are reduced: compute is scaled down and only the active workloads are running.

Annotate workloads

Mark workloads as exempt from the scaling policy, as the sketch below illustrates.
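
For illustration only, the snippet below marks a Deployment as exempt by merge-patching an annotation onto it with client-go. The annotation key (kscaler.io/exempt), namespace and Deployment name are assumptions made for this sketch, not documented kscaler configuration.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig (placeholder setup for the sketch).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Merge-patch the hypothetical exemption annotation onto the Deployment.
	patch := []byte(`{"metadata":{"annotations":{"kscaler.io/exempt":"true"}}}`)
	_, err = client.AppsV1().Deployments("demo").Patch(
		context.TODO(), "billing-api", types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("billing-api marked as exempt from scale-to-zero")
}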

Reduce costs on environments

  1. Directly reduce your compute costs on Kubernetes by not running the workloads in the first place.
  2. Distributed teams can start workloads on-demand regardless of time zones or isolation between teams.
  3. Engineers need not worry about downscaling and turning compute resources off.
  4. Reduce operational overheads.

Scale to zero and start workloads on-demand

Non-production environments are rarely in constant use; they sit under-utilised and waste resources.

Predict workloads

Predict when dependent workloads will be required and start them automatically.

---

0%

reduction in operational costs for non-production.

Have a question?

  • How much can I save?
    1. For non-production environments the savings can be significant; these environments often cost more than your production estate.
    2. Engineers start workloads and do not shut them down or clean up the environments. Over time this creates uncertainty about what is actually required.
    3. Observed environments can be up to 91% unused or under-utilised.
    4. Application licensing is often priced per CPU or node. With workloads turned off, that cost is removed and can be renegotiated with the vendor.
  • How can you scale workloads to zero?

    kscaler is implemented with a service mesh: it intercepts requests and holds them until the target workload has been started.
    Both north-south (from an ingress) and east-west (service-to-service) traffic are supported. A simplified stand-in for this flow is sketched after this list.

  • How can I prevent a namespace/deployment/pod from being shut down?

    Annotations are used to adjust the behaviour of the controller (itself a Deployment) that handles shutting workloads down; see the scale-down sketch after this list.

  • Why is this performant and efficient?

    It is implemented as a WASM plugin for Envoy, and a request is only intercepted when the calling application has no healthy upstream.
    A separate controller handles scaling workloads down and back up.
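
The real interception happens inside the mesh as an Envoy WASM filter. As a rough, simplified stand-in for the "hold the request until the workload is up" idea only, the sketch below is a plain Go reverse proxy that scales a target Deployment up from zero and waits for a ready replica before forwarding. The namespace, Deployment name and upstream URL are placeholders, not kscaler configuration.

package main

import (
	"context"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// ensureRunning scales the Deployment to at least one replica and
// polls until a replica reports ready (or the context expires).
func ensureRunning(ctx context.Context, client kubernetes.Interface, ns, name string) error {
	scale, err := client.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas < 1 {
		scale.Spec.Replicas = 1
		if _, err := client.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{}); err != nil {
			return err
		}
	}
	for {
		d, err := client.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if d.Status.ReadyReplicas > 0 {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	upstream, _ := url.Parse("http://billing-api.demo.svc.cluster.local") // placeholder upstream
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), 2*time.Minute)
		defer cancel()
		// Hold the request until the workload has been started.
		if err := ensureRunning(ctx, client, "demo", "billing-api"); err != nil {
			http.Error(w, "upstream did not become ready", http.StatusGatewayTimeout)
			return
		}
		proxy.ServeHTTP(w, r) // forward once a replica is ready
	})
	log.Fatal(http.ListenAndServe(":8080", handler))
}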
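
On the scale-down side, here is a minimal sketch of the controller idea, again assuming the hypothetical kscaler.io/exempt annotation and a placeholder namespace: it periodically scales Deployments to zero unless they have opted out. The real controller also uses the mesh's traffic signal to decide what is idle; that part is omitted here.

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

const exemptAnnotation = "kscaler.io/exempt" // assumed key, for illustration only

// scaleDownAll scales every non-exempt Deployment in the namespace to zero.
func scaleDownAll(ctx context.Context, client kubernetes.Interface, namespace string) {
	deployments, err := client.AppsV1().Deployments(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		log.Printf("list deployments: %v", err)
		return
	}
	for _, d := range deployments.Items {
		if d.Annotations[exemptAnnotation] == "true" {
			continue // workload has opted out of scale-to-zero
		}
		scale, err := client.AppsV1().Deployments(namespace).GetScale(ctx, d.Name, metav1.GetOptions{})
		if err != nil || scale.Spec.Replicas == 0 {
			continue
		}
		// A production controller would only do this after observing no traffic for a while.
		scale.Spec.Replicas = 0
		if _, err := client.AppsV1().Deployments(namespace).UpdateScale(ctx, d.Name, scale, metav1.UpdateOptions{}); err != nil {
			log.Printf("scale down %s: %v", d.Name, err)
		}
	}
}

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	for {
		scaleDownAll(context.Background(), client, "demo") // placeholder namespace
		time.Sleep(5 * time.Minute)
	}
}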

Seamless integration

kscaler integrates seamlessly with Kubernetes and Istio.

Ready to get started?

Free 30-day trial | Exclusive Support | No Fees