Kubernetes version 1.23 is out – everything you should know

Dec 1, 2021

Amir Kaushansky
VP Product

Kubernetes’ last release of the year, v1.23, will be released next week on Tuesday, December 7, 2021.

The Christmas edition of Kubernetes comes with 45 new enhancements to make it more mature, secure, and scalable. In this blog, we’ll focus on the critical changes grouped into the Kubernetes API, containers and infrastructure, storage, networking, and security.

Let’s start with the “face of Kubernetes”, which makes it scalable and extensible.

Kubernetes API

There are three significant changes from the API Machinery, CLI, and Autoscaling SIGs that will be released as part of v1.23.

The Kubectl Event Command

Using kubectl get events makes it easier to watch the cluster’s overall state and troubleshoot problems. However, it’s limited by the options and data-collection approach of the kubectl get command. That’s why a new command is being released as an alpha feature in v1.23: kubectl events.

The new command will be beneficial for:

  • Viewing all events related to a particular resource
  • Watching for specific events in the cluster
  • Filtering events by their status or type in a specific namespace

Until the feature graduates, you can check the design document for features planned in subsequent releases. Thankfully, you can start using the kubectl events command as soon as you install the new kubectl version.
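As a sketch of how the new command might be used (the resource names are illustrative, and flag names follow the design document, so they may change before graduation):

```shell
# Show events for a single deployment instead of the whole namespace.
kubectl events --for deployment/my-app

# Stream events as they arrive, filtered to warnings only.
kubectl events --types=Warning --watch

# Inspect events in a specific namespace.
kubectl events -n my-namespace
```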

Graduating HPA API to General Availability

Horizontal Pod Autoscaler (HPA) is the central component of Kubernetes that automatically scales the number of pods based on metrics. HPA can scale up or down many resources, such as replica sets, deployments, or stateful sets with well-known metrics like CPU utilization. It has been part of the Kubernetes API since 2015, and it’s finally graduating to general availability (GA).

If you’re already using HPA in your clients and controllers, you can start using the stable v2 API instead of the beta versions. This graduation also means that you can rely on HPA in the long run, since it’s production-ready and now a stable part of the Kubernetes API.
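Migrating is mostly a matter of changing the apiVersion in your manifests. The sketch below (resource names are illustrative) targets the now-stable autoscaling/v2 API and scales a deployment on CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    # Add replicas when average CPU utilization across pods exceeds 80%.
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```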

CRD Validation Expression Language

CustomResourceDefinition (CRD) is the robust abstraction layer that extends Kubernetes and makes it work with all possible custom-defined resources. Because users define the new custom resources and their specifications, validation can be tricky, traditionally requiring webhooks, controllers, and client tools.

Thankfully, an inline expression language, the Common Expression Language (CEL), has been proposed and integrated into CRDs for validation.

With the 1.23 release, validation rules are provided as an alpha feature, so you can add x-kubernetes-validations rules similar to the following example from the Kubernetes documentation:



      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-validations:
              - rule: "self.minReplicas <= self.replicas"
                message: "replicas should be greater than or equal to minReplicas."
              - rule: "self.replicas <= self.maxReplicas"
                message: "replicas should be smaller than or equal to maxReplicas."
            properties:
              minReplicas:
                type: integer
              replicas:
                type: integer
              maxReplicas:
                type: integer
            required:
              - minReplicas
              - replicas
              - maxReplicas

Let’s assume you want to create the following custom resource instance, which violates the second rule:

apiVersion: "stable.example.com/v1"

kind: CronTab

metadata:

  name: my-new-cron-object

spec:

  minReplicas: 0

  replicas: 20

  maxReplicas: 10

The Kubernetes API will respond with the following error message:

The CronTab "my-new-cron-object" is invalid:

* spec: Invalid value: map[string]interface {}{"maxReplicas":10, "minReplicas":0, "replicas":20}: replicas should be smaller than or equal to maxReplicas.

If you are using CRDs in your cluster, you currently have to implement validation in your OpenAPI schema and your controllers. With this new release, you can start migrating those checks to x-kubernetes-validations rules—and let the Kubernetes API do the cumbersome work for you.

Containers and Infrastructure

In this release, we found two noteworthy features from the Windows and Node SIGs: ephemeral containers and Windows privileged containers.

Ephemeral Containers

Ephemeral containers are temporary containers designed for observing the state of pods, troubleshooting, and debugging. This feature also comes with a CLI command that makes troubleshooting easier: kubectl debug. The new command runs a container in a pod, whereas the kubectl exec command runs a process in an existing container.

With v1.23, you’ll be able to add ephemeral containers as part of the pod specification under PodSpec.EphemeralContainers. They are similar to a regular container specification, but they cannot have resource requests or ports because they’re intended to be temporary additions to pods. For instance, you’ll be able to add a debian container to the my-service pod and connect interactively for live debugging, as shown below:

$ kubectl debug -it my-service --image=debian -- bash

root@debug:~# ps x

  PID TTY      STAT   TIME COMMAND

    1 ?        Ss     0:00 /pause

   11 ?        Ss     0:00 bash

  127 ?        R+     0:00 ps x

Ephemeral containers were already in alpha in v1.22, and they’ll graduate to beta in the v1.23 release. If you haven’t tried them yet, it’s a good time to create your debugging container images and add the kubectl debug command to your toolbox.

Windows Privileged Containers and Host Networking Mode

Privileged containers are powerful container instances, as they can reach and use host resources, similar to a process running directly on the host. Although they pose a security risk, they’re useful for managing host instances and are used heavily with Linux containers.

With the 1.23 release, privileged containers and the host networking mode for Windows instances will graduate to beta. If you have Windows nodes in your cluster, or plan to include them in the future, review the design document for their capabilities and the GA plan.
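For reference, a Windows HostProcess pod is requested through the pod’s security context. The sketch below uses illustrative names and an illustrative base image, and assumes a Windows node is available in the cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-process-demo
spec:
  securityContext:
    windowsOptions:
      # Run the container as a process on the Windows host.
      hostProcess: true
      runAsUserName: "NT AUTHORITY\\SYSTEM"
  hostNetwork: true
  containers:
    - name: admin-task
      image: mcr.microsoft.com/windows/nanoserver:ltsc2022  # illustrative image
      command: ["powershell.exe", "-Command", "Get-Service"]
  nodeSelector:
    kubernetes.io/os: windows
```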


Storage

There is one essential change from the Storage SIG that we want to emphasize for v1.23: volume ownership change during volume mounts.

Currently, before a volume is mounted, its permissions are recursively updated to match the fsGroup value in the pod specification. When the volume is large, changing ownership can lead to excessive wait times during pod creation. Therefore, a new field, securityContext.fsGroupChangePolicy, has been added to the pod specification to allow users to specify how permission and ownership changes should operate.

In v1.23, this feature has graduated to GA and you can specify the policy with the two following options:

  • Always: Always change the permissions and ownerships to match the fsGroup field.
  • OnRootMismatch: Only change the permissions and ownerships if the top-level directory does not match the fsGroup field.
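The policy is set in the pod’s security context. The sketch below (image, volume, and claim names are illustrative) only re-chowns the volume when the top-level directory’s ownership does not already match:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: database-pod
spec:
  securityContext:
    fsGroup: 2000
    # Skip the recursive chown/chmod if the root of the volume already matches.
    fsGroupChangePolicy: "OnRootMismatch"
  containers:
    - name: db
      image: postgres:14
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data
```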

If you’re running applications that are sensitive to permission changes, such as databases, you should check the new field and include it in your pod specifications to avoid excessive wait times during pod creation.


Networking

IPv6 support is a long-awaited feature in Kubernetes, especially since it was added as an alpha feature in v1.9. In the latest release, dual-stack IPv4/IPv6 networking has finally graduated to general availability.

This feature provides awareness of multiple IPv4/IPv6 addresses for pods and services, and it supports native IPv4-to-IPv4 communication in parallel with IPv6-to-IPv6 communication to, from, and within clusters.

Although Kubernetes provides dual-stack networking, you may be limited by the capabilities of the underlying infrastructure and your cloud provider. This is because nodes must have routable IPv4/IPv6 network interfaces and pods must have dual-stack networking attached. You also need a network plugin that is aware of dual-stack networking to assign IPs to pods and services.
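On a cluster where those prerequisites are met, requesting dual-stack addresses for a Service is a matter of setting its IP family policy. A minimal sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-dual-stack-service
spec:
  # Ask for both IPv4 and IPv6 addresses; falls back to
  # single-stack if the cluster cannot provide both.
  ipFamilyPolicy: PreferDualStack
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
```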

Some network plugins, such as kubenet, already support dual-stack networking, and ecosystem support is on the rise with kubeadm and kind.


Security

There is one essential enhancement from the Auth SIG in the v1.23 release that we’d like to note: the graduation of Pod Security Standards to beta.

In the previous release, Pod Security Standards were provided as an alpha feature to replace PodSecurityPolicy. They offer a way to limit pod permissions with the help of namespaces and labels and to implement policy enforcement, which we wrote about in our blog post for the v1.22 release.
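With the beta admission controller, the standards are applied per namespace through labels. A minimal sketch, assuming an illustrative namespace name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-restricted-ns
  labels:
    # Reject pods that violate the "restricted" standard,
    # pinned to the v1.23 definition of the policy.
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.23
    # Additionally surface warnings to users at creation time.
    pod-security.kubernetes.io/warn: restricted
```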

Now that the feature has graduated to beta, it’s a good time to include it in your deployments for greater security in your pods and clusters.


Conclusion

With the last release of 2021, Kubernetes delivers more scalable and reliable API and infrastructure enhancements. Furthermore, the improvements in storage, networking, and security make Kubernetes faster and more future-proof, reinforcing its position as the leading container orchestration platform in the industry.

To learn more about the latest enhancements, refer to the Kubernetes blog and the release notes.

Do you know whether your Kubernetes clusters, YAML files, and Helm charts are free of misconfigurations and vulnerabilities? Use Kubescape and get results in seconds.

To learn about the newer Kubernetes v1.25 release, you can read our dedicated blog post.

