Explore all the new features in Kubernetes 1.30
Kubernetes 1.30, nicknamed "Uwubernetes," marks the first release of 2024. This release brings 45 enhancements in total: 10 new or improved Alpha features, 18 Beta features enabled by default, and 17 features graduating to Stable. The name "Uwubernetes" combines Kubernetes with "uwu," an emoticon expressing happiness and cuteness, celebrating the global community that builds and maintains Kubernetes.
In this article, we'll walk through some of the major changes in Kubernetes v1.30. Let's dive in:
1. Container Resource-Based Pod Autoscaling (Stable) #1610
Feature-group: sig-autoscaling
Horizontal Pod Autoscaling (HPA) in Kubernetes allows automatic scaling of the number of pods in a deployment, replication controller, or replica set based on observed CPU utilization or other select metrics.
Previously, HPA only considered the overall resource usage of all containers in a pod combined, which could lead to inaccurate scaling decisions in multi-container pods.
Kubernetes 1.30 stabilizes the container resource type metric for HPA, allowing for more granular and accurate autoscaling based on individual container resource usage within pods. This feature is particularly useful for pods with sidecar containers or microservices architectures where different containers within a pod have varying resource consumption patterns.
type: ContainerResource
containerResource:
  name: cpu
  container: main-app
  target:
    type: Utilization
    averageUtilization: 70
This metric entry scales the deployment based only on the CPU utilization of the 'main-app' container, targeting an average utilization of 70%.
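For context, here is a minimal sketch of a complete HorizontalPodAutoscaler that embeds this metric; the target Deployment name (my-app) and the replica bounds are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: ContainerResource
    containerResource:
      name: cpu
      container: main-app
      target:
        type: Utilization
        averageUtilization: 70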
2. Structured Parameters for Dynamic Resource Allocation (Alpha) #4381
Feature-group: sig-node
Dynamic Resource Allocation (DRA) in Kubernetes allows for a more flexible and efficient allocation of specialized hardware resources, such as GPUs or FPGAs. Previously, the lack of structured parameters made it challenging for the scheduler to make decisions about resource availability and allocation.
Kubernetes 1.30 introduces structured parameters for DRA, addressing limitations in the original alpha implementation from v1.26. This feature allows third-party DRA drivers to describe resources using pre-defined "structured models," enabling components like the scheduler to make informed decisions without relying on opaque parameters. The new framework supports various models, with the "named resources" model implemented in this release. This improvement allows for better allocation of specialized hardware resources like GPUs.
apiVersion: resource.k8s.io/v1alpha2
kind: ResourceClaim
metadata:
  name: gpu-claim
spec:
  resourceClassName: gpu.example.com
  parametersRef:
    apiGroup: gpu.resource.example.com
    kind: GpuClaimParameters
    name: gpu-parameters
This claim requests a device from the gpu.example.com resource class. Driver-specific details, such as the GPU model (e.g. nvidia-tesla-v100) or memory capacity, are supplied through the object referenced by parametersRef, whose schema is defined by the DRA driver; the names shown here are illustrative.
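To show how such a claim is consumed, here is a sketch of a Pod that references it through spec.resourceClaims (the image name is a placeholder):
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: trainer
    image: my-gpu-workload:latest   # placeholder image
    resources:
      claims:
      - name: gpu                   # refers to the entry under spec.resourceClaims
  resourceClaims:
  - name: gpu
    source:
      resourceClaimName: gpu-claim  # the ResourceClaim defined above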
3. Node Memory Swap Support (Beta) #2400
Feature-group: sig-node
Swap memory is a portion of hard drive space used as an extension of RAM when physical memory is full. In previous Kubernetes setups, swap was disabled by default due to concerns about unpredictable performance issues.
Kubernetes 1.30 reworks swap handling on Linux nodes. The kubelet's swapBehavior setting now supports NoSwap (the default) and LimitedSwap. Under LimitedSwap, only Burstable-QoS pods may use swap, and the amount each pod can use is proportional to its memory request relative to the node's total memory, giving better overall memory utilization without unbounded swapping. To enable this feature, users need to configure the kubelet appropriately.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false   # allow the kubelet to start on a node that has swap enabled
featureGates:
  NodeSwap: true
memorySwap:
  swapBehavior: LimitedSwap
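Assuming swap has already been provisioned on the node (for example as a swap partition or swap file), you can confirm it is active before restarting the kubelet with the configuration above:
swapon --show
free -h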
4. Structured Authorization Configuration (Beta) #3221
Feature-group: sig-auth
Authorization in Kubernetes involves determining whether an authenticated entity has permission to perform specific actions on resources within the cluster. Previously, configuring authorization policies required specifying command-line flags for the API server.
Kubernetes 1.30 simplifies authorization configuration through a structured file format. It allows the creation of authorization chains with multiple webhooks, explicit deny policies and CEL pre-filter rules. The API server can now automatically reload the authorizer chain when the configuration file changes.
Example of authorization-config.yaml:
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
  - type: Webhook
    name: my-webhook
    webhook:
      timeout: 3s
      failurePolicy: Deny
      authorizedTTL: 300s
      unauthorizedTTL: 30s
      subjectAccessReviewVersion: v1
      matchConditionSubjectAccessReviewVersion: v1
      connectionInfo:
        type: KubeConfigFile
        kubeConfigFile: /etc/kubernetes/my-webhook.kubeconfig   # example path
      matchConditions:
        - expression: request.resourceAttributes.namespace == 'kube-system'
To use this configuration, you would run the API server with:
--authorization-config=/path/to/authorization-config.yaml
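To illustrate the chaining mentioned above, here is a rough sketch (not a complete configuration) that places the built-in Node and RBAC authorizers around the webhook, evaluated in order:
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
  - type: Node
    name: node
  - type: Webhook
    name: my-webhook
    webhook:
      # ... same webhook settings as in the example above ...
  - type: RBAC
    name: rbac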
5. User Namespaces in Pods (Beta) #127
Feature-group: sig-node
User namespaces in Linux provide isolation for user and group IDs between the host and containers. This feature enhances security by preventing potential UID/GID conflicts between the host and containerized processes.
Kubernetes 1.30 moves user namespace support to Beta. When enabled for a pod, the UIDs and GIDs used inside the pod are mapped to a separate, unprivileged range on the host, so even a process running as root inside the pod does not hold root's ID on the host. This prevents overlapping IDs between the host and pods and reduces the impact of container breakouts.
apiVersion: v1
kind: Pod
metadata:
  name: user-pod
spec:
  hostUsers: false   # run this pod in its own user namespace
  containers:
  - name: container
    image: nginx
With hostUsers: false, the pod runs in its own user namespace: the IDs used inside the pod (including root) are mapped by the kubelet to a distinct, unprivileged ID range on the host, isolated from the host's IDs and from other pods.
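One way to inspect the mapping (assuming the image ships standard coreutils) is to read the UID map from inside the pod; the host-side range shown depends on what the kubelet allocated:
kubectl exec user-pod -- cat /proc/self/uid_map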
6. SELinux Label Optimization (Alpha) #1710
Feature-group: sig-storage, sig-node
Security-Enhanced Linux (SELinux) is a security architecture that provides fine-grained access control for Linux systems. In Kubernetes, SELinux can enhance pod security by ensuring compliance with specified security policies.
Kubernetes 1.30 introduces an alpha feature that optimizes how SELinux labels are applied to volumes, expanding on the optimization introduced in v1.27. Where the earlier work sped up relabeling only for ReadWriteOncePod volumes (controlled by the SELinuxMountReadWriteOncePod feature gate), v1.30 extends the approach to all volume types behind the SELinuxMount feature gate: instead of recursively relabeling every file, the volume is mounted with the correct SELinux context directly. This is particularly beneficial for volumes containing many files, but it introduces behavioral changes when multiple pods with different SELinux labels share a volume. See KEP #1710 for the details.
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    seLinuxOptions:
      level: s0:c123,c456
  containers:
  - name: nginx
    image: nginx
    securityContext:
      seLinuxOptions:
        level: s0:c123,c456
    volumeMounts:
    - mountPath: "/data"
      name: secure-vol
  volumes:
  - name: secure-vol
    persistentVolumeClaim:
      claimName: secure-pvc
This configuration applies the same SELinux level to the pod and its container; with the optimization enabled, the volume can be mounted with that context directly instead of having every file relabeled.
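To check the labels actually applied to the volume's contents, you can list the SELinux context from inside the pod (this assumes the image's ls supports -Z and the node runs with SELinux enforcing):
kubectl exec secure-pod -- ls -lZ /data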
7. Job Success/Completion Policy (Alpha) #3998
Feature-group: sig-apps
Kubernetes Jobs are used for running batch processes or one-off tasks. Previously, a Job was considered complete only when all its pods succeeded, which wasn't always efficient for workflows such as leader-worker patterns or simulations with different parameters.
Kubernetes 1.30 introduces an alpha feature (behind the JobSuccessPolicy feature gate) that allows custom success criteria for Jobs via the .spec.successPolicy field. It is particularly useful for indexed Jobs where only specific "leader" pods need to succeed for the overall Job to be considered complete. The policy is defined by rules that can specify succeededIndexes, succeededCount, or both. Once a Job meets the defined success policy, the job controller marks it as succeeded and terminates any lingering pods.
apiVersion: batch/v1
kind: Job
metadata:
  name: simulation-job
spec:
  completions: 5
  parallelism: 5
  completionMode: Indexed
  successPolicy:
    rules:
    - succeededIndexes: "0,2-4"
      succeededCount: 3
  template:
    spec:
      containers:
      - name: simulator
        image: simulator:v1
        command: ["run-simulation"]
      restartPolicy: Never
This Job is considered successful once at least 3 of the pods with indexes 0, 2, 3, and 4 complete successfully.
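Once the policy is met, the controller marks the Job as succeeded and terminates any remaining pods; you can observe this in the Job's status conditions:
kubectl get job simulation-job -o jsonpath='{.status.conditions}'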
8. Add Interactive flag for kubectl delete Command (Stable) #3896
Feature-group: sig-cli
kubectl delete is a powerful but dangerous command, capable of irreversibly removing resources from a cluster. Accidental deletions can lead to significant operational issues.
Kubernetes 1.30 stabilizes the interactive mode for kubectl delete, adding an extra layer of safety by requiring user confirmation before executing deletions.
kubectl delete pod mypod -n mynamespace --interactive
This command will ask for confirmation before deleting the specified pod.
9. Routing Preferences for Services (Alpha) #4444
Feature-group: sig-network
In multi-zone Kubernetes clusters, efficient traffic routing can significantly impact application performance and network costs. Previously, Kubernetes lacked native support for zone-aware routing preferences.
Kubernetes 1.30 introduces an alpha feature for specifying routing preferences on Services via the spec.trafficDistribution field, allowing for more efficient traffic distribution, particularly in multi-zone environments. By enabling the ServiceTrafficDistribution feature gate, users can set the PreferClose value, which tells the control plane to prefer endpoints that are topologically close to the client (typically in the same zone), falling back to other endpoints when none are available nearby.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  trafficDistribution: PreferClose
This configuration instructs the service to prefer routing traffic to endpoints in the same zone as the client when possible.
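With PreferClose set, the control plane records zone hints on the Service's EndpointSlices, which kube-proxy consults when routing; you can inspect them like this:
kubectl get endpointslices -l kubernetes.io/service-name=my-service -o yaml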
We've covered the major changes in Kubernetes 1.30, but the release includes 45 KEPs (Kubernetes Enhancement Proposals) in total, spanning storage, networking, security, and more. We encourage you to explore the full Kubernetes v1.30 release notes. Looking to adopt Kubernetes or optimize your existing infrastructure? Reach out to us at KubeNine, and we can help you navigate the complexities of Kubernetes and leverage its latest features.