What is new in Storage for Kubernetes 1.9
With the holiday season upon us, many in the Kubernetes community have been hard at work preparing one of the most significant sets of changes yet brought to the Kubernetes storage subsystem. These changes are being released in alpha; some introduce minor updates, while others are designed to fundamentally change the way storage is deployed within Kubernetes. This writeup highlights the major storage changes that have been released in version 1.9 of Kubernetes.
Raw Block Support
There are classes of applications, such as databases, that require direct access to raw block storage, without the abstraction of a filesystem, for peak performance and consistent I/O. Starting with version 1.9, an alpha implementation allows administrators and users to use existing Kubernetes storage semantics to specify and deploy raw block storage for consumption across a Kubernetes cluster. The feature is backward compatible with existing storage configurations and extends the PersistentVolume (PV), PersistentVolumeClaim (PVC), StorageClass, and Pod APIs so that provisioned block devices can be consumed either directly as raw block storage or, alternatively, through a filesystem.
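To illustrate, here is a minimal sketch of what requesting and consuming a raw block device can look like, assuming the alpha BlockVolume feature gate is enabled; the claim, pod, and device path names are hypothetical:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc                # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block                  # request the volume as a raw block device
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: raw-block-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeDevices:                 # volumeDevices (not volumeMounts) exposes the raw device
        - name: data
          devicePath: /dev/xvda      # path at which the container sees the device
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc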
Learn more about this feature here.
Additional Plugins with Volume Resize Capabilities
This feature has landed in version 1.9 as an alpha implementation. It allows an existing bound persistent volume to be resized after it has been provisioned. Resizing a volume is useful in many scenarios, most commonly when a workload needs additional storage space. An administrator can simply edit an existing PVC and update the size of its storage claim, as sketched below. This release updates several existing volume drivers to support volume resizing, including Ceph and Cinder, with additional drivers to follow in future releases.
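As a sketch of that workflow, assuming the alpha ExpandPersistentVolumes feature gate is enabled and a driver that supports resizing (Cinder, in this hypothetical example), the StorageClass opts in with allowVolumeExpansion and the administrator then raises the claim's storage request:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable-cinder             # hypothetical class name
provisioner: kubernetes.io/cinder
allowVolumeExpansion: true           # permit claims from this class to be resized
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim                   # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: resizable-cinder
  resources:
    requests:
      storage: 20Gi                  # edited up from 10Gi to trigger the resize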
Learn more about this feature here.
Local Storage with Topology-aware Volume Scheduling
Support for using local devices as persistent storage was introduced back in version 1.7. However, the Kubernetes scheduler did not treat storage as a first-class resource when determining whether a node could host a pod. Version 1.9 introduces topology-aware volume scheduling: a pod can now influence where it is scheduled based on the availability of local storage on a given node, rack locality, its geographical region, and so on. During pod scheduling, these volume constraints are taken into account along with other resource requests (such as memory and CPU) and affinity rules to land the pod on the proper node, if one is available. This refinement of local storage makes it possible to deploy new classes of applications, further strengthening Kubernetes as a platform for stateful workloads.
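As an illustration, here is a hedged sketch of a pre-provisioned local volume under the alpha VolumeScheduling feature gate; the node name and disk path are hypothetical, and the alpha annotation shown is how local volumes expressed node affinity in this release:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner    # local volumes are pre-provisioned
volumeBindingMode: WaitForFirstConsumer      # delay binding until a pod is scheduled
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
  annotations:
    volume.alpha.kubernetes.io/node-affinity: '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          {"matchExpressions": [
            {"key": "kubernetes.io/hostname", "operator": "In", "values": ["node-1"]}
          ]}
        ]}}'
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1                    # hypothetical local device mount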
Learn more about this feature here.
Container Storage Interface (CSI)
After months of discussing the Container Storage Interface (CSI) (here, here, and here), it has finally arrived! Version 1.9 of Kubernetes brings with it an alpha version of CSI, which was implemented in record time (given the holiday season, this release period is shorter than others in the year). This herculean effort was made possible, in part, by the use of gRPC and its interface definition language as the lingua franca for specifying the API that makes up CSI. All new storage drivers must use CSI to implement storage services for Kubernetes. Storage providers can develop their own storage drivers outside of the Kubernetes codebase, creating opportunities for fast-paced, innovative storage solutions that release more frequently than Kubernetes core.
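As an illustration, under the alpha CSIPersistentVolume feature gate a pre-provisioned PersistentVolume backed by a CSI driver can be described as follows; the driver name and volume handle are hypothetical:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: com.example.csi-driver     # hypothetical CSI driver name
    volumeHandle: existing-volume-id   # hypothetical ID of the volume in the backing system
    readOnly: false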
Learn more about this feature here.
ScaleIO Updates
The {code} team contributed the native ScaleIO in-tree driver back in March and has continually made slight improvements in each release. The Kubernetes 1.9 release (backported to 1.8.4) introduces the ability to decouple the base64-encoded Secret holding ScaleIO credentials into its own namespace while being referenced by a StorageClass, PVC, or PV in the default or a different namespace (see the sketch below). This prevents unauthorized access to ScaleIO secret information. Learn about this feature here. In addition, the kubelet can now be easily containerized, since the dependency on the drv_cfg binary for querying ScaleIO has been removed. This allows ScaleIO to be adopted by more Kubernetes distributions that embrace complete containerization.
Learn about this feature here.
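As a sketch, the decoupled Secret might look like the following, with the namespace name and credentials purely illustrative; the StorageClass, PVC, or PV living in another namespace then references it by name and namespace:

apiVersion: v1
kind: Secret
metadata:
  name: sio-secret
  namespace: sio-secrets               # hypothetical dedicated namespace
type: kubernetes.io/scaleio
data:
  username: YWRtaW4=                   # base64 for the placeholder "admin"
  password: cGFzc3dvcmQ=               # base64 for the placeholder "password"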
In Summary
Storage is really heating up in the Kubernetes space, and the {code} team is excited to be a part of it. Plan on seeing a lot of momentum behind CSI in 2018, along with future blog posts, including one next week documenting how to turn on the CSI alpha gate and get started using CSI plugins!