JD Wallace

Dec 13, 2020 · 3 min read

Exploring Kubernetes Persistent Storage with the Pure Service Orchestrator CSI Plugin

In this post I'll install the Pure Service Orchestrator (PSO) Kubernetes CSI Plugin and then use it to deploy an app with persistent storage on FlashArray. If you don't yet have a lab deployed or want details about the lab I'm using, see my post Kubernetes Cluster Setup with Kubeadm. Of course, you'll also need access to a FlashArray.

Also, I want to be sure to acknowledge the excellent work Chris Crow (Pure Storage Systems Engineer) has done. Chris has been leading a community meet-up for us all to get better acquainted with Kubernetes, PSO, and Portworx, and this post comes directly from my study of a lecture he originally developed. Thanks for your leadership, Chris!

Add the Pure Storage PSO helm repository to your helm installation.

helm repo add pure https://purestorage.github.io/pso-csi
helm repo update
helm search repo pure-pso -l

Create a values file (here is a sample) to let PSO know how to connect to your FlashArray; I've named mine PSOvalues.yaml to match the install command below. The minimum values are:

clusterID - A unique name to represent your deployment. Multiple clusters may be deployed targeting the same Pure storage, but each must have a unique clusterID.

MgmtEndPoint - The management IP of your FlashArray.

APIToken - An API Token for a user with at least Storage Admin rights on the FlashArray. This account will be used by PSO to orchestrate the FlashArray.

PSO may be configured to work with multiple FlashArrays as well as FlashBlades; for this post, however, I'll be using just a single FlashArray.

clusterID: homelab
arrays:
  FlashArrays:
    - MgmtEndPoint: "192.168.1.245"
      APIToken: "f5ba6d62-63c1-3a3a-f9d2-642de1124fef"

Create a new namespace for the PSO deployment.

kubectl create namespace pso

Install Pure PSO.

helm install pure-pso pure/pure-pso --namespace pso \
  -f PSOvalues.yaml

Expect output that looks something like this:

NAME: pure-pso
LAST DEPLOYED: Fri Dec 11 19:17:48 2020
NAMESPACE: pso
STATUS: deployed
REVISION: 1
TEST SUITE: None

After a short wait, the new PSO pods will all be running.

kubectl get pods -n pso
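If you'd rather not poll that command, kubectl can block until everything in the namespace reports ready. A minimal sketch; adjust the timeout to taste:

kubectl wait --for=condition=Ready pods --all -n pso --timeout=300s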

With PSO installed, we should now have two new storage classes: pure-block and pure-file.

kubectl get storageclass
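To see how the block class will handle new volumes (provisioner, reclaim policy, and any parameters), describe it:

kubectl describe storageclass pure-block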

Using MinIO to Explore PSO

MinIO is a software-defined object storage application that can be deployed as a Docker image. This makes it well suited to exploring persistent storage in a containerized environment. I'll deploy a MinIO container that uses PSO to provide FlashArray-backed persistent storage.

Create a Persistent Volume Claim with the pure-block storage class.

kubectl apply -f minio-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pvc
spec:
  storageClassName: pure-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

This will result in a new persistent volume claim and a corresponding persistent volume which exists on the FlashArray.
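Both objects can be confirmed from kubectl; with dynamic provisioning, the PV name will be an auto-generated pvc-<uuid> string:

kubectl get pvc minio-pvc
kubectl get pv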

This volume has been created on the FlashArray but, as you'll notice, it is not yet connected to any host.
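If you have CLI access to the array, one way to confirm that from the FlashArray side is to list volume connections; a sketch, assuming the purevol connection listing available in recent Purity releases:

purevol list --connect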

Deploy MinIO and use the minio-pvc PVC we created previously as the storage volume.

kubectl apply -f minio-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment-pure
spec:
  selector:
    matchLabels:
      app: minio-pure
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as the selector in the service.
        app: minio-pure
    spec:
      # Refer to the PVC created earlier
      volumes:
        - name: storage
          persistentVolumeClaim:
            # Name of the PVC created earlier
            claimName: minio-pvc
      containers:
        - name: minio-pure
          # Pulls the default MinIO image from Docker Hub
          image: minio/minio:latest
          args:
            - server
            - /storage
          env:
            # MinIO access key and secret key
            - name: MINIO_ACCESS_KEY
              value: "minio"
            - name: MINIO_SECRET_KEY
              value: "minio123"
          ports:
            - containerPort: 9000
              hostPort: 9001
          # Mount the volume into the pod
          volumeMounts:
            - name: storage # must match the volume name, above
              mountPath: "/storage"

We now have a MinIO pod deployed on node k8s-worker3.
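The node assignment is visible in the wide pod listing:

kubectl get pods -o wide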

If we go back and review our FlashArray volume, we can see that a host entry for the node k8s-worker3 has been created and the volume has been connected to that host.

Create a service so we can access the MinIO UI.

kubectl apply -f minio-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: NodePort
  ports:
    - port: 9001
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio-pure

View the new service.

kubectl get svc
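The NodePort Kubernetes assigned (32320 in my case; yours will likely differ) can also be pulled straight out of the service spec:

kubectl get svc minio-service -o jsonpath='{.spec.ports[0].nodePort}'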

Now if we browse to port 32320 on node k8s-worker3, we should connect to the MinIO UI.

Log in (minio/minio123), create a bucket, and upload some data.

Now we'll destroy the MinIO pod and allow the deployment to create a new instance. This time it's on node k8s-worker2.
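One way to do that is to delete the pod by the app label from the manifest above; the Deployment controller immediately schedules a replacement, and the wide listing shows where it landed:

kubectl delete pod -l app=minio-pure
kubectl get pods -o wide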

Connecting to the new Pod we find that our file still exists.
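You can also verify this from the shell without the UI. A sketch, using the deployment name from the manifest above; MinIO stores each bucket as a directory under /storage, so the uploaded object should appear in the recursive listing:

kubectl exec deploy/minio-deployment-pure -- ls -R /storage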

We can further investigate by viewing the FlashArray volume and we'll see that it has been automatically reassigned to the new node hosting our redeployed MinIO application.

Summary

With Pure Storage PSO, we've seen that we can leverage persistent FlashArray storage for Kubernetes deployments while letting PSO handle all of the orchestration.
