This post is part of a multi-part series on how to use NetApp storage platforms to present persistent volumes in Kubernetes.
Kubernetes is an open source project, released by Google in June 2014, for automating the deployment, operation, and scaling of containerized applications. The community around Kubernetes has since exploded, and it has become one of the leading container deployment solutions.
A problem many people run into when using containerized applications is what to do with their data. Data written inside a container is ephemeral and only exists for the lifetime of the container it's written in. To solve this problem, Kubernetes offers a PersistentVolume subsystem that abstracts the details of how storage is provided from how it is consumed.
The PersistentVolume API provides several plugins for integrating your storage into Kubernetes for containers to consume. In this post, we'll focus on how to use the NFS plugin with ONTAP. More specifically, we will use a slightly modified version of the NFS example in the Kubernetes source code.
For this post, a single node clustered Data ONTAP 8.3 simulator was used. The setup and commands used are no different from those you would use in a production setup on real hardware.
In this setup, Kubernetes 1.2.2 was used in a single master and single node setup running on VirtualBox using Vagrant. For tutorials on how to run Kubernetes in nearly any configuration and on any platform you can imagine, check out the Kubernetes Getting Started guides.
The setup for ONTAP consists of the following steps.
- Create a Storage Virtual Machine (SVM) to host your NFS volumes
- Enable NFS for the SVM created
- Create a data LIF for Kubernetes to use
- Create an export policy to allow the Kubernetes hosts to connect
- Create an NFS volume for Kubernetes to use
Of course, you can skip any of these steps for which you already have the required resources in place.
Here is an example that follows these steps:
Create a Storage Virtual Machine (SVM) to host your NFS volumes
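On the ONTAP command line, SVM creation might look like the following sketch; the SVM, root volume, and aggregate names here are illustrative and should be replaced with your own:

```shell
vserver create -vserver svm_kube -rootvolume svm_kube_root -aggregate aggr1 -rootvolume-security-style unix
```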
Enable NFS for the SVM created
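Enabling NFS for the new SVM is a single command. This sketch enables NFSv3 only; adjust the protocol versions to match your clients:

```shell
vserver nfs create -vserver svm_kube -v3 enabled
```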
Create a data LIF for Kubernetes to use
The values specified in this example are specific to our ONTAP simulator. Update the appropriate values to match your environment.
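A data LIF for the SVM could be created along these lines; the node, port, address, and netmask below are from our simulator setup and are purely illustrative:

```shell
network interface create -vserver svm_kube -lif svm_kube_nfs_lif1 -role data -data-protocol nfs -home-node vsim-01 -home-port e0c -address 10.0.0.50 -netmask 255.255.255.0
```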
Create an export policy to allow the Kubernetes hosts to connect
In this case, we are allowing any host to connect via the clientmatch value. It's unlikely you'd want to do this in production; instead, set the value to match the IP range of your Kubernetes hosts.
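An export policy rule of this sort could be added to the SVM's default policy. The 0.0.0.0/0 clientmatch below permits any host, matching the wide-open setup described above:

```shell
vserver export-policy rule create -vserver svm_kube -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any -protocol nfs
```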
Create an NFS volume for Kubernetes to use
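Creating and junctioning the NFS volume might look like this; the volume name, size, and junction path are illustrative:

```shell
volume create -vserver svm_kube -volume kube_vol -aggregate aggr1 -size 10GB -junction-path /kube_vol -policy default -security-style unix
```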
Now that we have an NFS volume, we need to let Kubernetes know about it. To do this, we will create a PersistentVolume and a PersistentVolumeClaim. First, create the PersistentVolume definition and save it to a file.
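A PersistentVolume definition for this volume might look like the following sketch; the server address is the SVM's data LIF and the path is the volume's junction path from our example setup:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # Data LIF address and junction path from the ONTAP setup above
    server: 10.0.0.50
    path: "/kube_vol"
```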
Then create a PersistentVolumeClaim that uses the PersistentVolume, and save it to its own file.
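A matching PersistentVolumeClaim might look like this sketch; the requested storage and access mode must be satisfiable by the PersistentVolume above for the claim to bind:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```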
Now that we have a PersistentVolume definition and a PersistentVolumeClaim definition, we need to create them in Kubernetes.
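Assuming the definitions were saved as nfs-pv.yaml and nfs-pvc.yaml (the filenames are illustrative), creating and checking them looks like:

```shell
kubectl create -f nfs-pv.yaml
kubectl create -f nfs-pvc.yaml

# Verify that the claim has bound to the volume
kubectl get pv
kubectl get pvc
```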
At this point, we can spin up a container that uses the PersistentVolumeClaim we just created. To show this in action, we'll continue using the NFS example from the Kubernetes source code.
First, we'll set up a "fake" backend that updates an index.html file every 5 to 10 seconds with the current time and the hostname of the pod doing the update. Save the "fake" backend definition to a file.
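The upstream NFS example implements the "fake" backend as a busybox ReplicationController along these lines; note that the claimName must match the PersistentVolumeClaim created above:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command:
          - sh
          - -c
          # Rewrite index.html with the time and this pod's hostname every 5-10s
          - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
        volumeMounts:
          - name: nfs
            mountPath: "/mnt"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs
```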
Create the “fake” backend in Kubernetes.
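With the definition saved (assumed here as nfs-busybox-rc.yaml), creating it looks like:

```shell
kubectl create -f nfs-busybox-rc.yaml

# Confirm the backend pods are running
kubectl get pods -l name=nfs-busybox
```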
Next, we'll create a web server that also uses the NFS mount to serve the index.html file being generated by the "fake" backend. The web server consists of a pod definition and a service definition. Save each definition to its own file.
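The two definitions might look like the following sketch, shown together here for brevity. The upstream example actually runs nginx under a ReplicationController, which carries the same containers and volumes sections a bare pod definition would:

```yaml
# Web server: nginx serving the NFS-backed index.html
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: nginx
        ports:
          - name: web
            containerPort: 80
        volumeMounts:
          # Mount the NFS claim over nginx's default docroot
          - name: nfs
            mountPath: "/usr/share/nginx/html"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs
---
# Service exposing the web frontend inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: nfs-web
spec:
  ports:
    - port: 80
  selector:
    role: web-frontend
```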
Create the web server in Kubernetes.
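Assuming the files were saved as nfs-web-rc.yaml and nfs-web-service.yaml (again, illustrative names), create them with kubectl:

```shell
kubectl create -f nfs-web-rc.yaml
kubectl create -f nfs-web-service.yaml
```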
Now that everything is set up and running, we can verify that it is working as expected. Using the busybox container we launched earlier, we can make a request to nginx to check that the data is being served properly.
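One way to check, sketched from the upstream example: exec into one of the busybox pods and fetch the page from the nfs-web service's cluster IP. The pod name and IP below are placeholders to be filled in from your own cluster:

```shell
# Look up a busybox pod name and the web service's cluster IP
kubectl get pods -l name=nfs-busybox
kubectl get service nfs-web

# Request the page from inside the cluster (substitute real values)
kubectl exec <nfs-busybox-pod> -- wget -qO- http://<nfs-web-cluster-ip>
```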
As can be seen in this example, when we made a request to nginx, the response showed which pod last updated the index.html file and when it did so (Tue Apr 12 19:56:18 UTC 2016 in our run). We can continue making requests to nginx and watch this data get updated every 5-10 seconds.