Run Single Node Kubernetes Cluster on OpenStack

Simple HOWTOs for running Kubernetes on OpenStack are surprisingly hard to find, so I cooked one up.

While there is a kube-up.sh in Kubernetes that can (supposedly) spin up a Kubernetes cluster on OpenStack, I find the easiest and quickest way is to use local-up-cluster.sh in the Kubernetes source tree.

First, spin up a Nova instance on OpenStack and make sure docker, golang, etcd, and openssl are installed.
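
For example, on a CentOS/RHEL-style image (just an assumption; adjust the package manager and package names for your distro), the prerequisites can be installed like this:

[code language=”bash”]
# assuming a CentOS/RHEL-style image; package names may differ on other distros
yum install -y docker golang etcd openssl git
systemctl enable docker
systemctl start docker
[/code]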

Then follow the instructions from the OpenStack documentation to get the RC file:

“Download and source the OpenStack RC file

  1. Log in to the dashboard and from the drop-down list select the project for which you want to download the OpenStack RC file.

  2. On the Project tab, open the Compute tab and click Access & Security.

  3. On the API Access tab, click Download OpenStack RC File and save the file. The filename will be of the form PROJECT-openrc.sh where PROJECT is the name of the project for which you downloaded the file.

  4. Copy the PROJECT-openrc.sh file to the computer from which you want to run OpenStack commands.”

Use the OpenStack RC file to create your OpenStack cloud config for Kubernetes in the following format:

[code language=”bash”]

# cat /etc/cloud.conf
[Global]
auth-url =
username =
password =
tenant-name =
region =
[/code]
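
Since the RC file exports the standard OS_* environment variables, one way to fill in the fields (a sketch assuming Keystone v2-style password credentials) is to source it and generate the config:

[code language=”bash”]
# source the RC file first, then populate /etc/cloud.conf from the OS_* variables
source PROJECT-openrc.sh
cat > /etc/cloud.conf <<EOF
[Global]
auth-url = $OS_AUTH_URL
username = $OS_USERNAME
password = $OS_PASSWORD
tenant-name = $OS_TENANT_NAME
region = $OS_REGION_NAME
EOF
[/code]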

Then clone the Kubernetes source tree and apply my patch from PR 25750 (if it has not been merged yet).
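
For example (a sketch that uses GitHub's generic pull-request refs to grab the patch while it is still open; nothing here is specific to the patch itself):

[code language=”bash”]
git clone https://github.com/kubernetes/kubernetes
cd kubernetes
# fetch and merge the PR branch only if it has not been merged upstream yet
git fetch origin pull/25750/head:pr-25750
git merge pr-25750
[/code]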

Then you can spin up a local cluster under Kubernetes source tree using the following command:

[code language=”bash”]
# find Nova instance name and override hostname
ALLOW_PRIVILEGED=true CLOUD_PROVIDER=openstack CLOUD_CONFIG=/etc/cloud.conf HOSTNAME_OVERRIDE="rootfs-dev" hack/local-up-cluster.sh
[/code]

 

Start Single Kubernetes Cluster on AWS EC2

[code language=”bash”]

# get a copy of kubernetes source

$ git clone https://github.com/rootfs/kubernetes; cd kubernetes

# put AWS access key id and secret in ~/.aws/credentials like the following
# ~/.aws/credentials
#[default]
#aws_access_key_id = ……
#aws_secret_access_key = ….

# get the host name from EC2 management console and use host name as override
$ ALLOW_PRIVILEGED=true LOG_LEVEL=5 CLOUD_PROVIDER="aws" HOSTNAME_OVERRIDE="ip-172-18-14-238.ec2.internal" hack/local-up-cluster.sh
[/code]

Run Azure CLI on RHEL 7

My usual bookkeeping.

[code language=”bash”]

yum install nodejs010-nodejs

source /opt/rh/nodejs010/enable

wget http://aka.ms/linux-azure-cli -O azure-cli.tgz

tar xzvf azure-cli.tgz

cd bin

npm install

# make sure azure account is available and follow the process to authenticate

./azure login

# should be ready to use azure cli now

./azure vm list

# switch to Azure Resource Manager (arm) mode

./azure config mode arm

[/code]

Run Kubernetes End-to-End Volume Tests on CentOS

With a couple of fixes, Kubernetes can run volume e2e tests on a local CentOS cluster.

On Fedora/CentOS/RHEL, after a git clone of the latest Kubernetes source:

Start up a local cluster 

[code language=”bash”]
ALLOW_PRIVILEGED=true ALLOW_SECURITY_CONTEXT=true hack/local-up-cluster.sh
[/code]

Run Volume e2e tests

[code language=”bash”]
KUBERNETES_PROVIDER=centos KUBERNETES_CONFORMANCE_TEST=y hack/ginkgo-e2e.sh --ginkgo.focus=Volumes
[/code]

That’s it!

The volume e2e tests cover the volume plugins (NFS, Glusterfs, iSCSI, CephFS, Ceph RBD, OpenStack Cinder). Each test creates a containerized server and a client Pod whose mount path uses the volume type under test; the client expects to see a pre-created HTML file on the server. The Persistent Volumes test creates an NFS server, a Persistent Volume (PV) backed by the NFS share with a Recycle reclaim policy, and a Persistent Volume Claim (PVC) that binds to the NFS PV. After the PVC is bound, it is immediately deleted; the NFS PV is then recycled, deleting all the content on it.
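
For reference, here is a minimal sketch (not the exact objects the test creates; the server IP, names, and size are made up) of an NFS PV with a Recycle reclaim policy plus a matching PVC:

[code language=”bash”]
cat <<EOF | cluster/kubectl.sh create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 10.1.2.3     # IP of the containerized NFS server (example)
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
[/code]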

 

More test cases are welcome!

Yet Another Containerized Mounter for Kubernetes

Not all OSes that run Kubernetes have filesystem mount binaries installed. This calls for a way to package these mount binaries somewhere Kubernetes can find them, so it can run them to mount filesystems on the host.

Previously I tried to containerize the mount binaries and dynamically create a Pod inside kubelet (a so-called sidecar container). This works fine, but it creates another problem: how to manage the mount Pod’s lifecycle if the mount is a long-running process (e.g. FUSE).

Inspired by a recent Kubernetes Storage SIG meeting, I experimented with a DaemonSet-initiated containerized mount. The flow can be found here.

The experimental code can be found at my repo. The components are:

  • Use Docker 1.10+ to get the mount namespace propagation feature.
  • Update the docker systemd unit file so that MountFlags is rshared, or use a hack (see the sketch after this list).
  • Make sure kubelet supports privileged containers.
  • A DaemonSet that provides a RESTful server and executes the mount command. I have a simple container for that job; the container is used in the DaemonSet.
  • A ConfigMap that provides information about how to access the DaemonSet. It is defined here.
  • A DaemonSet mounter that implements the mount interface.
  • Make the volume plugin use the DaemonSet mounter when no filesystem mount binaries are available.
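
Here is a sketch of the docker unit change mentioned in the list above (assuming a systemd drop-in; depending on the systemd version, shared may be the accepted value rather than rshared):

[code language=”bash”]
# drop-in that makes docker's mount namespace propagation shared
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/mount-propagation.conf <<EOF
[Service]
MountFlags=shared
EOF
systemctl daemon-reload
systemctl restart docker
[/code]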

To use the DaemonSet mounter, the DaemonSet and ConfigMap must be created first. I provide a script that illustrates how to use this feature to mount Glusterfs.

Cloud Storage Plugin in Kubernetes

Kubernetes can be deployed on multiple Clouds: AWS EC2, Google Cloud, OpenStack, etc. On most Clouds, Kubernetes hosts run on virtual machines. Since Cloud providers already have ways to provision storage volumes (mostly block stores such as AWS EBS, GCE PD, and OpenStack Cinder), it makes sense for Kubernetes to call out to the Cloud providers to provision the storage, attach it to the virtual machines, and let the virtual machine surface the volumes in its device table (mostly thanks to udev). Kubernetes then uses some built-in rules (e.g. searching /dev/disk/by-path) to find the volumes, make a filesystem on them (if necessary), and mount them.

This process is roughly illustrated below. Note that currently the attach is initiated by the kubelet; this function is going to move to the Kubernetes master (controller manager) later.

[Figure: cloud-provider volume flow]

In this picture, the components involved are the Cloud providers, the volume plugins, and the kubelet. I explained them a bit in my gist (for better formatting).
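
Conceptually, the host-side part of the flow boils down to something like the sketch below (the path ID, filesystem type, and mount point are placeholders, not the actual kubelet code):

[code language=”bash”]
# after the cloud provider attaches the volume, udev exposes it under /dev/disk/by-path
DEV=$(readlink -f "/dev/disk/by-path/${PATH_ID}")   # PATH_ID: placeholder for the new device entry
blkid "$DEV" || mkfs -t ext4 "$DEV"                 # make a filesystem only if none exists yet
mkdir -p /mnt/k8s-vol
mount "$DEV" /mnt/k8s-vol
[/code]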

Use Azure File Storage in Kubernetes

My recent work on integrating Microsoft Azure File Storage with Kubernetes storage is available for testing.

Azure File Storage is basically an SMB 3.0 file share. Each time a VM needs a file share, you can use your storage account to create one. There is one limitation for Linux VMs: because the kernel CIFS implementation lacks encryption support, a Linux VM must be colocated with the file share in the same Azure region. Thus, for now, Kubernetes hosts must live in Azure Compute VMs to access their Azure file share.
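
Under the hood this is a plain CIFS mount; a manual sketch outside Kubernetes (the storage account name, share name, and key are placeholders) looks like this:

[code language=”bash”]
# STORAGE_ACCOUNT, SHARE_NAME and STORAGE_KEY are placeholders for your own values
mkdir -p /mnt/azfile
mount -t cifs //${STORAGE_ACCOUNT}.file.core.windows.net/${SHARE_NAME} /mnt/azfile \
  -o vers=3.0,username=${STORAGE_ACCOUNT},password=${STORAGE_KEY},dir_mode=0777,file_mode=0777
[/code]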

It is also possible to use Azure Block Blob storage for Kubernetes, though that will require more effort and new APIs from Azure.