A Tale of Two Virtualizations: Where They Are Now

A while back I looked at the Clear Linux Containers and Hyper projects. Both were pushing the unification of hypervisors and containers. Now I see these projects merging into the wider ecosystem.

Clear Linux Containers is aiming at Docker land; its Docker execdriver is available here.

Hyper is probably aiming for Kubernetes inclusion, though I haven’t seen a pull request. Their container engine code appears ready to fly though.

A Very Rough Performance Comparison: File vs. TCM Loop vs. Loopback

This is a follow-up to my previous investigation of the loopback setup.

Test Environment:
Fedora 21 x86_64, 50GB RAM, 24 Core Intel(R) Xeon(R) CPU X5650 @ 2.67GHz, Kernel 3.17.8-300.fc21.x86_64

The backing file used for both TCM loop and loopback is a 200GB file created on an ext4 filesystem.

XFS is built on top of the loopback and TCM loop devices.
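A concrete sketch of the loopback half of this setup (paths and the resulting device name are assumptions; the TCM loop setup via targetcli is omitted):

```shell
# create a 200GB sparse backing file on the ext4 filesystem
truncate -s 200G /ext4/backing.img

# attach it to the first free loop device; prints the device name, e.g. /dev/loop0
LOOPDEV=$(losetup --find --show /ext4/backing.img)

# build XFS on top of the loop device and mount it for the fio runs
mkfs -t xfs "$LOOPDEV"
mkdir -p /mnt/loop-xfs
mount "$LOOPDEV" /mnt/loop-xfs
```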

I am sure there are problems with these simplistic tests. I would love to see whether they can be reproduced elsewhere.

Small IO

fio options: --ioengine=libaio --iodepth=4 --rw=rw --bs=4k --size=50G --numjobs=4

 Type          ext4 File   TCM Loop + XFS   Loopback + XFS
 RW Bandwidth  53MB/s      66MB/s           61MB/s

Large IO

fio options: --ioengine=sync --iodepth=4 --rw=rw --bs=1m --size=50G --numjobs=4

 Type          ext4 File   TCM Loop + XFS   Loopback + XFS
 RW Bandwidth  112MB/s     109MB/s          95MB/s

Loopback suffers from the so-called double-caching problem, where page cache is allocated twice for the same on-disk block. There have been attempts to fix this using O_DIRECT, but none have been merged into the kernel or loopback mount yet. Parallels' ploop is an O_DIRECT-enabled loopback variant, but I haven't tested it.
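A rough way to observe the double caching (run as root; the loop device name and the roughly-2x growth are assumptions based on the description above):

```shell
# start from a known page cache state
sync; echo 3 > /proc/sys/vm/drop_caches

# read 1GB through the loop device
dd if=/dev/loop0 of=/dev/null bs=1M count=1024

# Cached grows by roughly 2GB: once for the loop device's pages,
# once for the backing file's pages
grep '^Cached:' /proc/meminfo
```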

Light, Sound, and Net

This is a collection of information on using light (LED) and sound as network transport media.

  • OpenVLC: the publication can be found here; the GitHub repo is here.
  • Linux Light Bulbs: the publication can be found here; no reference to any source code (yet).
  • Sound communication: one publication from the same research group can be found here.
  • SoundWire and PulseAudio: sound over network (or network over sound).

Trim Down Kubernetes Node: Use Sidecar Pod

It is not uncommon to see Kubernetes nodes minimally configured. Both Red Hat Atomic and CoreOS occupy only a small footprint. The idea is that utilities can be loaded and executed in containers.

This creates a dilemma that Kubernetes has to deal with. As I pump out more and more volume plugins, I am increasingly asking for more packages (Ceph, GlusterFS, iSCSI, and Fibre Channel) to be available on the hosts.

This is where the Sidecar Pod solution shines. A Sidecar Pod is a Pod that is created, tracked, and stopped by the kubelet rather than the API server. Its purpose is to encapsulate utilities that the kubelet needs for tasks such as creating volumes on the host. A Sidecar Pod is created on the fly and exits silently, without the API server noticing.

I posted this issue and followed up with a pull request. The user-visible change to a Pod is adding a container's name as a sidecar.

To demonstrate Sidecar Pod usage, I created a Pod using an rbd volume:

[code language="bash"]
[root@host kubernetes]# ./cluster/kubectl.sh create -f sidecar.yaml
replicationcontroller "web" created
[/code]

I looked at the Pods immediately and found that two Pods were created; the Pod rbd-sidecar-qdsl8 was created by the kubelet.

[code language="bash"]
[root@host kubernetes]# ./cluster/kubectl.sh get pod
rbd-sidecar-qdsl8 0/1 Image: ceph/base is ready, container is creating 0 1s
web-fm2hn 0/1 Pending 0 2s
[/code]

After a while, the web Pod was created:

[code language="bash"]
[root@host kubernetes]# ./cluster/kubectl.sh get pod
web-fm2hn 1/1 Running 0 2m
[/code]

To see what the Sidecar Pod had done, I looked at the container history on the Kubernetes node:

[code language="bash"]
# docker ps -a |grep ceph |head
67e96093fed2 ceph/base "rbd lock add foo kub" 3 minutes ago Exited (0) 3 minutes ago k8s_rbd-sidecar.6f32a81f_rbd-sidecar-2t6yr_default_9fafe0cc-571d-11e5-a098-d4bed9b38fad_08ddc932
7ffb1e66580b ceph/base "rbd lock list foo --" 3 minutes ago Exited (0) 3 minutes ago k8s_rbd-sidecar.46639472_rbd-sidecar-a0gyc_default_9e5e658d-571d-11e5-a098-d4bed9b38fad_2c2a3b11
4843dae36466 ceph/base "rbd map foo --pool k" 3 minutes ago Exited (0) 3 minutes ago k8s_rbd-sidecar.60e6922b_rbd-sidecar-qdsl8_default_9d173a4d-571d-11e5-a098-d4bed9b38fad_0fcf4e57
[/code]

This shows that my Sidecar containers mapped and locked the rbd volume for me.


Anatomy of docker run

As explained in the Docker API documentation, the docker run command comprises several API calls. As seen in run.go, the image is first pulled if it is not available locally; then the container is created, attached to (if not detached), started, and waited on.
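The same sequence can be sketched with curl against the Docker daemon's unix socket (a sketch, assuming curl 7.40+ with --unix-socket support and that the centos image is already pulled):

```shell
SOCK=/var/run/docker.sock

# 1. create: the response is JSON containing the new container's Id
ID=$(curl -s --unix-socket $SOCK -H 'Content-Type: application/json' \
     -d '{"Image":"centos","Cmd":["bash","-c","date"]}' \
     http://localhost/containers/create \
     | sed 's/.*"Id":"\([^"]*\)".*/\1/')

# 2. start the container
curl -s -X POST --unix-socket $SOCK http://localhost/containers/$ID/start

# 3. wait for it to exit, then fetch its logs
curl -s -X POST --unix-socket $SOCK http://localhost/containers/$ID/wait
curl -s --unix-socket $SOCK "http://localhost/containers/$ID/logs?stdout=1"
```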

So it is possible to simulate docker run with a combination of docker create, docker start, and docker logs:

[code language="bash"]
# docker logs $(docker start $(docker create centos bash -c "date > /dev/null"))
[/code]


This can help when writing a docker run equivalent using the Docker API in Go.

Create Filesystem on Sparse Files

Create an ext4 filesystem on a sparse file.

[code language="bash"]
[root@server ~]# truncate -s 1G foo
[root@server ~]# losetup --find --show ./foo
/dev/loop3
[root@server ~]# mkfs -t ext4 /dev/loop3
[root@server ~]# mount /dev/loop3 /mnt
[root@server ~]# ls /mnt
[root@server ~]# df -h |grep loop3
/dev/loop3 976M 1.3M 908M 1% /mnt
[/code]
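The file created by truncate is sparse: ls reports the apparent 1G size, while du shows how few blocks are actually allocated on disk:

```shell
truncate -s 1G foo
ls -lh foo   # apparent size is 1G
du -h foo    # almost no blocks allocated yet
```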