Table of Contents

- libvirt-coreos use case
- kube-* scripts

## libvirt-coreos use case

The primary goal of the libvirt-coreos cluster provider is to deploy a multi-node Kubernetes cluster on local VMs as fast as possible and to be as light as possible in terms of resources used.
In order to achieve that goal, its deployment is very different from the “standard production deployment” method used on other providers. This is deliberate: it enables optimizations that are only possible because all the VMs are known to run on the same physical machine.
The libvirt-coreos cluster provider doesn’t aim to be a production look-alike.
Another difference is that no security is enforced on libvirt-coreos at all. For example:

- the Kubernetes API server is reachable without any authentication;
- the etcd content is directly accessible.

So, a k8s application developer should not validate their interaction with Kubernetes on libvirt-coreos, because they might technically succeed in doing things that are prohibited on a production environment, like:

- unauthenticated access to the Kubernetes API server;
- direct access to Kubernetes internal data structures inside etcd.
On the other hand, libvirt-coreos might be useful for people investigating the low-level implementation of Kubernetes, because debugging techniques like sniffing the network traffic or introspecting the etcd content are easier on libvirt-coreos than on a production deployment.
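As an illustration, here is a rough sketch of both techniques, assuming the default cluster layout described later in this guide (master at 192.168.10.1), that etcdctl is available on the master, and that Kubernetes stores its state under the /registry prefix in etcd; adapt ports and paths to your versions:

```shell
# Introspect the Kubernetes data stored in etcd, from the master VM
# (assumes etcdctl is installed there and etcd listens on its default port)
ssh core@192.168.10.1 'etcdctl ls --recursive /registry'

# Sniff the clear-text API traffic on the master
# (assumes tcpdump is available and the API server listens on port 8080)
ssh core@192.168.10.1 'sudo tcpdump -A -i any port 8080'
```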
Before bringing up the cluster, enable and start the libvirt daemon:

```shell
systemctl enable libvirtd
systemctl start libvirtd
```

You can test it with the following command:
```shell
virsh -c qemu:///system pool-list
```

If you get access error messages, please read https://libvirt.org/acl.html and https://libvirt.org/aclpolkit.html.
In short, if your libvirt has been compiled with Polkit support (ex: Arch, Fedora 21), you can create /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules as follows to grant full access to libvirt to $USER:

```shell
sudo /bin/sh -c "cat - > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules" << EOF
polkit.addRule(function(action, subject) {
        if (action.id == "org.libvirt.unix.manage" &&
            subject.user == "$USER") {
                polkit.log("action=" + action);
                polkit.log("subject=" + subject);
                return polkit.Result.YES;
        }
});
EOF
```

If your libvirt has not been compiled with Polkit (ex: Ubuntu 14.04.1 LTS), check the permissions on the libvirt unix socket:
```console
$ ls -l /var/run/libvirt/libvirt-sock
srwxrwx--- 1 root libvirtd 0 Feb 12 16:03 /var/run/libvirt/libvirt-sock

$ usermod -a -G libvirtd $USER
# $USER needs to logout/login for the new group to be taken into account
```

(Replace $USER with your login name.)
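To pick up the new group without logging out, you can start a subshell with newgrp and re-run the access test; a minimal sketch:

```shell
# Apply the new group membership in the current session
newgrp libvirtd

# Verify that libvirt now accepts the connection
virsh -c qemu:///system pool-list
```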
All the disk drive resources needed by the VMs (CoreOS disk image, Kubernetes binaries, cloud-init files, etc.) are put inside ./cluster/libvirt-coreos/libvirt_storage_pool.
As we’re using the qemu:///system instance of libvirt, qemu will run with a specific user:group distinct from your user. It is configured in /etc/libvirt/qemu.conf. That qemu user must have access to that libvirt storage pool.
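To find out which user and group qemu will run as on your host, you can inspect /etc/libvirt/qemu.conf and then test access with that user; a sketch, assuming the qemu user reported is nobody (replace it with whatever your configuration says):

```shell
# Show the user/group settings (often present as commented-out defaults)
grep -E '^#?(user|group)' /etc/libvirt/qemu.conf

# Check that this user can reach the storage pool
sudo -u nobody ls ./cluster/libvirt-coreos/libvirt_storage_pool
```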
If your $HOME is world readable, everything is fine. If your $HOME is private, cluster/kube-up.sh will fail with an error message like:
```console
error: Cannot access storage file '$HOME/.../kubernetes/cluster/libvirt-coreos/libvirt_storage_pool/kubernetes_master.img' (as uid:99, gid:78): Permission denied
```

In order to fix that issue, you have several possibilities:

- set POOL_PATH inside cluster/libvirt-coreos/config-default.sh to a directory that the qemu user can access, or
- grant the qemu user's group traverse rights on your $HOME.

On Arch:

```shell
setfacl -m g:kvm:--x ~
```

By default, the libvirt-coreos setup will create a single Kubernetes master and 3 Kubernetes nodes. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation.
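If you are curious how much memory KSM actually saves across the VMs, you can watch the kernel's KSM counters on the host; a minimal sketch, assuming KSM is enabled on your host kernel and 4 KiB pages:

```shell
# Number of pages currently shared by KSM
cat /sys/kernel/mm/ksm/pages_sharing

# Approximate saving, in MiB
echo $(( $(cat /sys/kernel/mm/ksm/pages_sharing) * 4 / 1024 )) MiB
```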
To start your local cluster, open a shell and run:
```shell
cd kubernetes
export KUBERNETES_PROVIDER=libvirt-coreos
cluster/kube-up.sh
```

The KUBERNETES_PROVIDER environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
The NUM_MINIONS environment variable may be set to specify the number of nodes to start. If it is not set, the number of nodes defaults to 3.
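For example, to start a cluster with two nodes instead:

```shell
NUM_MINIONS=2 cluster/kube-up.sh
```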
The KUBE_PUSH environment variable may be set to specify which Kubernetes binaries must be deployed on the cluster. Its possible values are:
- release (default if KUBE_PUSH is not set) will deploy the binaries of _output/release-tars/kubernetes-server-….tar.gz. These are built with make release or make release-skip-tests.
- local will deploy the binaries of _output/local/go/bin. These are built with make.

You can check that your machines are there and running with:
```console
$ virsh -c qemu:///system list
 Id    Name                    State
----------------------------------------------------
 15    kubernetes_master       running
 16    kubernetes_minion-01    running
 17    kubernetes_minion-02    running
 18    kubernetes_minion-03    running
```

You can check that the Kubernetes cluster is working with:
```console
$ kubectl get nodes
NAME           LABELS   STATUS
192.168.10.2   <none>   Ready
192.168.10.3   <none>   Ready
192.168.10.4   <none>   Ready
```

The VMs are running CoreOS.
Your ssh keys have already been pushed to the VMs. (The setup looks for ~/.ssh/id_*.pub.)
The user to use to connect to the VM is core.
The IP to connect to the master is 192.168.10.1.
The IPs to connect to the nodes are 192.168.10.2 and onwards.
Connect to kubernetes_master:
```shell
ssh core@192.168.10.1
```

Connect to kubernetes_minion-01:

```shell
ssh core@192.168.10.2
```

## kube-* scripts

All of the following commands assume you have set KUBERNETES_PROVIDER appropriately:
```shell
export KUBERNETES_PROVIDER=libvirt-coreos
```

Bring up a libvirt-CoreOS cluster of 5 nodes:

```shell
NUM_MINIONS=5 cluster/kube-up.sh
```

Destroy the libvirt-CoreOS cluster:

```shell
cluster/kube-down.sh
```

Update the libvirt-CoreOS cluster with a new Kubernetes release produced by make release or make release-skip-tests:

```shell
cluster/kube-push.sh
```

Update the libvirt-CoreOS cluster with the locally built Kubernetes binaries produced by make:

```shell
KUBE_PUSH=local cluster/kube-push.sh
```

Interact with the cluster:

```shell
kubectl ...
```
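For example (a hypothetical session; pod.yaml stands in for your own manifest):

```shell
# Check that the nodes are Ready
kubectl get nodes

# Create resources from a manifest and watch them come up
kubectl create -f pod.yaml
kubectl get pods
```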
## Troubleshooting

Build the release tarballs:

```shell
make release
```

Install libvirt
On Arch:
```shell
pacman -S qemu libvirt
```

On Ubuntu 14.04.1:

```shell
aptitude install qemu-system-x86 libvirt-bin
```

On Fedora 21:

```shell
yum install qemu libvirt
```

Start the libvirt daemon
On Arch:
```shell
systemctl start libvirtd
```

On Ubuntu 14.04.1:

```shell
service libvirt-bin start
```

Fix libvirt access permission (Remember to adapt $USER)
On Arch and Fedora 21:
```shell
cat > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules <<EOF
polkit.addRule(function(action, subject) {
        if (action.id == "org.libvirt.unix.manage" &&
            subject.user == "$USER") {
                polkit.log("action=" + action);
                polkit.log("subject=" + subject);
                return polkit.Result.YES;
        }
});
EOF
```

On Ubuntu:
```shell
usermod -a -G libvirtd $USER
```

Ensure libvirtd has been restarted since ebtables was installed.
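For instance, on a systemd-based distribution (use the equivalent service command on Ubuntu 14.04.1):

```shell
# Restart libvirtd so that it picks up ebtables
systemctl restart libvirtd
```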