Deploy a CoreOS-based Kubernetes environment. This guide is aimed at OFFLINE systems, whether you are testing a PoC before the real deployment or your applications are restricted to a fully offline network.
| Node Description | MAC | IP |
|---|---|---|
| CoreOS/etcd/Kubernetes Master | d0:00:67:13:0d:00 | 10.20.30.40 |
| CoreOS Slave 1 | d0:00:67:13:0d:01 | 10.20.30.41 |
| CoreOS Slave 2 | d0:00:67:13:0d:02 | 10.20.30.42 |
To set up a CentOS PXELINUX environment there is a complete guide here. This section is the abbreviated version.
Install packages needed on CentOS
sudo yum install tftp-server dhcp syslinux
vi /etc/xinetd.d/tftp

To enable the tftp service, change disable to 'no':

disable = no
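If you prefer a scripted change, the same edit can be made non-interactively. This is a hedged sketch that assumes the stock CentOS /etc/xinetd.d/tftp still contains a "disable = yes" line:

```shell
# Flip "disable = yes" to "disable = no" in the xinetd tftp config
# (assumes the stock CentOS layout of /etc/xinetd.d/tftp)
sudo sed -i 's/^\(\s*disable\s*=\s*\)yes/\1no/' /etc/xinetd.d/tftp
```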
Copy over the syslinux images we will need.
su -
mkdir -p /tftpboot
cd /tftpboot
cp /usr/share/syslinux/pxelinux.0 /tftpboot
cp /usr/share/syslinux/menu.c32 /tftpboot
cp /usr/share/syslinux/memdisk /tftpboot
cp /usr/share/syslinux/mboot.c32 /tftpboot
cp /usr/share/syslinux/chain.c32 /tftpboot
/sbin/service dhcpd start
/sbin/service xinetd start
/sbin/chkconfig tftp on
Setup default boot menu
mkdir /tftpboot/pxelinux.cfg
touch /tftpboot/pxelinux.cfg/default
Edit the menu vi /tftpboot/pxelinux.cfg/default
default menu.c32
prompt 0
timeout 15
ONTIMEOUT local
display boot.msg

MENU TITLE Main Menu

LABEL local
        MENU LABEL Boot local hard drive
        LOCALBOOT 0
Now you should have a working PXELINUX setup to image CoreOS nodes. You can verify the services by PXE-booting a VM in VirtualBox locally or a bare metal server.
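One quick way to check the TFTP side before imaging anything is to pull pxelinux.0 from another machine on the network. This is a sketch that assumes the PXE host is 10.20.30.242 and a tftp client is installed:

```shell
# Fetch the boot loader over TFTP and confirm a non-empty file arrives
tftp 10.20.30.242 -c get pxelinux.0
ls -l pxelinux.0
```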
This section describes how to set up the CoreOS images to live alongside a pre-existing PXELINUX environment.
/tftpboot/ is our root directory. Download the CoreOS PXE files provided by the CoreOS team:
MY_TFTPROOT_DIR=/tftpboot
mkdir -p $MY_TFTPROOT_DIR/images/coreos/
cd $MY_TFTPROOT_DIR/images/coreos/
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe.vmlinuz.sig
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz
wget http://stable.release.core-os.net/amd64-usr/current/coreos_production_pxe_image.cpio.gz.sig
gpg --verify coreos_production_pxe.vmlinuz.sig
gpg --verify coreos_production_pxe_image.cpio.gz.sig
Edit the menu vi /tftpboot/pxelinux.cfg/default again:
default menu.c32
prompt 0
timeout 300
ONTIMEOUT local
display boot.msg

MENU TITLE Main Menu

LABEL local
        MENU LABEL Boot local hard drive
        LOCALBOOT 0

MENU BEGIN CoreOS Menu

    LABEL coreos-master
        MENU LABEL CoreOS Master
        KERNEL images/coreos/coreos_production_pxe.vmlinuz
        APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://

    LABEL coreos-slave
        MENU LABEL CoreOS Slave
        KERNEL images/coreos/coreos_production_pxe.vmlinuz
        APPEND initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://
This configuration file will now boot from the local drive by default, with the option to PXE-image CoreOS.
This section covers configuring the DHCP server to hand out our new images. In this case we are assuming that there are other servers that will boot alongside other images.
Add the filename to the host or subnet sections:

filename "/tftpboot/pxelinux.0";
At this point we want to make pxelinux configuration files that will be the templates for the different CoreOS deployments.
subnet 10.20.30.0 netmask 255.255.255.0 {
    next-server 10.20.30.242;
    option broadcast-address 10.20.30.255;
    filename "
    ...
    # http://www.syslinux.org/wiki/index.php/PXELINUX
    host core_os_master {
        hardware ethernet d0:00:67:13:0d:00;
        option routers 10.20.30.1;
        fixed-address 10.20.30.40;
        option domain-name-servers 10.20.30.242;
        filename "/pxelinux.0";
    }
    host core_os_slave {
        hardware ethernet d0:00:67:13:0d:01;
        option routers 10.20.30.1;
        fixed-address 10.20.30.41;
        option domain-name-servers 10.20.30.242;
        filename "/pxelinux.0";
    }
    host core_os_slave2 {
        hardware ethernet d0:00:67:13:0d:02;
        option routers 10.20.30.1;
        fixed-address 10.20.30.42;
        option domain-name-servers 10.20.30.242;
        filename "/pxelinux.0";
    }
    ...
}
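After editing dhcpd.conf it is worth validating the syntax and restarting the daemon. A minimal sketch, assuming the config lives at /etc/dhcp/dhcpd.conf:

```shell
# Syntax-check the DHCP configuration, then restart the service
dhcpd -t -cf /etc/dhcp/dhcpd.conf
/sbin/service dhcpd restart
```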
We will be specifying the node configuration later in the guide.
To deploy our configuration we need to create an etcd master. To do so we want to PXE CoreOS with a specific cloud-config.yml. There are two options here. For this demo we just make a static, single etcd server to host our Kubernetes and etcd master servers.
Since we are OFFLINE, most of the helper processes in CoreOS and Kubernetes are limited. For our setup we therefore have to download and serve up our Kubernetes binaries in our local environment. An easy solution is to host a small web server on the DHCP/TFTP host to make all of our binaries available to the local CoreOS PXE machines.
To get this up and running we are going to set up a simple apache server to serve the binaries needed to bootstrap Kubernetes.
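If apache is not already present on the PXE host, here is a minimal sketch of installing and starting it, using the same CentOS service-style commands as the rest of this guide:

```shell
# Install and start apache so /var/www/html is served over HTTP
sudo yum install httpd
/sbin/service httpd start
/sbin/chkconfig httpd on
```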
This is on the PXE server from the previous section:
rm /etc/httpd/conf.d/welcome.conf
cd /var/www/html/
wget -O kube-register https://github.com/kelseyhightower/kube-register/releases/download/v0.0.2/kube-register-0.0.2-linux-amd64
wget -O setup-network-environment https://github.com/kelseyhightower/setup-network-environment/releases/download/v1.0.0/setup-network-environment
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubernetes --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-apiserver --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-controller-manager --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-scheduler --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubectl --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubecfg --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kubelet --no-check-certificate
wget https://storage.googleapis.com/kubernetes-release/release/v0.15.0/bin/linux/amd64/kube-proxy --no-check-certificate
wget -O flanneld https://storage.googleapis.com/k8s/flanneld --no-check-certificate
This sets up the binaries we need to run Kubernetes. This would need to be enhanced to download updates from the Internet in the future.
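To confirm the web server is actually handing these files out, you can request one of them from another host. A quick check, assuming the PXE/web server is 10.20.30.242:

```shell
# A HEAD request should return 200 OK if apache is serving the binary
curl -I http://10.20.30.242/kubelet
```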
Now for the good stuff!
The following config files are tailored for the OFFLINE version of a Kubernetes deployment.
These are based on the work found here: master.yml, node.yml
To make the setup work, you need to replace a few placeholders:

- Replace <PXE_SERVER_IP> with your PXE server IP address (e.g. 10.20.30.242)
- Replace <MASTER_SERVER_IP> with the Kubernetes master IP address (e.g. 10.20.30.40)
- Replace rdocker.example.com with your docker registry DNS name
- Replace rproxy.example.com with your proxy server (and port)

On the PXE server, create and fill in the variables: vi /var/www/html/coreos/pxe-cloud-config-master.yml.
#cloud-config
---
write_files:
  - path: /opt/bin/waiter.sh
    owner: root
    content: |
      #! /usr/bin/bash
      until curl http://127.0.0.1:4001/v2/machines; do sleep 2; done
  - path: /opt/bin/kubernetes-download.sh
    owner: root
    permissions: 0755
    content: |
      #! /usr/bin/bash
      /usr/bin/wget -N -P "/opt/bin" "http://
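Once a master has been PXE-booted with this cloud-config, you can confirm the static single-node etcd is answering on its client port. This mirrors the waiter.sh loop above, and assumes etcd is reachable on the master address from the table at the top of this guide (not just on localhost):

```shell
# etcd returns its machine list when it is up and reachable
curl http://10.20.30.40:4001/v2/machines
```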
On the PXE server, create and fill in the variables: vi /var/www/html/coreos/pxe-cloud-config-slave.yml.
#cloud-config
---
write_files:
  - path: /etc/default/docker
    content: |
      DOCKER_EXTRA_OPTS='--insecure-registry="rdocker.example.com:5000"'
coreos:
  units:
    - name: 10-eno1.network
      runtime: true
      content: |
        [Match]
        Name=eno1
        [Network]
        DHCP=yes
    - name: 20-nodhcp.network
      runtime: true
      content: |
        [Match]
        Name=en*
        [Network]
        DHCP=none
    - name: etcd.service
      mask: true
    - name: docker.service
      drop-ins:
        - name: 50-insecure-registry.conf
          content: |
            [Service]
            Environment="HTTP_PROXY=http://rproxy.example.com:3128/" "NO_PROXY=localhost,127.0.0.0/8,rdocker.example.com"
    - name: fleet.service
      command: start
      content: |
        [Unit]
        Description=fleet daemon
        Wants=fleet.socket
        After=fleet.socket
        [Service]
        Environment="FLEET_ETCD_SERVERS=http://
Create a pxelinux target file for a slave node: vi /tftpboot/pxelinux.cfg/coreos-node-slave
default coreos
prompt 1
timeout 15

display boot.msg

label coreos
  menu default
  kernel images/coreos/coreos_production_pxe.vmlinuz
  append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://
And one for the master node: vi /tftpboot/pxelinux.cfg/coreos-node-master
default coreos
prompt 1
timeout 15

display boot.msg

label coreos
  menu default
  kernel images/coreos/coreos_production_pxe.vmlinuz
  append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://
Now that we have our new targets set up for master and slave, we want to point the specific hosts at those targets. We will do this by using the pxelinux mechanism of mapping a specific MAC address to a specific pxelinux.cfg file.
Refer to the MAC address table at the beginning of this guide. Documentation with more details can be found here.
cd /tftpboot/pxelinux.cfg
ln -s coreos-node-master 01-d0-00-67-13-0d-00
ln -s coreos-node-slave 01-d0-00-67-13-0d-01
ln -s coreos-node-slave 01-d0-00-67-13-0d-02
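pxelinux looks up a config file named 01- followed by the client MAC address with dashes (the 01 prefix denotes Ethernet), so it is worth confirming the symlinks resolve before rebooting anything. A quick check:

```shell
# Each link should point at the intended coreos-node-* target file
ls -l /tftpboot/pxelinux.cfg/01-d0-00-67-13-0d-0*
```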
Reboot these servers to get the images PXEd and ready for running containers!
Now that CoreOS with Kubernetes installed is up and running, let's spin up some Kubernetes pods to demonstrate the system.
See a simple nginx example to try out your new cluster.
For more complete applications, please look in the examples directory.
List all keys in etcd:
etcdctl ls --recursive
List fleet machines:
fleetctl list-machines
Check system status of services on master:
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler
systemctl status kube-register
Check system status of services on a node:
systemctl status kube-kubelet
systemctl status docker.service
List Kubernetes pods and nodes:

kubectl get pods
kubectl get nodes
Kill all pods:

for i in $(kubectl get pods | awk '{print $1}'); do kubectl stop pod $i; done