How to Get Kubernetes Running – On Your Own Ubuntu Machine!

by Janaka Bandara

Note: This article largely borrows from my previous write-up on installing K8s 1.7 on CentOS.

Getting a local K8s cluster up and running is one of the first “baby steps” of toddling into the K8s ecosystem. While it would often be considered easier (and safer) to get K8s set up in a cluster of virtual machines (VMs), it does not really give you the same degree of advantage and flexibility as running K8s “natively” on your own host.

That is why, when we decided to upgrade our Integration Platform for K8s 1.7 compatibility, I decided to go with a native K8s installation for my dev environment: a laptop running Ubuntu 16.04 with 16GB RAM and a ?? CPU. While I could run three—at most four—reasonably powerful VMs there as my dev K8s cluster, I could do much better natively in terms of resource savings (the cluster would be just a few low-footprint services, rather than resource-hungry VMs) as well as ease of operation and management (start the services, and the cluster is ready in seconds). Whenever I wanted to try multi-node stuff like node groups or zones with fail-over, I could simply hook up one or two secondary (worker) VMs to get things going, and shut them down when I was done.

I had already been running K8s 1.2 on my machine, but the upgrade would not have been easy, as it is quite a hyperjump from 1.2 to 1.7. Luckily, the K8s guys had written their installation scripts in an amazingly structured and intuitive way, so I could get everything running after around one day's struggle (most of which went into understanding the command flow and modifying it to suit Ubuntu and my particular requirements).

I could easily have used the official kubeadm guide or the Juju-based installer for Ubuntu, but I wanted to get at least a basic idea of how things get glued together. Additionally, I wanted an easily reproducible installation from scratch, with minimal dependencies on external packages or installers, so that I could upgrade my cluster any time I desired by building the latest stable—or beta, or even alpha, if it comes to that—directly off the K8s source.

I started with the CentOS cluster installer, and gradually modified it to suit Ubuntu 16.04 (luckily the changes were minimal). Most steps were in line with the CentOS installation, including the artifact build (make at the source root) and modifications to exclude custom downloads for etcd, flanneld and docker (I built the former two from their sources, and the latter I had already installed via the apt package manager). However, the service configuration scripts had to be modified slightly to suit the Ubuntu (more precisely, Debian) filesystem layout.
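The unit-file location is the main filesystem difference in play: the CentOS scripts write systemd units under /usr/lib/systemd/system, while Debian/Ubuntu keeps them in /lib/systemd/system. A tiny probe like the following (a sketch of my own, not part of the original scripts) could make that choice automatically instead of patching each heredoc:

```shell
# Pick the first systemd unit directory that exists on this host;
# Debian/Ubuntu uses /lib/systemd/system, CentOS/RHEL /usr/lib/systemd/system.
UNIT_DIR=""
for d in /lib/systemd/system /usr/lib/systemd/system; do
  if [ -d "$d" ]; then
    UNIT_DIR=$d
    break
  fi
done
echo "systemd unit dir: ${UNIT_DIR:-not found}"
```

I patched the paths by hand instead, since the heredocs are few and easy to find.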

In my case I run the API server on port 8090 rather than the default 8080 (to leave room for other application servers that are fond of 8080), hence I had to make some additional changes to propagate the port change throughout the K8s platform.
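Propagating the port change mostly means tracking down every hard-coded `:8080` in the cluster scripts; the patches do that by hand, but the substitution itself is a one-liner (demonstrated here on a sample line rather than a real checkout):

```shell
# Rewrite insecure-port references from 8080 to 8090.
# Against a real source tree this would be something like:
#   grep -rl ':8080' cluster/centos | xargs sed -i 's/:8080/:8090/g'
echo 'KUBE_MASTER="--master=${MASTER_ADDRESS}:8080"' | sed 's/:8080/:8090/g'
```

A blanket sed is worth double-checking afterwards, since unrelated 8080s (e.g. in comments or docs) get rewritten too — which is why the patches below show each change explicitly.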

So, in summary, I had to apply the following set of patches to get everything in place. If necessary, you can use them too, by applying them on top of the v1.7.2-beta.0 tag (possibly even a different tag) of the K8s source with the git apply command.
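Applying them could look roughly like this (the patch directory and file names are placeholders; `git apply --check` does a dry run before touching anything):

```shell
# From the root of a K8s source checkout (KUBE_SRC and PATCH_DIR are
# placeholders for this sketch):
KUBE_SRC=${KUBE_SRC:-$HOME/kubernetes}
PATCH_DIR=${PATCH_DIR:-$HOME/k8s-patches}
cd "$KUBE_SRC" && {
  git checkout v1.7.2-beta.0
  git apply --check "$PATCH_DIR"/*.patch  # dry run: verify every hunk applies
  git apply "$PATCH_DIR"/*.patch
}
```

If a hunk fails against a different tag, `--check` tells you up front, and the offending patch can be adjusted before anything is modified.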

  1. Skipping etcd, flannel and docker binaries:
    diff --git a/cluster/centos/build.sh b/cluster/centos/build.sh
    index 5d31437..df057e4 100755
    --- a/cluster/centos/build.sh
    +++ b/cluster/centos/build.sh
    @@ -42,19 +42,6 @@ function clean-up() {
     function download-releases() {
       rm -rf ${RELEASES_DIR}
       mkdir -p ${RELEASES_DIR}
    -
    -  echo "Download flannel release v${FLANNEL_VERSION} ..."
    -  curl -L ${FLANNEL_DOWNLOAD_URL} -o ${RELEASES_DIR}/flannel.tar.gz
    -
    -  echo "Download etcd release v${ETCD_VERSION} ..."
    -  curl -L ${ETCD_DOWNLOAD_URL} -o ${RELEASES_DIR}/etcd.tar.gz
    -
    -  echo "Download kubernetes release v${K8S_VERSION} ..."
    -  curl -L ${K8S_CLIENT_DOWNLOAD_URL} -o ${RELEASES_DIR}/kubernetes-client-linux-amd64.tar.gz
    -  curl -L ${K8S_SERVER_DOWNLOAD_URL} -o ${RELEASES_DIR}/kubernetes-server-linux-amd64.tar.gz
    -
    -  echo "Download docker release v${DOCKER_VERSION} ..."
    -  curl -L ${DOCKER_DOWNLOAD_URL} -o ${RELEASES_DIR}/docker.tar.gz
     }
     
     function unpack-releases() {
    @@ -80,19 +67,12 @@ function unpack-releases() {
       fi
     
       # k8s
    -  if [[ -f ${RELEASES_DIR}/kubernetes-client-linux-amd64.tar.gz ]] ; then
    -    tar xzf ${RELEASES_DIR}/kubernetes-client-linux-amd64.tar.gz -C ${RELEASES_DIR}
         cp ${RELEASES_DIR}/kubernetes/client/bin/kubectl ${BINARY_DIR}
    -  fi
    -
    -  if [[ -f ${RELEASES_DIR}/kubernetes-server-linux-amd64.tar.gz ]] ; then
    -    tar xzf ${RELEASES_DIR}/kubernetes-server-linux-amd64.tar.gz -C ${RELEASES_DIR}
         cp ${RELEASES_DIR}/kubernetes/server/bin/kube-apiserver \
            ${RELEASES_DIR}/kubernetes/server/bin/kube-controller-manager \
            ${RELEASES_DIR}/kubernetes/server/bin/kube-scheduler ${BINARY_DIR}/master/bin
         cp ${RELEASES_DIR}/kubernetes/server/bin/kubelet \
            ${RELEASES_DIR}/kubernetes/server/bin/kube-proxy ${BINARY_DIR}/node/bin
    -  fi
     
       # docker
       if [[ -f ${RELEASES_DIR}/docker.tar.gz ]]; then
    diff --git a/cluster/centos/config-build.sh b/cluster/centos/config-build.sh
    index 4887bc1..39a2b25 100755
    --- a/cluster/centos/config-build.sh
    +++ b/cluster/centos/config-build.sh
    @@ -23,13 +23,13 @@ RELEASES_DIR=${RELEASES_DIR:-/tmp/downloads}
     DOCKER_VERSION=${DOCKER_VERSION:-"1.12.1"}
     
     # Define flannel version to use.
    -FLANNEL_VERSION=${FLANNEL_VERSION:-"0.6.1"}
    +FLANNEL_VERSION=${FLANNEL_VERSION:-"0.8.0"}
     
     # Define etcd version to use.
    -ETCD_VERSION=${ETCD_VERSION:-"3.0.9"}
    +ETCD_VERSION=${ETCD_VERSION:-"3.2.2"}
     
     # Define k8s version to use.
    -K8S_VERSION=${K8S_VERSION:-"1.3.7"}
    +K8S_VERSION=${K8S_VERSION:-"1.7.0"}
     
     DOCKER_DOWNLOAD_URL=\
     "https://get.docker.com/builds/Linux/x86_64/docker-${DOCKER_VERSION}.tgz"
    diff --git a/cluster/kube-up.sh b/cluster/kube-up.sh
    index 7877fb9..9e793ce 100755
    --- a/cluster/kube-up.sh
    +++ b/cluster/kube-up.sh
    @@ -40,8 +40,6 @@ fi
     
     echo "... calling verify-prereqs" >&2
     verify-prereqs
    -echo "... calling verify-kube-binaries" >&2
    -verify-kube-binaries
     
     if [[ "${KUBE_STAGE_IMAGES:-}" == "true" ]]; then
       echo "... staging images" >&2
  2. Changing apiserver listen port to 8090, and binary (symlink) and service configuration file locations to Debian defaults:
    diff --git a/cluster/centos/master/scripts/apiserver.sh b/cluster/centos/master/scripts/apiserver.sh
    index 6b7b1c2..62d24fd 100755
    --- a/cluster/centos/master/scripts/apiserver.sh
    +++ b/cluster/centos/master/scripts/apiserver.sh
    @@ -43,8 +43,8 @@ KUBE_ETCD_KEYFILE="--etcd-keyfile=/srv/kubernetes/etcd/client-key.pem"
     # --insecure-bind-address=127.0.0.1: The IP address on which to serve the --insecure-port.
     KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
     
    -# --insecure-port=8080: The port on which to serve unsecured, unauthenticated access.
    -KUBE_API_PORT="--insecure-port=8080"
    +# --insecure-port=8090: The port on which to serve unsecured, unauthenticated access.
    +KUBE_API_PORT="--insecure-port=8090"
     
     # --kubelet-port=10250: Kubelet port
     NODE_PORT="--kubelet-port=10250"
    @@ -101,7 +101,7 @@ KUBE_APISERVER_OPTS="   \${KUBE_LOGTOSTDERR}         \\
                             \${KUBE_API_TLS_PRIVATE_KEY_FILE}"
     
     
    -cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
    +cat <<EOF >/lib/systemd/system/kube-apiserver.service
     [Unit]
     Description=Kubernetes API Server
     Documentation=https://github.com/kubernetes/kubernetes
    diff --git a/cluster/centos/master/scripts/controller-manager.sh b/cluster/centos/master/scripts/controller-manager.sh
    index 3025d06..5aa0f12 100755
    --- a/cluster/centos/master/scripts/controller-manager.sh
    +++ b/cluster/centos/master/scripts/controller-manager.sh
    @@ -20,7 +20,7 @@ MASTER_ADDRESS=${1:-"8.8.8.18"}
     cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
     KUBE_LOGTOSTDERR="--logtostderr=true"
     KUBE_LOG_LEVEL="--v=4"
    -KUBE_MASTER="--master=${MASTER_ADDRESS}:8080"
    +KUBE_MASTER="--master=${MASTER_ADDRESS}:8090"
     
     # --root-ca-file="": If set, this root certificate authority will be included in
     # service account's token secret. This must be a valid PEM-encoded CA bundle.
    @@ -41,7 +41,7 @@ KUBE_CONTROLLER_MANAGER_OPTS="  \${KUBE_LOGTOSTDERR} \\
                                     \${KUBE_CONTROLLER_MANAGER_SERVICE_ACCOUNT_PRIVATE_KEY_FILE}\\
                                     \${KUBE_LEADER_ELECT}"
     
    -cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
    +cat <<EOF >/lib/systemd/system/kube-controller-manager.service
     [Unit]
     Description=Kubernetes Controller Manager
     Documentation=https://github.com/kubernetes/kubernetes
    diff --git a/cluster/centos/master/scripts/etcd.sh b/cluster/centos/master/scripts/etcd.sh
    index aa73b57..34eff5c 100755
    --- a/cluster/centos/master/scripts/etcd.sh
    +++ b/cluster/centos/master/scripts/etcd.sh
    @@ -64,7 +64,7 @@ ETCD_PEER_CERT_FILE="/srv/kubernetes/etcd/peer-${ETCD_NAME}.pem"
     ETCD_PEER_KEY_FILE="/srv/kubernetes/etcd/peer-${ETCD_NAME}-key.pem"
     EOF
     
    -cat <<EOF >//usr/lib/systemd/system/etcd.service
    +cat <<EOF >/lib/systemd/system/etcd.service
     [Unit]
     Description=Etcd Server
     After=network.target
    diff --git a/cluster/centos/master/scripts/flannel.sh b/cluster/centos/master/scripts/flannel.sh
    index 092fcd8..21e2bbe 100644
    --- a/cluster/centos/master/scripts/flannel.sh
    +++ b/cluster/centos/master/scripts/flannel.sh
    @@ -30,7 +30,7 @@ FLANNEL_ETCD_CERTFILE="--etcd-certfile=${CERT_FILE}"
     FLANNEL_ETCD_KEYFILE="--etcd-keyfile=${KEY_FILE}"
     EOF
     
    -cat <<EOF >/usr/lib/systemd/system/flannel.service
    +cat <<EOF >/lib/systemd/system/flannel.service
     [Unit]
     Description=Flanneld overlay address etcd agent
     After=network.target
    diff --git a/cluster/centos/master/scripts/scheduler.sh b/cluster/centos/master/scripts/scheduler.sh
    index 1a68d71..3b444bf 100755
    --- a/cluster/centos/master/scripts/scheduler.sh
    +++ b/cluster/centos/master/scripts/scheduler.sh
    @@ -27,7 +27,7 @@ KUBE_LOGTOSTDERR="--logtostderr=true"
     # --v=0: log level for V logs
     KUBE_LOG_LEVEL="--v=4"
     
    -KUBE_MASTER="--master=${MASTER_ADDRESS}:8080"
    +KUBE_MASTER="--master=${MASTER_ADDRESS}:8090"
     
     # --leader-elect
     KUBE_LEADER_ELECT="--leader-elect"
    @@ -43,7 +43,7 @@ KUBE_SCHEDULER_OPTS="   \${KUBE_LOGTOSTDERR}     \\
                             \${KUBE_LEADER_ELECT}    \\
                             \${KUBE_SCHEDULER_ARGS}"
     
    -cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
    +cat <<EOF >/lib/systemd/system/kube-scheduler.service
     [Unit]
     Description=Kubernetes Scheduler
     Documentation=https://github.com/kubernetes/kubernetes
    diff --git a/cluster/centos/node/bin/mk-docker-opts.sh b/cluster/centos/node/bin/mk-docker-opts.sh
    index 041d977..177ee9f 100755
    --- a/cluster/centos/node/bin/mk-docker-opts.sh
    +++ b/cluster/centos/node/bin/mk-docker-opts.sh
    @@ -69,7 +69,6 @@ done
     
     if [[ $indiv_opts = false ]] && [[ $combined_opts = false ]]; then
       indiv_opts=true
    -  combined_opts=true
     fi
     
     if [[ -f "$flannel_env" ]]; then
    diff --git a/cluster/centos/node/scripts/docker.sh b/cluster/centos/node/scripts/docker.sh
    index 320446a..b0312fc 100755
    --- a/cluster/centos/node/scripts/docker.sh
    +++ b/cluster/centos/node/scripts/docker.sh
    @@ -20,10 +20,10 @@ DOCKER_OPTS=${1:-""}
     DOCKER_CONFIG=/opt/kubernetes/cfg/docker
     
     cat <<EOF >$DOCKER_CONFIG
    -DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock -s overlay --selinux-enabled=false ${DOCKER_OPTS}"
    +DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock -s aufs --selinux-enabled=false ${DOCKER_OPTS}"
     EOF
     
    -cat <<EOF >/usr/lib/systemd/system/docker.service
    +cat <<EOF >/lib/systemd/system/docker.service
     [Unit]
     Description=Docker Application Container Engine
     Documentation=http://docs.docker.com
    @@ -35,7 +35,7 @@ Type=notify
     EnvironmentFile=-/run/flannel/docker
     EnvironmentFile=-/opt/kubernetes/cfg/docker
     WorkingDirectory=/opt/kubernetes/bin
    -ExecStart=/opt/kubernetes/bin/dockerd \$DOCKER_OPT_BIP \$DOCKER_OPT_MTU \$DOCKER_OPTS
    +ExecStart=/usr/bin/dockerd \$DOCKER_OPT_BIP \$DOCKER_OPT_MTU \$DOCKER_OPTS
     LimitNOFILE=1048576
     LimitNPROC=1048576
     
    diff --git a/cluster/centos/node/scripts/flannel.sh b/cluster/centos/node/scripts/flannel.sh
    index 2830dae..a927bb2 100755
    --- a/cluster/centos/node/scripts/flannel.sh
    +++ b/cluster/centos/node/scripts/flannel.sh
    @@ -30,7 +30,7 @@ FLANNEL_ETCD_CERTFILE="--etcd-certfile=${CERT_FILE}"
     FLANNEL_ETCD_KEYFILE="--etcd-keyfile=${KEY_FILE}"
     EOF
     
    -cat <<EOF >/usr/lib/systemd/system/flannel.service
    +cat <<EOF >/lib/systemd/system/flannel.service
     [Unit]
     Description=Flanneld overlay address etcd agent
     After=network.target
    diff --git a/cluster/centos/node/scripts/kubelet.sh b/cluster/centos/node/scripts/kubelet.sh
    index 323a03e..4c93015 100755
    --- a/cluster/centos/node/scripts/kubelet.sh
    +++ b/cluster/centos/node/scripts/kubelet.sh
    @@ -39,7 +39,7 @@ NODE_HOSTNAME="--hostname-override=${NODE_ADDRESS}"
     
     # --api-servers=[]: List of Kubernetes API servers for publishing events,
     # and reading pods and services. (ip:port), comma separated.
    -KUBELET_API_SERVER="--api-servers=${MASTER_ADDRESS}:8080"
    +KUBELET_API_SERVER="--api-servers=${MASTER_ADDRESS}:8090"
     
     # --allow-privileged=false: If true, allow containers to request privileged mode. [default=false]
     KUBE_ALLOW_PRIV="--allow-privileged=false"
    @@ -63,7 +63,7 @@ KUBE_PROXY_OPTS="   \${KUBE_LOGTOSTDERR}     \\
                         \${KUBELET_DNS_DOMAIN}      \\
                         \${KUBELET_ARGS}"
     
    -cat <<EOF >/usr/lib/systemd/system/kubelet.service
    +cat <<EOF >/lib/systemd/system/kubelet.service
     [Unit]
     Description=Kubernetes Kubelet
     After=docker.service
    diff --git a/cluster/centos/node/scripts/proxy.sh b/cluster/centos/node/scripts/proxy.sh
    index 584987b..1f365fb 100755
    --- a/cluster/centos/node/scripts/proxy.sh
    +++ b/cluster/centos/node/scripts/proxy.sh
    @@ -29,7 +29,7 @@ KUBE_LOG_LEVEL="--v=4"
     NODE_HOSTNAME="--hostname-override=${NODE_ADDRESS}"
     
     # --master="": The address of the Kubernetes API server (overrides any value in kubeconfig)
    -KUBE_MASTER="--master=http://${MASTER_ADDRESS}:8080"
    +KUBE_MASTER="--master=http://${MASTER_ADDRESS}:8090"
     EOF
     
     KUBE_PROXY_OPTS="   \${KUBE_LOGTOSTDERR} \\
    @@ -37,7 +37,7 @@ KUBE_PROXY_OPTS="   \${KUBE_LOGTOSTDERR} \\
                         \${NODE_HOSTNAME}    \\
                         \${KUBE_MASTER}"
     
    -cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
    +cat <<EOF >/lib/systemd/system/kube-proxy.service
     [Unit]
     Description=Kubernetes Proxy
     After=network.target
    diff --git a/cluster/centos/util.sh b/cluster/centos/util.sh
    index 88302a3..dbb3ca5 100755
    --- a/cluster/centos/util.sh
    +++ b/cluster/centos/util.sh
    @@ -136,7 +136,7 @@ function kube-up() {
     
       # set CONTEXT and KUBE_SERVER values for create-kubeconfig() and get-password()
       export CONTEXT="centos"
    -  export KUBE_SERVER="http://${MASTER_ADVERTISE_ADDRESS}:8080"
    +  export KUBE_SERVER="http://${MASTER_ADVERTISE_ADDRESS}:8090"
       source "${KUBE_ROOT}/cluster/common.sh"
     
       # set kubernetes user and password
    @@ -199,7 +199,7 @@ function troubleshoot-node() {
     function tear-down-master() {
     echo "[INFO] tear-down-master on $1"
       for service_name in etcd kube-apiserver kube-controller-manager kube-scheduler ; do
    -      service_file="/usr/lib/systemd/system/${service_name}.service"
    +      service_file="/lib/systemd/system/${service_name}.service"
           kube-ssh "$1" " \
             if [[ -f $service_file ]]; then \
               sudo systemctl stop $service_name; \
    @@ -217,7 +217,7 @@ echo "[INFO] tear-down-master on $1"
     function tear-down-node() {
     echo "[INFO] tear-down-node on $1"
       for service_name in kube-proxy kubelet docker flannel ; do
    -      service_file="/usr/lib/systemd/system/${service_name}.service"
    +      service_file="/lib/systemd/system/${service_name}.service"
           kube-ssh "$1" " \
             if [[ -f $service_file ]]; then \
               sudo systemctl stop $service_name; \

With these changes in place, installing K8s on my machine was straightforward; it boiled down to running this command from within the cluster directory of the patched source:

MASTER=janaka@10.0.0.1 \
NODES=janaka@10.0.0.1 \
DOCKER_OPTS="--insecure-registry=hub.adroitlogic.com:5000" \
KUBERNETES_PROVIDER=centos \
CERT_GROUP=janaka \
./kube-up.sh
  1. Because the certificate generation process has problems with localhost and 127.0.0.1, I had to use a static IP (10.0.0.1) assigned to one of my network interfaces.
  2. I have one master and one worker node, both of which are my own machine itself.
  3. janaka is the username on my local machine.
  4. We have a local Docker hub for holding our IPS images, at hub.adroitlogic.com:5000 (resolved via internal hostname mappings), which we need to inject into the Docker startup script (in order to be able to use our own images within the future IPS set-up).
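Once kube-up.sh finishes, a quick sanity check against the non-default API server port confirms everything is wired correctly (the server URL reflects my 10.0.0.1 / 8090 setup):

```shell
# Point kubectl at the insecure API endpoint configured above:
API_SERVER=${API_SERVER:-http://10.0.0.1:8090}
kubectl -s "$API_SERVER" get nodes
kubectl -s "$API_SERVER" get componentstatuses  # etcd, scheduler, controller-manager
```

If the nodes show up Ready and the component statuses are Healthy, the port change has propagated everywhere it needed to.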

Takeaway?

  • learned a bit about how K8s components fit together,
  • added quite a bit of shell scripting tips and tricks to my book of knowledge,
  • had my 101 on Linux services (systemd),
  • skipped one lunch,
  • fought several hours of frustration,
  • and finally felt a truckload of satisfaction, which flushed out the frustration without a trace.
