+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release
+ [[ k8s-1.11.0-release =~ openshift-.* ]]
+ [[ k8s-1.11.0-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.11.0
+ KUBEVIRT_PROVIDER=k8s-1.11.0
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/08/04 23:12:51 Waiting for host: 192.168.66.101:22
2018/08/04 23:12:54 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/04 23:13:02 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/04 23:13:07 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: connection refused.
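The repeated "Problem with dial ... Sleeping 5s" lines above are a dial-and-retry loop: probe the node's SSH port, sleep, try again until it answers. A minimal shell sketch of that pattern, assuming a hypothetical `retry` helper (the helper name, its parameters, and the `nc` probe are illustrative, not the provider's actual code):

```shell
#!/bin/sh
# retry: run a command until it succeeds or the attempt budget is spent.
# Usage: retry <attempts> <delay-seconds> <command...>   (illustrative helper)
retry() {
    attempts=$1
    delay=$2
    shift 2
    i=1
    while ! "$@"; do
        if [ "$i" -ge "$attempts" ]; then
            echo "giving up after $attempts attempts" >&2
            return 1
        fi
        echo "Problem with dial. Sleeping ${delay}s"
        sleep "$delay"
        i=$((i + 1))
    done
}

# e.g. poll the SSH port with netcat (assumes nc is installed):
# retry 60 5 nc -z 192.168.66.101 22
```

With a 5-second delay and enough attempts, this reproduces the roughly 20-second wait seen in the log before the host becomes reachable.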
Sleeping 5s
2018/08/04 23:13:12 Connected to tcp://192.168.66.101:22
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 1 -eq 0 ]]
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0804 23:13:13.043421    1266 feature_gate.go:230] feature gates: &{map[]}
I0804 23:13:13.118766    1266 kernel_validator.go:81] Validating kernel version
I0804 23:13:13.118895    1266 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 63.007765 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation
[bootstraptoken] using token: abcdef.1234567890123456
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:a8e9804ee381b4f825d2bc6b03dc0432e041efa7eff57d7af5b544fced2acc02

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node/node01 untainted
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f /tmp/local-volume.yaml
storageclass.storage.k8s.io/local created
configmap/local-storage-config created
clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-pv-binding created
clusterrole.rbac.authorization.k8s.io/local-storage-provisioner-node-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-node-binding created
role.rbac.authorization.k8s.io/local-storage-provisioner-jobs-role created
rolebinding.rbac.authorization.k8s.io/local-storage-provisioner-jobs-rolebinding created
serviceaccount/local-storage-admin created
daemonset.extensions/local-volume-provisioner created
2018/08/04 23:14:34 Waiting for host: 192.168.66.102:22
2018/08/04 23:14:37 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/08/04 23:14:45 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/08/04 23:14:50 Connected to tcp://192.168.66.102:22
++ wc -l
++ grep active
++ systemctl status docker
+ [[ 0 -eq 0 ]]
+ sleep 2
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 1 -eq 0 ]]
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0804 23:14:54.974253    1264 kernel_validator.go:81] Validating kernel version
I0804 23:14:54.974706    1264 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 38739968 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
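After the join, the script gates on node readiness: it disables `set -e`, lists nodes, and greps the output for `NotReady`, succeeding only when nothing matches. A minimal sketch of that gate, assuming a hypothetical `all_nodes_ready` helper and sample input (the real script pipes `kubectl get nodes --no-headers` straight into `grep`):

```shell
#!/bin/sh
# Illustrative readiness gate: succeed only when no node reports NotReady.
all_nodes_ready() {
    # $1: output of `kubectl get nodes --no-headers`
    notready=$(printf '%s\n' "$1" | grep NotReady)
    [ -z "$notready" ]
}

# Sample input mirroring the log's node listing (illustrative).
nodes="node01 Ready master 1m v1.11.0
node02 Ready 45s v1.11.0"

if all_nodes_ready "$nodes"; then
    echo 'Nodes are ready:'
fi
```

Because `grep` exits non-zero when nothing matches, the surrounding `set +e` / `set -e` pair in the log is what keeps an all-Ready cluster from aborting the script.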
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 1m v1.11.0
node02 Ready 45s v1.11.0
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    1m        v1.11.0
node02    Ready               46s       v1.11.0
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33203/kubevirt/virt-controller:devel
Untagged: localhost:33203/kubevirt/virt-controller@sha256:d664249f3c5ba55d7afa5c43c72a84a289aa2da3768be970039a72ac58814e28
Deleted: sha256:f50c0336f6546abdb6374af4b1195bc059123cfe45c517603f7cdcfbf2d35fc9
Deleted: sha256:0eafc2360aae0af853c9a324bd7b027d43d2e7e622e939f04d24d54a5e7c53e4
Deleted: sha256:b9227697bdaabf39a9c8ce4000c85cc6c893d25b14c5c93bed9d93ef2466683f
Deleted: sha256:0efaa0616620c0a2ec8e6285953baf55c46dc64fa9329caf969ba46a063b46f6
Untagged: localhost:33203/kubevirt/virt-launcher:devel
Untagged: localhost:33203/kubevirt/virt-launcher@sha256:0099775a1b0a615c860b626dcbee2c459905ace715789f4cc9d753391cae1e56
Deleted: sha256:b501a2af91ce6d89c6b02709836b2affa648cb0c959ea586c8793207f329cb5c
Deleted: sha256:320acc464431a96ce7d55eb2e0f0388b024de2ee8bf35ce91203fd78f1b2107e
Deleted: sha256:262b2014e14aa5d9ba31ec5fb4d9bdfb408f78347728630eb0e0336b8ee1c36e
Deleted: sha256:7b5c19607fdb1d23775bb96b8557adfdc246630b6e34a4fb00fa4ba017c0c785
Deleted: sha256:b2758fbadc68691cd2584d1dd8af03894d5e63692ab28af8e6d613dbc0130a31
Deleted: sha256:36313013fc5ac556d425f1d47a528e9175e6d4efe915f5b4f0104cff423028ad
Deleted: sha256:94e4ee259ff26994579cd30dc8a583fef605a636fbea47698d641d34dbb78490
Deleted: sha256:7a2950d7d1d31eb3ddb5dff7e94394d2abb95fe7179762e66818553afa37182c
Deleted: sha256:bc984c1ebb19ca7b3c74e7f7b87c03029aef707392a072535b341153d42ef2d2
Deleted: sha256:8eccefce58689ea4494cc7fa24372be99082c1d5a22bdfce476e5ae5a2376814
Deleted: sha256:dd346759d2afbec91bfc73c5f9449d16c4c13cf4ccb769fc5cff981355d5b02a
Deleted: sha256:d4b2c7a052c380b8ce458df1fb448269be0a0bc90744bf8efbaaf0e1ae537927
Untagged: localhost:33203/kubevirt/virt-handler:devel
Untagged: localhost:33203/kubevirt/virt-handler@sha256:261d788a74ba2cef3b1202677d722b5517f36467f4c28241f220ee5f087e9f78
Deleted: sha256:8ed0e6e0d945120ae7d220e5f2362f4d02506ffd94097fcc4cd1c379260599ee
Deleted: sha256:a5a36d644e20d399b3296e0f9405d098f6c0c010e4884d0a26ccf31472391f79
Deleted: sha256:2c2922581f0a1118402d140acae22af45f00bee717a1ceb35fab047fb8a40e69
Deleted: sha256:bf9ff90805b3a7e120455d33e47ee822af1be768d7f2cf459499595bd2b332f9
Untagged: localhost:33203/kubevirt/virt-api:devel
Untagged: localhost:33203/kubevirt/virt-api@sha256:1af45f9cad24008ec2e9a45ccbb9ea83346bbe2780da5b547124fa429f11e4e1
Deleted: sha256:7fe7359c1ab9f2eb4057cbaea74d50929dbf019ca185cccc48ae8d04872a3c41
Deleted: sha256:cc7bd87cbf0af108bf4f8054ea69a7efbc1da7a72d2e16572d303bab8b043ae1
Deleted: sha256:cc4cbaa68e8917813339e1bc0915565499719e58ff33935f870a903becaaff56
Deleted: sha256:53473abe6f45ecf0d6d781beca4c54565fe9c52fd1f6bf7fa598c08f9438b9d5
Untagged: localhost:33203/kubevirt/subresource-access-test:devel
Untagged: localhost:33203/kubevirt/subresource-access-test@sha256:1122e2d58c9622e3ffa2e40fe107a3cf114d63034c738de96324cc2c221ced7b
Deleted: sha256:9395a7e5ff0bfb8452ce74ee637e50ad3a24486f4ce77d5f7f7b4ef94e8456fd
Deleted: sha256:59f3ae7494abf16b6bf569b8528ecd5d23677e4cf85d9e47853d6a358c64eeda
Deleted: sha256:26a03b6a0114d2ff29f1aa017e43a86fec5c9c2b3022e5112be7235a865cfe4c
Deleted: sha256:ec8e71a69204bdfadf6242f795b327df84921410c16e13756a6e73c0886cfa4f
Untagged: localhost:33166/kubevirt/example-hook-sidecar:devel
Untagged: localhost:33166/kubevirt/example-hook-sidecar@sha256:1f7e694b4a9a4f4bcce0a83afe5f63c969aa88c258317452ac73c549595f5f86
Deleted: sha256:669c46f5cfe4fd0916c863f91a702efb476f9d3a0d39a039b92a0d7e8b5b0759
Deleted: sha256:ef7b3a4ddc2c6fb4538bfad0d0f48477edcc280ee909c02c4d72da4822a5c39d
Deleted: sha256:f2ce2928f8846aa817d1293ef8ce4411a54ff42a6444934c6e10978ecbfe8145
Deleted: sha256:59ae90eb4f54d0e6724c1d894dfcb9f15ce9ea682057a2d7c96883840f78a401
sha256:dcf2b21fa2ed11dcf9dbba21b1cca0ee3fad521a0e9aee61c06d0b0b66a4b200
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:dcf2b21fa2ed11dcf9dbba21b1cca0ee3fad521a0e9aee61c06d0b0b66a4b200
go version go1.10 linux/amd64
go version go1.10 linux/amd64
find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory