+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ [[ k8s-1.10.3-dev =~ openshift-.* ]]
+ [[ k8s-1.10.3-dev =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/15 07:47:32 Waiting for host: 192.168.66.101:22
2018/07/15 07:47:36 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/15 07:47:44 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/15 07:47:49 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: connection refused. Sleeping 5s
2018/07/15 07:47:54 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
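One detail worth flagging in the trace above: the cleanup trap also lists SIGSTOP, which shells cannot catch (like SIGKILL, it is uninterceptable), so that entry is silently ignored or rejected depending on the shell. A minimal runnable sketch of an equivalent cleanup trap, with `make cluster-down` stubbed out as an echo so the snippet is self-contained (the stub is an assumption, not part of the original script):

```shell
#!/usr/bin/env bash
# Sketch of the cleanup handler registered in the trace above.
# SIGSTOP (and SIGKILL) can never be trapped; EXIT plus the
# catchable INT/TERM signals are sufficient for teardown.

cleanup() {
    # Stand-in for `make cluster-down` so the sketch runs anywhere.
    echo "tearing down cluster"
}
trap cleanup EXIT INT TERM

echo "cluster work happens here"
```

Because the handler is attached to EXIT, teardown runs on normal completion as well as on Ctrl-C or `kill`, which matches the log's behavior of invoking `make cluster-down` at the end of the job.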
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 25.507837 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:264a0eca06775a75b5804097c1fd8bb85acb7c00ebe53382bd64e082eb955402

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/15 07:48:34 Waiting for host: 192.168.66.102:22
2018/07/15 07:48:37 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/15 07:48:49 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 36s v1.10.3
node02 NotReady 10s v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ grep NotReady
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n 'node02 NotReady 10s v1.10.3' ']'
+ echo 'Waiting for all nodes to become ready ...'
Waiting for all nodes to become ready ...
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 37s v1.10.3
node02 Ready 11s v1.10.3
+ kubectl_rc=0
+ sleep 10
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    47s       v1.10.3
node02    Ready     21s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
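The `set +e` / `grep NotReady` sequence in the trace above is a readiness poll: the script keeps listing nodes until none report NotReady, sleeping between attempts, then restores `set -e`. A self-contained sketch of that loop, with the real `cluster/kubectl.sh get nodes --no-headers` call replaced by a simulated `get_nodes` function (a stand-in assumed here so the sketch runs without a cluster):

```shell
#!/usr/bin/env bash
# Sketch of the node-readiness poll loop seen in the trace.
# `get_nodes` fakes kubectl output: node02 becomes Ready on the
# second poll, mirroring the transition visible in the log.
attempt=0
get_nodes() {
    if [ "$attempt" -lt 2 ]; then
        printf 'node01 Ready master 36s v1.10.3\nnode02 NotReady 10s v1.10.3\n'
    else
        printf 'node01 Ready master 47s v1.10.3\nnode02 Ready 21s v1.10.3\n'
    fi
}

while true; do
    attempt=$((attempt + 1))
    # grep exits non-zero when nothing matches; `|| true` keeps the
    # loop usable even under `set -e`, as the original toggles -e/+e.
    not_ready=$(get_nodes | grep NotReady || true)
    [ -z "$not_ready" ] && break
    echo 'Waiting for all nodes to become ready ...'
    sleep 1   # the real script sleeps 10s between polls
done
echo 'Nodes are ready:'
```

Polling on the text output of `kubectl get nodes` is simple but brittle; the same check can also be expressed with `kubectl wait --for=condition=Ready node --all`, which newer kubectl versions provide.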
Untagged: localhost:33411/kubevirt/virt-controller:devel
Untagged: localhost:33411/kubevirt/virt-controller@sha256:cda8ae97dd1ef28675a64ac8024882682d401d53ae91c326f05709937772be90
Deleted: sha256:130b9102e829878dd4ee296249b9e557b39d4ae34fa90a6085cb889573038670
Deleted: sha256:38b10dc40a1dc3dad4321dc6f3fb7a62055f5734269c108718ee93fc0ba9c690
Deleted: sha256:e4ff49b32d39663c3e87e837204af8b388cc33f1883053524e88b283b280282b
Deleted: sha256:5a5d48cfb574ef1dd1275b9337801cab24a72f6f212916ffd69a611c98ed29b7
Untagged: localhost:33411/kubevirt/virt-launcher:devel
Untagged: localhost:33411/kubevirt/virt-launcher@sha256:652951681732cb9052c8c2e9387d5e88c0c8ec0b91e197b124ee6fc4c1a54697
Deleted: sha256:f718dcefd7674fd40bd78d0279ff91bfbfa050c5f43711850bac470ec8c5bcfc
Deleted: sha256:fa169cab630c4b1acee6e006e0586cab1c3988f4dc281ec42edb90acac1080ab
Deleted: sha256:e94411192439658d8f237ee0f7f1932e107c54fc42598e95aa3a1b07801d2e6a
Deleted: sha256:dd9ab29ee8199a0affdfe4fc2e99d6334db06d2aadd0a055397bb80dae4f187c
Deleted: sha256:7399e5986cbcc4d8ba8be0c06dfe6b4fd424e834eae46e089fd0a62876feb830
Deleted: sha256:886dea5921e170cd26d4073609ec470be889b65311d21ca9ab77b24153d44c91
Deleted: sha256:4532e6417b8055e192588b1053eb89a825e055c6209a0ff5684027859f2fd013
Deleted: sha256:47022469ac32e2ff6fc8672c229935c694dc4e03455d690e16dd01adb4369c61
Deleted: sha256:42e1bce5d8328b192415f7724b84b027c7a62364eb6e7967745d4434783c60f1
Deleted: sha256:5226631fc876d71ef27c32d276a3130f3abc69971f2a390f2fa9affd4ec0820e
Deleted: sha256:169a8d77ff91a2bd01e6432103b6d57f01d89336c12aeedc405a0298ab3a5051
Deleted: sha256:7f30014da2f1f03949c5a2f6dcd8939204729c80df29a2da61ad48124f3d390b
Untagged: localhost:33411/kubevirt/virt-handler:devel
Untagged: localhost:33411/kubevirt/virt-handler@sha256:b3f417e895fad0fcd4e5a2bd9abef807eeba912aeba68e2c6961ac1318c582b6
Deleted: sha256:d8c9e0cd51f4717214ccc5460514c873c9aa189a5321769b67b06f4c12c201f8
Deleted: sha256:cbcd45f0f5f059f845dc8cae365b90276b235e278fd9809a3560c23388e40dc9
Deleted: sha256:ad77d3fa41eacd91a4f7179fd84dbef546011088eb58bfb63c24c4fbb012bf21
Deleted: sha256:25d7565752d4e8aa852750a4dacf816dff05f4af71842987b45fc4c855d026c3
Untagged: localhost:33411/kubevirt/virt-api:devel
Untagged: localhost:33411/kubevirt/virt-api@sha256:4158a52d5ed223babf2a013ba73b15c48a0c4513a87b830b9339c97a739d0499
Deleted: sha256:71c61fd5419378e6d7b250162e9857b127b3ccb95ecb3aa0f00ce637e7ce9aee
Deleted: sha256:7f7378ab0924ef12b742e233c559805bf87ffdec8265c22de16b20e3929e587c
Deleted: sha256:1ec53f99d4974c4ae163222e65a4407eb3f1d127d31d9073310cbde33dd1d764
Deleted: sha256:f5e92fdee1312ed711adcd3002b458c9e54e262e4902c403774a6efdd5fe4256
Untagged: localhost:33411/kubevirt/subresource-access-test:devel
Untagged: localhost:33411/kubevirt/subresource-access-test@sha256:848055c955daaff2e2ea9d0abbbeaf4a9088a51ae06eb9941381b3ee5d928a27
Deleted: sha256:cd14d94cbf9677a25f793c195f4a114eefda389b35f43af253df34b343641d56
Deleted: sha256:95211a7c207d4f0e7e5e135d9ecd67141846cf73a58a24960e470aaa13647492
Deleted: sha256:35857583a6fda0ab5e57b87779ff201915702c0d891dc67d10be96b5b9ec16bc
Deleted: sha256:268eed4821020d7d1aefc6c584bb944c9c28d0fa872e546db49abddc1a0e6e8d
hack/dockerized: line 17:  5718 Terminated              docker build . -q -t ${BUILDER}
make: *** [cluster-build] Error 1
+ make cluster-down
./cluster/down.sh