+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ [[ k8s-1.10.3-release =~ openshift-.* ]]
+ [[ k8s-1.10.3-release =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/06 17:59:31 Waiting for host: 192.168.66.101:22
2018/07/06 17:59:34 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/06 17:59:46 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 25.504525 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:e80f9cb10c97fad6f8878b6354b4b780b30f33d3dee720c35852e4a03df5c02b

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/06 18:00:27 Waiting for host: 192.168.66.102:22
2018/07/06 18:00:30 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/06 18:00:42 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
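The timestamped "Waiting for host" / "Problem with dial" messages above come from the provider tooling polling each node's SSH port until the VM is reachable. A minimal Go sketch of such a dial-and-retry loop follows; it is an assumed reconstruction for illustration, not the actual provisioning source:

```go
package main

import (
	"log"
	"net"
	"time"
)

// waitForHost blocks until addr accepts a TCP connection, retrying every 5s,
// mirroring the timestamped messages seen in the log above.
func waitForHost(addr string) {
	log.Printf("Waiting for host: %s", addr)
	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			log.Printf("Problem with dial: %v. Sleeping 5s", err)
			time.Sleep(5 * time.Second)
			continue
		}
		conn.Close()
		log.Printf("Connected to tcp://%s", addr)
		return
	}
}

func main() {
	waitForHost("192.168.66.102:22")
}
```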
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    40s       v1.10.3
node02    Ready     <none>    17s       v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    41s       v1.10.3
node02    Ready     <none>    18s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33895/kubevirt/virt-controller:devel
Untagged: localhost:33895/kubevirt/virt-controller@sha256:2b8225f79d6b736e207553c2633fc8ff1ba595cbb6d19dc5cfff4439f563cf08
Deleted: sha256:a9119ae69a11d79356eba2e074673658186e6af7363a7eb132a68b8a184e9ea9
Deleted: sha256:3704d612a1be53001d881e74f4a6cdec05b445412e4889f297d96af1bedb065c
Deleted: sha256:ad9447249a5e44238e52e90f09a514ce4f8573d6d8f2fc8ad6ebbdce692a4272
Deleted: sha256:253f868919b2a72e92cefdd3f8c194afc4338f6e9a257899bcb2902d5485f16c
Untagged: localhost:33895/kubevirt/virt-launcher:devel
Untagged: localhost:33895/kubevirt/virt-launcher@sha256:d2cce45469d276cd85a91f3281d3b3a27fd3268283d709777cf8fca42c841431
Deleted: sha256:8a3883a701f85598dec3db0bdb9835abdb1a7dd33f7a09c4d04039588dd7a2b7
Deleted: sha256:cc5fec423c27c768157f0a227f9883e6d25b88b220e11bc42a023f0395c69b88
Deleted: sha256:57e5aa58e4723950b4ab9ee16fa2042684869f7526d29a94a6cf3a4c672ae400
Deleted: sha256:786dbc911ce76aeb980449dc16b28278c451d3bb3acf4cc2197d54bc8f0c56d1
Deleted: sha256:97ca16a129208d2a0292208ec5510c5fe0e1301c371213754b92dc0af29f3a4d
Deleted: sha256:ccf47c2bb6373cf26d901816b438428b6eb3dbe88df5d254e0dc7c651372eca2
Deleted: sha256:bc775c8b45e9ca9d7828e97b07d3427471bb293643d9747c81229eda491f652b
Deleted: sha256:f44eed4bb29974159fbe95ebfb6ff6026641a7376106ec81e2e0dc461ca234e3
Deleted: sha256:758a74917b664d6d239b923b38a38499b7621b98935b436e30804bab5c912d93
Deleted: sha256:9d8905fd3bf11eba0cfde1bb83de808aa482cbd5b5ae6b91ddb68b65d6500e3f
Untagged: localhost:33895/kubevirt/virt-handler:devel
Untagged: localhost:33895/kubevirt/virt-handler@sha256:adc28c28beff3fa15ffe824acfea0ed5c256d1ac98d3b076e52e2834207219e2
Deleted: sha256:e0e3b9d8f558f7490e01afa84d649ba022331a868891dbb3730c03663d3bb723
Deleted: sha256:62e9bc0bb31e792cadb10b1904278b12e79b988c9a7b5fa538e9c910facfbd0e
Deleted: sha256:860be405184862be3fb0284e05fcc570de936342a8684241e720b516c9a88f91
Deleted: sha256:a0da78553c19230037c771eb7d72e5c680456048c0be26d78291292485032b80
Untagged: localhost:33895/kubevirt/virt-api:devel
Untagged: localhost:33895/kubevirt/virt-api@sha256:d902e1ea60804776e306cf82aa356a2658c018a43f4c1bd69e610819e8767027
Deleted: sha256:ecc507612b48eb8af4a230ecf0986a47df32b21368a2bf208eefddd8fffdc3fe
Deleted: sha256:47517b71e8f0d32a8c36d592d0778adef31989e7525fe90e07c984e6549add9c
Deleted: sha256:dad0a22930b85319e09824e03c8b634b331ad4cd84e00bc458a9d2f3206adc50
Deleted: sha256:4ac0b39879ba39e26a9e8dc57c7241b835a26f9abaccbf8b319ded7edcd9578e
Untagged: localhost:33895/kubevirt/subresource-access-test:devel
Untagged: localhost:33895/kubevirt/subresource-access-test@sha256:628802ca19f187c2439b9c8ffe02cddf4be020ea440ff55959a111b036e7b52f
Deleted: sha256:c5b7a90330c98ac26f3d87e7792b26623981f9c2e20685377be84a5846460ffa
Deleted: sha256:3fd657dd770ab01842de92013dc04f92a21cd4a9a9e9e6b790b334c2c5166796
Deleted: sha256:c282efdf53aae300f01d0808c08fe1677ae06b6047f96ff4723dfae743c31243
Deleted: sha256:6b51f9645c06ed0f9968bcef4a93bf1788ca0eaf9fd6f2948be657302c5973ad
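The `set +e` block near the top of this phase is the script's node readiness gate: it lists nodes without headers, greps for NotReady, and only proceeds when the grep comes back empty. The same check expressed against the Kubernetes API, as a sketch using a 2018-era (pre-context) client-go; the kubeconfig path is an assumption, since the CI actually goes through cluster/kubectl.sh:

```go
package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for illustration.
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Count nodes whose NodeReady condition is not True,
	// the equivalent of `kubectl get nodes --no-headers | grep NotReady`.
	notReady := 0
	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status != corev1.ConditionTrue {
				fmt.Printf("node %s is NotReady\n", node.Name)
				notReady++
			}
		}
	}
	if notReady == 0 {
		fmt.Println("Nodes are ready:")
	}
}
```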
sha256:6eacca7072242103a52e09bde728cf8c3c4134c37779287f25cbf1a1b93180b2
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:6eacca7072242103a52e09bde728cf8c3c4134c37779287f25cbf1a1b93180b2
go version go1.10 linux/amd64
go version go1.10 linux/amd64
# kubevirt.io/kubevirt/tests_test
tests/vmi_networking_test.go:406:4: undefined: waitUntilVMIReady
tests/vmi_networking_test.go:419:4: undefined: waitUntilVMIReady
tests/vmi_networking_test.go:432:4: undefined: waitUntilVMIReady
make[1]: *** [build] Error 2
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt'
make: *** [cluster-build] Error 2
+ make cluster-down
./cluster/down.sh
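The run aborts at `make cluster-sync`, not in the cluster itself: tests/vmi_networking_test.go calls a waitUntilVMIReady helper that is not defined in the tests_test package, most likely because the helper was renamed, moved, or dropped in a recent commit. For orientation only, here is a minimal sketch of what such a helper typically does; the names, signature, and phase handling are assumptions, not KubeVirt's actual test API:

```go
package tests

import (
	"fmt"
	"time"
)

// vmiPhaseGetter is a hypothetical stand-in for whatever the real tests use
// to read the current phase of a VirtualMachineInstance.
type vmiPhaseGetter func() (string, error)

// waitUntilVMIReady polls getPhase until it reports "Running" or the timeout
// elapses. A real helper would also verify the guest is reachable (e.g. via
// a console expecter), which this sketch omits.
func waitUntilVMIReady(getPhase vmiPhaseGetter, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		phase, err := getPhase()
		if err != nil {
			return fmt.Errorf("failed to query VMI phase: %v", err)
		}
		if phase == "Running" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out after %v waiting for VMI to become ready", timeout)
}
```

Because the compile error fires before any tests run, the EXIT trap set at the top of the log is what triggers the final `make cluster-down`.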