+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ [[ k8s-1.10.3-release =~ openshift-.* ]]
+ [[ k8s-1.10.3-release =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/23 10:14:03 Waiting for host: 192.168.66.101:22
2018/07/23 10:14:06 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/23 10:14:18 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 27.503890 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:634b5616188ee57ed4abaeef44ea2e83790487e5e262f87036ceae1f7d4f21d1

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/23 10:14:58 Waiting for host: 192.168.66.102:22
2018/07/23 10:15:01 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/23 10:15:13 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
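The --discovery-token-ca-cert-hash value printed by 'kubeadm init' above can be recomputed on the master at any time; this is the standard recipe from the kubeadm documentation, assuming the CA certificate sits at the default path /etc/kubernetes/pki/ca.crt:

  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'

The node02 join above skips this pinning entirely via --discovery-token-unsafe-skip-ca-verification=true, which is acceptable for a throwaway test cluster but gives up the CA validation the hash would provide.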
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready      master    32s   v1.10.3
node02    NotReady   <none>    8s    v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ grep NotReady
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n 'node02    NotReady   <none>    8s    v1.10.3' ']'
+ echo 'Waiting for all nodes to become ready ...'
Waiting for all nodes to become ready ...
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready      master    32s   v1.10.3
node02    NotReady   <none>    8s    v1.10.3
+ kubectl_rc=0
+ sleep 10
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ grep NotReady
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    43s       v1.10.3
node02    Ready     <none>    19s       v1.10.3
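The trace above is the node-readiness wait loop from the provisioning script. Reconstructed as plain shell it is roughly the following; this is a sketch inferred from the trace, not the verbatim cluster script:

  set +e
  kubectl get nodes --no-headers
  kubectl_rc=$?
  # Poll until kubectl succeeds and no node reports NotReady.
  while [ $kubectl_rc -ne 0 ] || [ -n "$(kubectl get nodes --no-headers | grep NotReady)" ]; do
      echo 'Waiting for all nodes to become ready ...'
      kubectl get nodes --no-headers
      kubectl_rc=$?
      sleep 10
  done
  set -e
  echo 'Nodes are ready:'
  kubectl get nodes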
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33392/kubevirt/virt-controller:devel
Untagged: localhost:33392/kubevirt/virt-controller@sha256:aae8873649e5cea2e13656d679896c698642d875c4507bf107b474ae2aeca03e
Deleted: sha256:12176bbda8062f6665707307982cf6167254ac559082708503cc6710706de96d
Deleted: sha256:bea859e27123afcf9e0bb890fac4ab38f501c22365320808eb5c7c90fafe0738
Deleted: sha256:263e6c1ea11d3c558d46916af389709c530948c938b597629399daa7c98a54ec
Deleted: sha256:ac98f11f7a9ad76cce5b2c58e7b4b2b31d81f84dffcd15df141718924b73ead8
Untagged: localhost:33392/kubevirt/virt-launcher:devel
Untagged: localhost:33392/kubevirt/virt-launcher@sha256:8f70adaad385d475030d6b4cb514e171120fb7a225962e981d3714d7b0093057
Deleted: sha256:796879802214727edc936d858244111ef73f2d78f85030b177fbbe4ca29e9172
Deleted: sha256:ced0b7d3d8367ad2ddf10a7d54eaafc990a935cdf417c94058e0984a44546b76
Deleted: sha256:4f90ffab86b62b4986939a35945ad72820bf2430df5a10284a54a15816fcf9b4
Deleted: sha256:39fa35d459f0f11334ae72ae94021bb79e567e75792d89ba5803bab982a962e7
Deleted: sha256:1b3a75cfb6658cd74bd8e2353ae7d60c40be889e1c5f1be1a909d7926132ceca
Deleted: sha256:0bdd83a90b78e1bd862c9f72b375155d3122a5d204315d36312d3d8a3fc8b7be
Deleted: sha256:2e0e8f6b4efa36723a4e22baa199455a7a1a9f342c51bf528b6161aee5607be9
Deleted: sha256:6cd0b1d6e7440f419119ad0802d02cc5ec624bd869a827293143f040388d280a
Deleted: sha256:2d6abe5898001ab4523b1e6329627efe3390b390b8773b5c91c161f957e7924c
Deleted: sha256:84831603a9e362f91c74f3ca61a0470b6de6079b2c9cdb7fb9698bd7a9ffdc33
Deleted: sha256:860213c25b9988810355767ff0bc7d8d0aca0198762c304b6c1e4293e7b06855
Deleted: sha256:4726fb51be246294e76ab6ba453e150f7ca024388f12f755c6f9cd5e1e596092
Untagged: localhost:33392/kubevirt/virt-handler:devel
Untagged: localhost:33392/kubevirt/virt-handler@sha256:91a04bdf6189ed75e302cb378f244afc893fa1994196e36282e35b59094dc782
Deleted: sha256:df378142925a4930fe68b32660240d21426a453d9df2c046d56c6697a393fac8
Deleted: sha256:268b52ffcef9939b7d1bc8851405caea0f49d9a2f8986755b53417cd3a16d8f2
Deleted: sha256:03a6f19f047ebcae326a1bf5dff4c9ae299bc3a2f64f5b50a2fbaad1e18502ce
Deleted: sha256:5af78cfe921fbbe074e47a7a98312a92bf27fe5e4e7596b43c5feaf81e75b1f6
Untagged: localhost:33392/kubevirt/virt-api:devel
Untagged: localhost:33392/kubevirt/virt-api@sha256:2674ba094d5f3827954aa46dfdb99639f4d3fb54727fff0920171c22259a4b86
Deleted: sha256:d9b4e086d5eefb6ac0b8bec0f1b1c1389db33cb2d6b35edac44dd472edfd6709
Deleted: sha256:405c2b5f813573991c6764f30fa38280463ff423372c505420c882d1d55aa01d
Deleted: sha256:0e342313afaa50cf6e844648acb1f15573da5b2dcd1c01437f5adda88c31f640
Deleted: sha256:ac6463bc47aaad74ffc6dd84849a1776265e47d9ada9f3a5e62f7ea8e9c0041c
Untagged: localhost:33392/kubevirt/subresource-access-test:devel
Untagged: localhost:33392/kubevirt/subresource-access-test@sha256:333d9fdecbe25c955e59c6fb92d2b4e5fb355f1064338964f9d76c5d5fff7318
Deleted: sha256:883ecb16372e7641b967e1b639ec2841a957291899afe108432965d98be01b87
Deleted: sha256:6403cf039ff05091c9fc5244e1032f1b723006ba53b0734ebcf3d921de8c6b79
Deleted: sha256:20e93ecf7289d7977f5cf64229be581f992527526a7f8d7950bdb990ffc7fc85
Deleted: sha256:9e001e6a8dc8f5790523134a98ff01c4cc715a49db0c81e3c3a6a27a8657329b
sha256:f9c79b1576e92cbd1766105e07c6f8f86d5dc58b8221d91f0c6f34fa7ab6e384
go version go1.10 linux/amd64
Waiting for rsyncd to be ready
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:f9c79b1576e92cbd1766105e07c6f8f86d5dc58b8221d91f0c6f34fa7ab6e384
go version go1.10 linux/amd64
go version go1.10 linux/amd64
# kubevirt.io/kubevirt/tests_test
tests/vmi_networking_test.go:445:4: undefined: waitUntilVMIReady
make[1]: *** [build] Error 2
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt'
make: *** [cluster-build] Error 2
+ make cluster-down
./cluster/down.sh
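The failure is an ordinary compile error in the functional-test package ('waitUntilVMIReady' is referenced at tests/vmi_networking_test.go:445 but not defined there), so nothing cluster-related is at fault and the same build step should reproduce it outside CI. A minimal sketch, assuming a standard KubeVirt checkout:

  cd go/src/kubevirt.io/kubevirt
  hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install"
  # Expected to fail with:
  #   # kubevirt.io/kubevirt/tests_test
  #   tests/vmi_networking_test.go:445:4: undefined: waitUntilVMIReady

Note that 'make cluster-down' still runs after the failed cluster-sync because of the trap installed at the top of the log; the SIGSTOP in that trap list has no effect (SIGSTOP cannot be caught), but the EXIT handler alone covers this failure path.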