+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ [[ k8s-1.10.3-dev =~ openshift-.* ]]
+ [[ k8s-1.10.3-dev =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/23 10:13:33 Waiting for host: 192.168.66.101:22
2018/07/23 10:13:36 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/23 10:13:44 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/23 10:13:49 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: connection refused. Sleeping 5s
2018/07/23 10:13:54 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
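The `Waiting for host` / `Problem with dial ... Sleeping 5s` lines above come from the provider's SSH readiness probe. A minimal sketch of that retry pattern, assuming a bash `/dev/tcp` probe (the function name and probe mechanism are illustrative, not the project's actual code):

```shell
# Sketch only: retry a TCP connect to host:port, sleeping 5s between
# attempts, until it succeeds or the retry budget is exhausted.
wait_for_host() {
  local host="$1" port="$2" retries="${3:-30}"
  local i=0
  # bash's /dev/tcp pseudo-device attempts a TCP connect on redirection
  while ! timeout 1 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; do
    i=$((i + 1))
    if [ "$i" -ge "$retries" ]; then
      echo "giving up on ${host}:${port}" >&2
      return 1
    fi
    echo "Problem with dial: ${host}:${port} unreachable. Sleeping 5s"
    sleep 5
  done
  echo "Connected to tcp://${host}:${port}"
}
```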
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 22.011452 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:f4528ade078970e49ebc27a1906c02abea61e8205c413fbb1990dd414ce5064f

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/23 10:14:34 Waiting for host: 192.168.66.102:22
2018/07/23 10:14:38 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/23 10:14:50 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 37s v1.10.3
node02 NotReady 11s v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ grep NotReady
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n 'node02 NotReady 11s v1.10.3' ']'
+ echo 'Waiting for all nodes to become ready ...'
Waiting for all nodes to become ready ...
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 37s v1.10.3
node02 NotReady 11s v1.10.3
+ kubectl_rc=0
+ sleep 10
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    48s       v1.10.3
node02    Ready     22s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
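The `set +e` / `grep NotReady` / `sleep 10` sequence traced above is a readiness-wait loop. A minimal sketch of the same pattern (not `cluster/kubectl.sh` itself; the node-listing command is a parameter here so the sketch can be tried without a live cluster, whereas the real script calls kubectl directly and also retries on a non-zero `kubectl_rc`):

```shell
# Sketch only: poll the node list until no entry reports NotReady.
# $1 is any command printing one "<name> <status> ..." line per node;
# defaults to the kubectl invocation seen in the trace above.
wait_for_nodes_ready() {
  local get_nodes="${1:-kubectl get nodes --no-headers}"
  while $get_nodes | grep -q NotReady; do
    echo 'Waiting for all nodes to become ready ...'
    sleep 10
  done
  echo 'Nodes are ready:'
}
```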
Untagged: localhost:33697/kubevirt/virt-controller:devel
Untagged: localhost:33697/kubevirt/virt-controller@sha256:46881c6076640d6388a3d4de56f303994703eacc1cabc04d85dd008e23d449cb
Deleted: sha256:7dc06476ab8424f57e9dcc73930bae5eb85780fee1ac8c925186b4c9738dbaae
Deleted: sha256:689c896f9e6637c37ccf0bcafb1dceaab7ac414842ab68459f6e72e955154fd2
Deleted: sha256:dba40950698986eac579cd4210ec8b98d29aebd831ab02299fc8c478d9150f6e
Deleted: sha256:c39a860c9908858655ea61e7113a9615c757dbbaa031e402ac575cf53c044f51
Untagged: localhost:33697/kubevirt/virt-launcher:devel
Untagged: localhost:33697/kubevirt/virt-launcher@sha256:134bf587df6b03ca10a3dcd8fe4da61c017beaa41bf08a828c82caed51953f8d
Deleted: sha256:a8b7b974abd11470918f40ba64b530aae452cc2ee59750c4d57a64c330c93efb
Deleted: sha256:b3276c10e1836087ed82a7ef300b764142e8b34b01f82a65897c70d98a84a0e7
Deleted: sha256:d3dfe4e960a20769d8c64c69576de7eefb53e804a225ffa480680e582fea1d0b
Deleted: sha256:cd4d2f8145475f61dc26bc5e11ecf1e5cb9d9785a7fc55bb9abd8d68b3651803
Deleted: sha256:1b22495289f847f4d491653fe1db8e2281feba56ada79652094bdb2e03805c4c
Deleted: sha256:bc5e1d165c198034a9b34ca214617f4553ce344df0c14ee263816970b9af7435
Deleted: sha256:0d48f933b849d6ddeb7160694db3d4b5baf100e8b95c9649cd95346d60857356
Deleted: sha256:e383da90b5ad91dc6795eb5904e8214d589266ddb865f164a46f1153b6b25f79
Deleted: sha256:a9a274324a1af9e86a8fdea40cc8def4f11a03ea6cb4bd2a0a27bbd83547f0d2
Deleted: sha256:1fa183db50c15650c0a7eaa5994a3beb4cfbc52b923ecd5850d915308974ec15
Deleted: sha256:e0a0369f0c01b152ab958720a2a55665e333dca0615e7401e821b197369f9790
Deleted: sha256:b00ce8588df7077238308b3ef9c305d795ebc501c128dc5e239c544cdc653a4c
Untagged: localhost:33697/kubevirt/virt-handler:devel
Untagged: localhost:33697/kubevirt/virt-handler@sha256:178cc8fa7d7cd90610b67b677954e6dcc74eb29dc6dbf0f5afa81c32cd6b3cd8
Deleted: sha256:f06b2b29980b038ce5afcab2d5d235c7eb462343f1978249bf8bf124a268f280
Deleted: sha256:87a07c6dc3dbfe796b2c33773d3a804f30e6e3f5f32ed39ac5dce33d21f4b00f
Deleted: sha256:7a18f17bc285fa96f19583d76cb80c5afd25f46441903c92f4a1c3f61a4a75cb
Deleted: sha256:4482b55c93b9ed01059da4cfba9ff455109c7dfa2e7c35ad7d1c6559cf36f4a0
Untagged: localhost:33697/kubevirt/virt-api:devel
Untagged: localhost:33697/kubevirt/virt-api@sha256:8cf7701d5357f851c499ba21f5a4a2f0236b3fc96f2842b2944cb116238ab459
Deleted: sha256:23ed6fa2421dcf1b278dd0ba1d8a24e661a8b95759022afb47380b976fe8d096
Deleted: sha256:b19f4c0f1319f7ad775491cdd2c0070bc1f130942d3733a2044e928e698cf94c
Deleted: sha256:4d760007cd2eb218f61b97de7e13ebaa706281a90ddbfc7b8871905339d3bbb2
Deleted: sha256:f250dea0691a250858c9648199f24fa6b0c5d108a0963fd285a2b49b65a9ca1d
Untagged: localhost:33697/kubevirt/subresource-access-test:devel
Untagged: localhost:33697/kubevirt/subresource-access-test@sha256:21ca3d1c15e30012419dc5e5029b5aa1bdeacfa2f00fbea6d53c7f2e5e8222b8
Deleted: sha256:f7e50d0ab3e2fab420ed44e2971b466ba802a279f3a2f38abd430c1abba67574
Deleted: sha256:cb4ba3cb34ebb6edbab0546f6cb9597e265f4be00be3f179bb589593f04b7990
Deleted: sha256:a654f43e80c9c0d98babd819775defd91d22e7774f54aafba5c0df217487acb0
Deleted: sha256:38c1558669562481a4be0a0b5dbe4c32fcbf2a8ea0c2234835a2cc8447560899
Untagged: localhost:33697/kubevirt/example-hook-sidecar:devel
Untagged: localhost:33697/kubevirt/example-hook-sidecar@sha256:09c97f62e3825c127884ff6cf021f75c302859fea6e2656fed3f30b5dc191ddf
Deleted: sha256:438e18fe8461feefb272acb4b293385c473f28f9c65960cade883c9b7b8241d4
Deleted: sha256:ca681d28e2ce09c6c564d289d11ee38470e470eaedca2e85b0136728d0f4843e
Deleted: sha256:6b8d1892ccb67a7fd0d00ae80d1322eeb0fcc598c5ee22b616c2bf1c6f9009f8
Deleted: sha256:cdeb8c56411027ea34a98d37d03fa72e5c5be6f09c06373a0859942e518e48ea
sha256:f9c79b1576e92cbd1766105e07c6f8f86d5dc58b8221d91f0c6f34fa7ab6e384
go version go1.10 linux/amd64
Waiting for rsyncd to be ready
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:f9c79b1576e92cbd1766105e07c6f8f86d5dc58b8221d91f0c6f34fa7ab6e384
go version go1.10 linux/amd64
go version go1.10 linux/amd64
# kubevirt.io/kubevirt/tests_test
tests/vmi_networking_test.go:445:4: undefined: waitUntilVMIReady
make[1]: *** [build] Error 2
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt'
make: *** [cluster-build] Error 2
+ make cluster-down
./cluster/down.sh