+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ [[ k8s-1.10.3-dev =~ openshift-.* ]]
+ [[ k8s-1.10.3-dev =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/06 17:59:27 Waiting for host: 192.168.66.101:22
2018/07/06 17:59:30 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/06 17:59:42 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 27.510787 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:dee5ef6d17cfb60f36019895ced6c02525f59e9530b46013ad0dd6d0c973b21a

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/06 18:00:25 Waiting for host: 192.168.66.102:22
2018/07/06 18:00:28 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/06 18:00:40 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
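The timestamped "Waiting for host" lines above come from a small Go utility that dials each node's SSH port in a retry loop until the VM is reachable. A minimal sketch of that pattern, for illustration only (this is not KubeVirt's actual gocli code; the address and the 5s interval are taken from the log):

package main

import (
	"log"
	"net"
	"time"
)

// waitForHost blocks until a TCP connection to addr succeeds, logging
// and sleeping 5s between attempts, matching the log lines above.
func waitForHost(addr string) {
	log.Printf("Waiting for host: %s", addr)
	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			log.Printf("Problem with dial: %v. Sleeping 5s", err)
			time.Sleep(5 * time.Second)
			continue
		}
		conn.Close()
		log.Printf("Connected to tcp://%s", addr)
		return
	}
}

func main() {
	waitForHost("192.168.66.102:22")
}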
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    40s       v1.10.3
node02    Ready     <none>    18s       v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ grep NotReady
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    40s       v1.10.3
node02    Ready     <none>    18s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33906/kubevirt/virt-controller:devel
Untagged: localhost:33906/kubevirt/virt-controller@sha256:5ed4fcb4865241afe3bf3e4170dd2a0512f8695774ae6b4285b0ba19c236e2de
Deleted: sha256:59a4600f8086a90cc3ef454daedc005c5259eb0398c5ad7dd6ae3e9c1bbafd14
Deleted: sha256:32f55074128f65a6f9a95d101adf1e6c31bc45323bcebaab74deb51db108956b
Deleted: sha256:7c49fed54e10b5f0fcaf013c72479b8dd0cb86934db61c1824d41f13f804e042
Untagged: localhost:33906/kubevirt/virt-launcher:devel
Untagged: localhost:33906/kubevirt/virt-launcher@sha256:8a5cbaf2694025ddfff5039ea1895193ed59525c5ead1eaf1a578192171ed888
Deleted: sha256:af530fdc0f8ace7b35b4230cf0c62f58a1c2b6dbc12ac7a4a58b5f5ab4faf80e
Deleted: sha256:f9a4e48067a1c53870546348aa0612851cefba6d6585abafb95fc64a021184c8
Deleted: sha256:7a0ae5d7ef9c2a2c98345450c113a0af5ad8015a17ab0dad61f9626eb5d77d6c
Deleted: sha256:eae7edf70a7b1abd331980b09b22c1ae41bd91933ac339df2d4d6c299e332f9e
Deleted: sha256:cd77cc72384f21f4f8c9e5a92d197a4a1a830bb8d2d25f8f4241ba05a0b144ef
Deleted: sha256:2053a83208be342072e2b9dca7d99c5ed543c84e4ea0bb264b6069389300439f
Deleted: sha256:9f986fcf0474ed2a8646a968720c7c2da9faaff8d6db1ae636670c27f30d76c2
Deleted: sha256:d515a9cd1bc221d6ffece7af6a6440001b098ba3212cecf578a7e7e9b8103766
Deleted: sha256:e224e23de066f7c60e6b5546bc05c54a37e2a53086ccb8f1a1990dbd41c4e13d
Deleted: sha256:fcff9d4355d865d7a1b5de66f6ab21ed42c808676f42fdb3708bd941e32ff686
Deleted: sha256:a70022babc3bd0f5b413f7e6544caa97c148a0d5ad1708e1256479a603b4a784
Untagged: localhost:33906/kubevirt/virt-handler:devel
Untagged: localhost:33906/kubevirt/virt-handler@sha256:cc1cf67a0ac4af59ac28903ef70758f09505ca7b7e49e1533d20f0a37841bdbc
Deleted: sha256:a1dd57259020f8bcc60b840d06149fa82cb32e0964e45567b82333dfb0b5af51
Deleted: sha256:19b4b5f04a5d7a1aae0cf3024df6dd6d700c70c0ee6a734af7021f56b3503d3e
Deleted: sha256:26cd4b0391c13c5749bc7f00d18926f1cb879fa07acdd397b3b63e10f89bba68
Deleted: sha256:df6d02506091e323d1dabe1eb62afa8a3a1bba793091f162b80b906090f5ca4c
Untagged: localhost:33906/kubevirt/virt-api:devel
Untagged: localhost:33906/kubevirt/virt-api@sha256:159929c7f756843fa104bfff991f02b917969a0945b38441e9b34467bb7029d3
Deleted: sha256:a68b1f39b7d061fb4aa97a8aae035984a49dd278300262e52e6d4a4c1a78a1fd
Deleted: sha256:a77c9d9f092ed5688977a69d17a192bf52ac025b041549b17ed86683180a60d0
Deleted: sha256:3df11873636a4effab97bfa6237dc05046c8d99b544e440f81eb36a04363e280
Deleted: sha256:aa386f5174335a5b00541288cd1f61ca2a2bda7d42c837611d462b0423971545
Untagged: localhost:33906/kubevirt/subresource-access-test:devel
Untagged: localhost:33906/kubevirt/subresource-access-test@sha256:8e645ceb75a70ead586724f3a3ac6b7d9a41b362353bae628fe296aa72cc55b5
Deleted: sha256:2e200cb918a7f5728d5dd1a4a0ba27d484047313281b5d4d4056c3a6509e55bd
Deleted: sha256:09368ebe02526b190271cf6f9035d0c705a08d7b99df8eec338b2e3fbbd01d6b
Deleted: sha256:6efd2b56bdd3ab51732a6a3271821174841af2db59aa9e47c32826bdacbfad42
sha256:6eacca7072242103a52e09bde728cf8c3c4134c37779287f25cbf1a1b93180b2
go version go1.10 linux/amd64
go version go1.10 linux/amd64
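The set +e block at the top of this step acts as a node-readiness gate: it tolerates a failing kubectl call, greps the headerless node list for NotReady, and only restores set -e and proceeds once nothing matches. The same check done programmatically, as a hedged sketch against a recent client-go release (the kubeconfig path is the one kubeadm wrote above; this program is not part of the KubeVirt scripts):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allReady reports whether every node has a NodeReady condition of True.
func allReady(nodes []corev1.Node) bool {
	if len(nodes) == 0 {
		return false
	}
	for _, node := range nodes {
		ready := false
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	// Poll until all nodes report Ready, mirroring the shell loop above.
	for {
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err == nil && allReady(nodes.Items) {
			fmt.Println("Nodes are ready.")
			return
		}
		time.Sleep(5 * time.Second)
	}
}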
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:6eacca7072242103a52e09bde728cf8c3c4134c37779287f25cbf1a1b93180b2
go version go1.10 linux/amd64
go version go1.10 linux/amd64
# kubevirt.io/kubevirt/tests_test
tests/vmi_networking_test.go:406:4: undefined: waitUntilVMIReady
tests/vmi_networking_test.go:419:4: undefined: waitUntilVMIReady
tests/vmi_networking_test.go:432:4: undefined: waitUntilVMIReady
make[1]: *** [build] Error 2
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt'
make: *** [cluster-build] Error 2
+ make cluster-down
./cluster/down.sh
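The build aborts because three call sites in tests/vmi_networking_test.go reference a helper, waitUntilVMIReady, that is undefined in the tests_test package (likely removed or renamed in a rebase), so the EXIT trap tears the cluster down. A helper with that name would typically poll the VMI until it reports the Running phase. A hedged sketch of what such a helper could look like, using Gomega and the KubeVirt client of that era; the signature and client calls are assumptions, not the actual missing code:

package tests_test

import (
	"time"

	. "github.com/onsi/gomega"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	v1 "kubevirt.io/kubevirt/pkg/api/v1"
	"kubevirt.io/kubevirt/pkg/kubecli"
)

// waitUntilVMIReady polls the VMI until its phase is Running, then
// returns the refreshed object. Assumed shape only; the real helper
// this test expected may differ.
func waitUntilVMIReady(virtClient kubecli.KubevirtClient, vmi *v1.VirtualMachineInstance) *v1.VirtualMachineInstance {
	Eventually(func() v1.VirtualMachineInstancePhase {
		current, err := virtClient.VirtualMachineInstance(vmi.Namespace).Get(vmi.Name, &metav1.GetOptions{})
		Expect(err).ToNot(HaveOccurred())
		return current.Status.Phase
	}, 120*time.Second, 2*time.Second).Should(Equal(v1.Running))

	updated, err := virtClient.VirtualMachineInstance(vmi.Namespace).Get(vmi.Name, &metav1.GetOptions{})
	Expect(err).ToNot(HaveOccurred())
	return updated
}

Restoring such a helper, or pointing the three call sites at whatever shared readiness helper the tests package currently exports, would unblock make cluster-sync.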