+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ [[ k8s-1.10.3-dev =~ openshift-.* ]]
+ [[ k8s-1.10.3-dev =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/07 07:59:27 Waiting for host: 192.168.66.101:22
2018/07/07 07:59:30 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/07 07:59:42 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
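The `trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP` line above is the cleanup hook that guarantees the cluster is torn down even if a later step fails. A minimal sketch of that pattern (the `cleanup` function and its echo are stand-ins for the real `make cluster-down`); note that SIGSTOP, like SIGKILL, cannot actually be trapped, so listing it is harmless but has no effect:

```shell
#!/bin/bash
# Cleanup-on-exit pattern: register a handler once, and it fires on normal
# exit, Ctrl-C (SIGINT), or termination (SIGTERM).
cleanup() {
    echo "cleanup ran"   # stand-in for `make cluster-down`
}
trap cleanup EXIT SIGINT SIGTERM

echo "doing work"
# cleanup runs automatically here, after the last command
```

Running the script prints `doing work` followed by `cleanup ran`, because the EXIT trap fires as the shell terminates.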
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 26.003036 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:563a100664b56820a4d4c0c73c7fbac096e991b3cf554f0674e652b45474bda6

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/07 08:00:23 Waiting for host: 192.168.66.102:22
2018/07/07 08:00:26 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/07 08:00:38 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
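After the join, the harness verifies that every node reports Ready before proceeding to `cluster-sync`. A minimal sketch of that polling pattern, assuming a working `kubectl` in PATH (the retry count and sleep interval are placeholders, not the script's actual values):

```shell
#!/bin/bash
# Poll `kubectl get nodes` until no node reports NotReady, mirroring the
# set +e / grep NotReady check seen in the trace below.
wait_for_nodes() {
    local retries=12
    while [ "$retries" -gt 0 ]; do
        # grep -q exits non-zero when no NotReady lines remain
        if ! kubectl get nodes --no-headers | grep -q NotReady; then
            echo "Nodes are ready:"
            kubectl get nodes
            return 0
        fi
        sleep 5
        retries=$((retries - 1))
    done
    echo "timed out waiting for nodes" >&2
    return 1
}
```

The `grep NotReady` trick works because `kubectl get nodes --no-headers` prints one line per node with its STATUS column, so an empty grep result means every node is Ready.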
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01   Ready     master    40s       v1.10.3
node02   Ready     <none>    14s       v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    49s       v1.10.3
node02    Ready     <none>    23s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33805/kubevirt/virt-controller:devel
Untagged: localhost:33805/kubevirt/virt-controller@sha256:fde1f4c4510c877dcccad051639baeaeaf5eb7c82d795393ca11b804694a199f
Deleted: sha256:657c0228458dea0b192106d6159b008c14e1e1107cf09fd359ad9e2c5240b4fa
Deleted: sha256:6c1fd1f90fc425a41f97237d391199272dcb51587de37fff20468caa4f86609b
Deleted: sha256:c89b573c335a1b2fe0fab3445f3e1bf8a00127d539d3ad1a41fc26270e56142e
Untagged: localhost:33805/kubevirt/virt-launcher:devel
Untagged: localhost:33805/kubevirt/virt-launcher@sha256:0835f99072178d58bd7603bdef92b82bb5533010d71b4df8d107b72e8154a8cb
Deleted: sha256:2027fc27a2b420953b53bd1d0f4c1f8cb8f20fe4392ed5d8bc2b623d6ad2c92b
Deleted: sha256:0ea2fae07bb96af4f1995d3b9cbfbb57069d6ccf1a8ecda3767217626e6346ee
Deleted: sha256:6358a6f3213c3c0fcdb7c0c7ffb63c5f05bb3e77f61fc6b26945e0b5f55adf8d
Deleted: sha256:75c1156ae89e7c098b95d7f3b4941a522462aab4b98ebb2078952dc80428fc57
Deleted: sha256:b778b8cd7152c38d38bf96c03c5df0381f2ff6cf48052477e39a93278aa668e5
Deleted: sha256:46fb135dae2312161f88f2afc37f11a04ca575f8416869f144417cf3a9d3128b
Deleted: sha256:92f7f5da08cdb62f8594efd452b60f9dfc01b7f56627ea65c0171091c2e6b8ef
Deleted: sha256:386a36cf8a05044f9f62076ad74992dd181434c894723875f58b870bba4e8c81
Deleted: sha256:635a6c674046dae93d552c83132bfcf69d01ea7dbf167e225384aa76241ea62d
Deleted: sha256:599577a422b85e821939e3898556027680379bb18bc3ab520d0202f756471e86
Deleted: sha256:c70caa5e7da1faec10b74f68d86acb4dbb8878f1f7fc3af2e49ff5579861bd09
Untagged: localhost:33805/kubevirt/virt-handler:devel
Untagged: localhost:33805/kubevirt/virt-handler@sha256:55ebd7439556eb130647d2cd1d64d61159eb2317c5e5693fa47c28842e92cd63
Deleted: sha256:e9b32dd793dc8f63531d1788a2a2889e1736e43639f68c8033aa9e47b2316dff
Deleted: sha256:1a55bbe74a291508aea5208c431603f21cd9d6ccd126ead2ee3a2d789b2f92be
Untagged: localhost:33805/kubevirt/virt-api:devel
Untagged: localhost:33805/kubevirt/virt-api@sha256:e250cec2b2b5534e51e088d8b7b18d81a45776f73e87505cfcea9a48f9a5bec0
Deleted: sha256:735be73fcfcb125b06db81d8592efaef6a15e6119895e396153857376c54feb5
Deleted: sha256:a26b63793637d217ddacd8d8dac3be75622ce0cb154707de77454776d1b7075a
Deleted: sha256:f82092040cc47efca29d5c55f645a26677aac556671efa2725bfc04a2684d0af
Deleted: sha256:6c9cb1e6790950fa5ee542248a7b4392f4fe39deda4ecdd8efd35ac04ced4122
Untagged: localhost:33805/kubevirt/subresource-access-test:devel
Untagged: localhost:33805/kubevirt/subresource-access-test@sha256:9ef9972be33f34743f1d873cf864b324616193e742d43341ff5c50fe9c7e05fb
Deleted: sha256:775466a98437f1c33959ff16405ddc37cdec7930085f2e0ba7f11b46cf7d784d
Deleted: sha256:6e7597381914b92f2b37ad7ccaab1234c45f42ea4413f5afeb5528c8a8c35b84
Deleted: sha256:126a94e5f6b59fdb5e59289ab44f3d9544318b08af62a6b98c6cb4fcd7b4250d
sha256:6eacca7072242103a52e09bde728cf8c3c4134c37779287f25cbf1a1b93180b2
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:6eacca7072242103a52e09bde728cf8c3c4134c37779287f25cbf1a1b93180b2
go version go1.10 linux/amd64
go version go1.10 linux/amd64
# kubevirt.io/kubevirt/pkg/virt-controller/services_test
pkg/virt-controller/services/template_test.go:164:22: undefined: "kubevirt.io/kubevirt/pkg/api/v1".Toleration
make[1]: *** [build] Error 2
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt'
make: *** [cluster-build] Error 2
+ make cluster-down
./cluster/down.sh