+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2
+ [[ k8s-1.10.3-release =~ openshift-.* ]]
+ [[ k8s-1.10.3-release =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/07/23 10:14:03 Waiting for host: 192.168.66.101:22
2018/07/23 10:14:06 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/23 10:14:18 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 30.004497 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:8a8c5d580d350287f4a3f76f7089de3973be0fc95f2d8c5bfedd02b743113171

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/23 10:15:03 Waiting for host: 192.168.66.102:22
2018/07/23 10:15:06 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/23 10:15:14 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/23 10:15:19 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: connection refused. Sleeping 5s
2018/07/23 10:15:24 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    1m        v1.10.3
node02    Ready     <none>    44s       v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    1m        v1.10.3
node02    Ready     <none>    46s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33164/kubevirt/virt-controller:devel
Untagged: localhost:33164/kubevirt/virt-controller@sha256:f7eac823d8b802a3fc6f2aece39756262515da424c9057e7862b7f8eddf1483a
Deleted: sha256:5c694024f20fea1901c4ebff3d7ebccf04950339d50ebb98778cd965159cffe4
Deleted: sha256:ad4791e4b65a79fe856bee629201a9afffb95935a75604220c53b017c70f7229
Deleted: sha256:519a6bfb27093bc1bf25b8ef2d50d1ac1ec7454e845b183bc6736e5040b22f2c
Deleted: sha256:45f7f97d5d97f76ded1c84769b0ea40780d8df1a580f9757144bdbfb8573f882
Untagged: localhost:33164/kubevirt/virt-launcher:devel
Untagged: localhost:33164/kubevirt/virt-launcher@sha256:b4d7706ceae951dbfcb3e3177cad201e78ffcf6c613ac9080c2f2bf30977398e
Deleted: sha256:a57b2a24175b4ce279b3d270b8f64ef0b8ca249efc250740110a1be315e8e2e3
Deleted: sha256:a11a828c7c3658f519cd36ea0341b35765131111e865198a1ef547a97fbae523
Deleted: sha256:4ac688582056ea7d536291410f38cffd5c4cba536a3b0c6044e1cc40747a4dcb
Deleted: sha256:dde8e821f380a65cb8035776b6447308071d8f03256950dbf89edf248301ad84
Deleted: sha256:c97a84873fe99e85e65fecdaa4618b5a956fc8e8cc1bc475a7da7c398c0d4b2b
Deleted: sha256:f553518f32e641fbdf23877b87b951610930675d2f13b1e75a83be13b764e5e0
Deleted: sha256:ac52c5b89d0ad9336ab423f3defd9c2318e2875ceca92d19c8a530f4649ef93e
Deleted: sha256:d8d4ccc45efccd75e8536fba6ad4624a6e8983f70cd6144a35c3b74f44dd6fcb
Deleted: sha256:852e4d580bb4c0da7990ed00bf4d09db307684dbdd67a8636e435cfeecbbaab9
Deleted: sha256:5b0f9511982e46ddce8497b2cb9d0e48cf46701d85cb8c0446f48c79d665d831
Deleted: sha256:255a4d5b78e04410db2e3a0cf436febad0642bb7bce9c150cd94e264072e0377
Deleted: sha256:490c145b62b3c34a6c331dd7ce715f7ad3265de9e2c05180a92e8bdb19a7dcfd
Untagged: localhost:33164/kubevirt/virt-handler:devel
Untagged: localhost:33164/kubevirt/virt-handler@sha256:831cc9942d58a152f3dd64b7536d81294be4c3993d00abe454a593a04664ee77
Deleted: sha256:a87487bd6b0b6179d820d6c91deb57a79b3ebe8c19d1501f698cabf28701afe5
Deleted: sha256:d81f28e8d880b807a9bd55844ea56e37a9a2c4f88b11d8ce8a8934b94667d72e
Deleted: sha256:f489e06a6668c14e6f539742b2b1d4c7901f089947554fd4f1b1b41a4dc7662e
Deleted: sha256:3d9e485c34f441bda36001f514c8151205fadddc33cb39037ed31eb502df7daf
Untagged: localhost:33164/kubevirt/virt-api:devel
Untagged: localhost:33164/kubevirt/virt-api@sha256:a8f64e1af69c4f68cfbcbe3b5bcd8b56f4ab6033e063ec66ff7f09fe351f7e76
Deleted: sha256:175a65d3df4b42d239284e75c0a6e0c2c2079478646c71cebdca188d7bed94f8
Deleted: sha256:68240969fb9f6aa284e803bc66a308cd34b317203cd82418d8c0ab6e159e6e71
Deleted: sha256:3fc6653a23ae4c00382c7432a52d90c58cde86b826291ab6da7a09e3d4b3de7f
Deleted: sha256:a9411b9ac41567678f667277e4be9e8932a44251d9c8c825be0bbe3170588a76
Untagged: localhost:33164/kubevirt/subresource-access-test:devel
Untagged: localhost:33164/kubevirt/subresource-access-test@sha256:826f66db7efcff3ca0fa22c827c8cde2953cbb69b8a17a7afdc8739df6ff7ded
Deleted: sha256:f1d331495686685118b9efce9cc4c18672c9bd08c41d4be7545e780db0dc21e3
Deleted: sha256:229e23412e3af58bd0253d776b80f98f26b24e35f465d575411c7459115cff54
Deleted: sha256:97359cf2095daebf71e9c316a09a0f7e8759ca3c0e6eb9de92e42910a103263c
Deleted: sha256:8ce79b7965ca264957d17e9450b9a508ed9bc5c0100c2669765aa2e7728c07f2
Untagged: localhost:33164/kubevirt/example-hook-sidecar:devel
Untagged: localhost:33164/kubevirt/example-hook-sidecar@sha256:f0bb3f08534545bd01bc1803ca23d0290cfa212d2fef1d3d3d14d2641e83886a
Deleted: sha256:a02565d31586564c217f2712f34071681bf17fa0d3a2cce3742190f450a9d1b3
Deleted: sha256:a4e682e145a70668ce7152095cc09a59e3cc9b1c1692a11f820260c7ca3c6dc4
Deleted: sha256:3b5200328ae452f3bb960f86b8527f29146a3238448654ff96b8b3dfe6e3d025
Deleted: sha256:3c20e46b562b5fea69862589327ef4cd7b4d362f18befa527a1c78ba4e1a1ab8
sha256:8314c812ee3200233db076e79036b39759d30fc3dd7fe921b323b19b14306fd6
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:8314c812ee3200233db076e79036b39759d30fc3dd7fe921b323b19b14306fd6
go version go1.10 linux/amd64
go version go1.10 linux/amd64
# kubevirt.io/kubevirt/pkg/virt-handler/health
pkg/virt-handler/health/health.go:71: kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/apis/meta/v1.Time composite literal uses unkeyed fields
make[1]: *** [build] Error 2
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt'
make: *** [cluster-build] Error 2
+ make cluster-down
./cluster/down.sh
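
Note on the failure: make cluster-sync dies in ./hack/check.sh, and the message format suggests go vet's composites check. It is complaining that health.go:71 builds a metav1.Time (a struct imported from k8s.io/apimachinery, which embeds time.Time) with positional rather than named fields. A minimal sketch of the pattern and its fix, assuming the literal wraps the current time; the variable names below are illustrative, not taken from health.go:

    package main

    import (
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// What vet flags: an unkeyed composite literal for a struct type
    	// defined in another package. The unkeyed form would be:
    	//
    	//     now := metav1.Time{time.Now()} // "composite literal uses unkeyed fields"
    	//
    	// Keyed fields stay valid if the struct ever gains or reorders
    	// fields, and satisfy go vet:
    	now := metav1.Time{Time: time.Now()}
    	fmt.Println(now)
    }

Because the check fails, the build aborts before the functional tests ever run, and the EXIT trap set at the top of the job fires, which is the final make cluster-down / ./cluster/down.sh in the log.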