+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ [[ k8s-1.10.3-release =~ openshift-.* ]]
+ [[ k8s-1.10.3-release =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/12 08:59:35 Waiting for host: 192.168.66.101:22
2018/07/12 08:59:38 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/12 08:59:50 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 27.003993 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:cadeb729ea9caab64c63cbefbdc1e6dd542ddc5d3778964876b9d76237b1a505

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/12 09:00:31 Waiting for host: 192.168.66.102:22
2018/07/12 09:00:34 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/12 09:00:46 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    39s       v1.10.3
node02    Ready     <none>    16s       v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    40s       v1.10.3
node02    Ready     <none>    17s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33819/kubevirt/virt-controller:devel
Untagged: localhost:33819/kubevirt/virt-controller@sha256:e5b6881be9c2e0567c959059ba88a522417e83e37520c87f43790c660348a7cc
Deleted: sha256:8354ab3146c30c5deeef8f6226057427546900eb24e4f13305a661a73dfc5807
Deleted: sha256:f0b0a24df11cdd5e839e267a89f703607314b020897fa8f07f19ed371634ce86
Deleted: sha256:9e251cdfaf67e319c7d441370a4fdeb78dd631161fb7b3be8ec54d9b389ae49e
Deleted: sha256:f08f1633eb92da05d3578f9dea334677799fd7d91dab559f1b0c33994f12b110
Untagged: localhost:33819/kubevirt/virt-launcher:devel
Untagged: localhost:33819/kubevirt/virt-launcher@sha256:a588ac7f258ea6ab70c85adfcdca5fb7541490a8ffcb3eb4a8c95ca5ff320674
Deleted: sha256:3668ac092007c37495edc0a02976d85b00e4a1e8b6c4f61c1a6c4c9aad5e2bbd
Deleted: sha256:7cf51ad94d11e02d95dd1d4f7103c1cf85af311f9a93d4a418ceeace814b6a51
Deleted: sha256:3d950e4189fbeb2da1113a0a6445797ab30c67373a0bbe40a2fa1b98d6e21e46
Deleted: sha256:7e1de6f60d680e0e946b8e9874b9db45edce75e5e656a7037715eb0ed77f0bea
Deleted: sha256:3a0dfa737218fe530888192c833d7ea4679189d89249425fcdfbc4b2508fb954
Deleted: sha256:9007b1fdb56c235c53a9f15d91a419ccaf92c5caa9dc76bfa15f0254e9522329
Deleted: sha256:d337b19fad50a88e48ccb240da5fd309a3b97769f9f2ee7aae9d97efc812135c
Deleted: sha256:9d067d9458a416693ac0facf03a18de497f2760a0dbedeac2ff66de4132f6a6f
Deleted: sha256:93979b7e481790490939700ad631d17b40cd537f0a430350cb882d8b2c895969
Deleted: sha256:8067eb343ee700d2fff47ed4e4c6b06519317ff5df8f5625b5e8a6306e4dc9a5
Deleted: sha256:a45fcaf4fcfb1d57f8fac0729d1dd777b011b31ca5b3f78d58e6b988c20f4d2b
Deleted: sha256:312fbd7487f7809241944b1cd374e32d5c25ed3c0415128bff92cb065ee28991
Untagged: localhost:33819/kubevirt/virt-handler:devel
Untagged: localhost:33819/kubevirt/virt-handler@sha256:ee7b5e6ed6889a2d66a1929724a99ae8cff3c99930ddf0ad2fd76c6202217160
Deleted: sha256:cba0a64609dc15a25790de475a08ebfea5b21c74fe08ef53f62f3991dfa7509b
Deleted: sha256:c9120787d17014dd645d151ca90693f177dbb2788eded9082d0447bbd1e52e7d
Deleted: sha256:cfb77999dc9382d5140706081e6166709047452a6097b7e444abcb7a86015972
Deleted: sha256:f86f093879316563bf6aca2662516da802dd5b1e2ee8b513e269fe6254f90bb9
Untagged: localhost:33819/kubevirt/virt-api:devel
Untagged: localhost:33819/kubevirt/virt-api@sha256:89d61a0d3c09d0a89321d2b8389898edf47a73846031940b46d2d92f327124ec
Deleted: sha256:992f4cff7c39c812db8bdfa8eeedde14c55fce2f93c9929ea0991e56f946b088
Deleted: sha256:91c7b5a3f26341fbf66ebbbad01666e4258777cb8822fedf892dea79dc1aa428
Deleted: sha256:68b122e9d0d2abe2dab5de6ef797e8719171f117daa7714f94823fdb7e848b07
Deleted: sha256:670d4de6279c41fc47bc1c4383cd8f3cb40d7242bcd23d2affbf4f36cf7dbe8f
Untagged: localhost:33819/kubevirt/subresource-access-test:devel
Untagged: localhost:33819/kubevirt/subresource-access-test@sha256:66a17828793f3172b9ee0abd38753b3eae5bf9de148d6a6c0498f0f444ae4b32
Deleted: sha256:6289b7d984b330e6867f9ff75ed05bbe8c1b42f1aee35557b5a4c62ec65fd1f7
Deleted: sha256:69bc64d2375273950fbf9ac027fd160a9fc512444557358b8808490790145cee
Deleted: sha256:62a85767bb104ff54c8b3b4e18d149751fac6780b2a940c482bb1a10f0aa329f
Deleted: sha256:9f10fc611311c1662e2e743dd2a12d6dd09e69283edf879ae779b1af2609c25f
sha256:c3b4a1cfe3e82dfe174cd6868499201069fa30ec3f2fc8b23d7667c6ddec3272
go version go1.10 linux/amd64
go version go1.10 linux/amd64
diff -r /root/go/src/kubevirt.io/kubevirt/_out/manifests/dev/virt-controller.yaml /tmp/tmp.OySAlnsIUf//root/go/src/kubevirt.io/kubevirt/_out/templates/manifests/dev/virt-controller.yaml
74c74
< runAsNonRoot: true
\ No newline at end of file
---
> runAsNonRoot: true
make: *** [cluster-build] Error 1
+ make cluster-down
./cluster/down.sh