+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ [[ k8s-1.10.3-dev =~ openshift-.* ]]
+ [[ k8s-1.10.3-dev =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/12 08:59:27 Waiting for host: 192.168.66.101:22
2018/07/12 08:59:30 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/12 08:59:38 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/12 08:59:43 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
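As an aside on the trace above: the script installs a `make cluster-down` cleanup handler via `trap`, but lists SIGSTOP among the trapped signals, which has no effect because SIGSTOP can never be caught or ignored. A minimal sketch of the same cleanup-trap pattern (with `echo` stand-ins for the `make` targets, which are assumptions here) looks like:

```shell
#!/bin/sh
# Sketch of the cleanup-trap pattern used in the logged script.
# SIGSTOP cannot be caught or trapped, so listing it is a no-op;
# EXIT plus INT/TERM is what actually guarantees cleanup runs.
cleanup() {
    echo "running cleanup"    # stand-in for 'make cluster-down'
}
trap cleanup EXIT INT TERM

echo "doing work"             # stand-in for 'make cluster-up' and the tests
```

The EXIT trap fires on normal termination as well as after INT/TERM, so the teardown runs whether the test run succeeds, fails, or is interrupted.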
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 29.002972 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:7e160b88e71aed4621a0c8793df5e38981b75cbb91430aa9005575521830b161

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/12 09:00:26 Waiting for host: 192.168.66.102:22
2018/07/12 09:00:29 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/12 09:00:41 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    42s       v1.10.3
node02    Ready     <none>    17s       v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ grep NotReady
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    42s       v1.10.3
node02    Ready     <none>    17s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:34008/kubevirt/virt-controller:devel
Untagged: localhost:34008/kubevirt/virt-controller@sha256:06da97df6b9ba36e34a81cb72bb36d24f21b36a174452655333f4f4992068ee9
Deleted: sha256:d1fd907f1117ffb41b042d86cbd6b09895a07fd8fee45dd4430c6f1dade8eb26
Deleted: sha256:ec6282ee5e99935dc77ed053100ce009875a5750053fbc01996faf7f57d2ff37
Deleted: sha256:af835c49a49e5cf52307294a4a9d9429c2b142e9cd671e125dc726c6c58e73b2
Untagged: localhost:34008/kubevirt/virt-launcher:devel
Untagged: localhost:34008/kubevirt/virt-launcher@sha256:ff5d9c0a4cf5c2cff62241763cebbe9e0a5379f38201abdab6e716ed919b14ce
Deleted: sha256:0122f6b071398172d0e0999052ab1b2bc9185ea71c5985a72c81f393ebb3c76f
Deleted: sha256:ca828c0115107febe11b2c8d3a6528e24a5a3bf8c183c1934fecf226be72e1cc
Deleted: sha256:75f90f6070b6f940e679b035fed2c5be8e472dc99e9017b78562902c5a9c1d54
Deleted: sha256:d4729947fdb407cf4cd9b56e09ca507b219a3b9d2f8fb7c017ba1f030f7f72aa
Deleted: sha256:22fbe5b6bf6462fb28ace41f4b6927ac18f4be26a215733968734b295b66723c
Deleted: sha256:f72104068bad728cbf20fcd4b8a72b5e667fcfd9e54f8e7d31ea74a384bb7774
Deleted: sha256:7974abbd4ad701da2db9f5958c751ec6d5cfa5655895543a75411deb8974eadb
Deleted: sha256:ab3e5b8b029522c5f88b017832726ab5c4e34781e3425b47933d1a152030a1cd
Deleted: sha256:32a9c707f017404ad2757da41ba68c756596e4f84983c50556a4380795f9dd3f
Deleted: sha256:63fff62fd1fb082b2ccb4c89a3c4d9bb078f5a9a7bb1351d2ea8496d7cd45489
Deleted: sha256:4adae5acc5d11bb25d3841a38856ec7d3c2dd2c6a6fdd592ded52cdf278e121b
Untagged: localhost:34008/kubevirt/virt-handler:devel
Untagged: localhost:34008/kubevirt/virt-handler@sha256:452e2d6100dd23ce75f41b94c0c69ca26f0d54d1ca221beb2b53cc7e5150b86b
Deleted: sha256:73bf2d332b5c7e98a00a713b5039ef0779a42153d2f6fcc1996c6fcab21ffef0
Deleted: sha256:2939e6fbba4beafe4474c0c26d95a21b373d5f49c476197520ff910dfcac50c8
Deleted: sha256:56a4131451e3cdadc92b48d21ca55b500b0de48f97e6129a7cb0bc6164e174ed
Deleted: sha256:1c4ea6a128987bf0f1954fd703d6edb24911ff2ac588c7d70e484d5d232b96b7
Untagged: localhost:34008/kubevirt/virt-api:devel
Untagged: localhost:34008/kubevirt/virt-api@sha256:b24d0fd8fed607eddafb713475a23cab5996d4073da69aa367b020ac5543e80e
Deleted: sha256:5cb494341ec89b2f8763f589154da588ac2b0e5e1dbd1efd757e09580ea4ea2e
Untagged: localhost:34008/kubevirt/subresource-access-test:devel
Untagged: localhost:34008/kubevirt/subresource-access-test@sha256:42c61a4fd102fb919549e7b963a62f6b8ea0239b1e5baa848839662050d69068
Deleted: sha256:acbfce653d2d1405756670b7d30d37b2c3415e60efc63cfd9b74c4520994c4b2
Deleted: sha256:4dafb7306990164ca1a6edd773eecb8efc097088b3b2f8f8686f8879363d7502
Deleted: sha256:c5c4e41d7b5f8e0d8ab3775b1d6489f8f6c348e4c011510e76c635fa9908e97b
sha256:c3b4a1cfe3e82dfe174cd6868499201069fa30ec3f2fc8b23d7667c6ddec3272
go version go1.10 linux/amd64
go version go1.10 linux/amd64
diff -r /root/go/src/kubevirt.io/kubevirt/_out/manifests/dev/virt-controller.yaml /tmp/tmp.a7bGKymCd9//root/go/src/kubevirt.io/kubevirt/_out/templates/manifests/dev/virt-controller.yaml
74c74
< runAsNonRoot: true
\ No newline at end of file
---
> runAsNonRoot: true
make: *** [cluster-build] Error 1
+ make cluster-down
./cluster/down.sh
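The actual failure in this run is the final `diff`: the generated `virt-controller.yaml` and the template differ only in that one lacks a trailing newline (`\ No newline at end of file`), which is enough for `diff` to report a change and for `make cluster-build` to exit non-zero. A minimal sketch of how to detect that condition, using a temporary file as a hypothetical stand-in for the manifest:

```shell
# Sketch: reproduce the "\ No newline at end of file" check. A text file
# ends cleanly when its last byte is a newline; 'tail -c 1' returns that
# byte, and command substitution strips a trailing newline, so the result
# is empty exactly when the file ends with a newline.
f=$(mktemp)                          # hypothetical stand-in for virt-controller.yaml
printf 'runAsNonRoot: true' > "$f"   # written WITHOUT a trailing newline
if [ -n "$(tail -c 1 "$f")" ]; then
    echo "missing trailing newline: $f"
fi
rm -f "$f"
```

Appending a single newline to the generated file (e.g. `printf '\n' >> "$f"`) would make the two manifests byte-identical and let the diff, and hence the build step, pass.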