+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ [[ k8s-1.10.3-dev =~ openshift-.* ]]
+ [[ k8s-1.10.3-dev =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading ............................................................................
2018/07/06 17:55:30 Waiting for host: 192.168.66.101:22
2018/07/06 17:55:33 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/06 17:55:45 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
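
A note while the control plane comes up: the init above is driven entirely by a config file rather than kubeadm flags. The actual /etc/kubernetes/kubeadm.conf is not shown in this log; a plausible v1.10-era reconstruction from the values visible in the trace would look like the sketch below (the podSubnet is an assumption, flannel's default, and the file contents are hypothetical). Also note that the trap near the top of the trace lists SIGSTOP, a signal that cannot be caught by any process, so that entry has no effect; EXIT, SIGINT, and SIGTERM are what make the cluster-down cleanup fire.

# hypothetical reconstruction of /etc/kubernetes/kubeadm.conf for kubeadm v1.10;
# the real file is not included in this log, values below come from the trace
cat > /etc/kubernetes/kubeadm.conf <<EOF
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.10.3
token: abcdef.1234567890123456
api:
  advertiseAddress: 192.168.66.101
networking:
  podSubnet: 10.244.0.0/16
EOF
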
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 26.502584 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:f2020a9ed8542e13224611580df1e278d20ea49a6e220cfa536d75dbc2d1af6a

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/06 17:56:27 Waiting for host: 192.168.66.102:22
2018/07/06 17:56:30 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/06 17:56:38 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: connection refused. Sleeping 5s
2018/07/06 17:56:43 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.
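
Two annotations before the trace resumes. First, the worker joined with --discovery-token-unsafe-skip-ca-verification=true, which disables CA pinning; that is acceptable only for a disposable test cluster like this one. The hardened equivalent uses the discovery hash that kubeadm init printed above, and the hash can be recomputed from the master's CA if it was not recorded (the openssl pipeline below is the standard recipe from the Kubernetes documentation):

# pinned join, using the hash printed by 'kubeadm init' above
kubeadm join 192.168.66.101:6443 \
  --token abcdef.1234567890123456 \
  --discovery-token-ca-cert-hash sha256:f2020a9ed8542e13224611580df1e278d20ea49a6e220cfa536d75dbc2d1af6a

# recompute the pin on the master if it was lost
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'

Second, the next chunk of the trace verifies node readiness by grepping 'kubectl get nodes' for NotReady. A condensed loop form of that pattern (a hypothetical sketch; the script itself performs a single guarded check) would be:

# poll until no node reports NotReady; cluster/kubectl.sh is the
# repo's kubeconfig-aware kubectl wrapper seen in the trace
while cluster/kubectl.sh get nodes --no-headers | grep -q NotReady; do
  sleep 5
done
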
Run 'kubectl get nodes' on the master to see this node join the cluster.
Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    33s       v1.10.3
node02    Ready     <none>    12s       v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    33s       v1.10.3
node02    Ready     <none>    12s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33789/kubevirt/virt-controller:devel
Untagged: localhost:33789/kubevirt/virt-controller@sha256:ff58eff557262787cdf4b300765650055898eb9eb012bf5f53cede5b63d00572
Deleted: sha256:964d810727b2c453fc900dcb253bfefd208cba6b51e0bd0ce7adf57a5a5fe1ad
Deleted: sha256:d53ba7333647a3f537acf74b035880d3485afea72ee5d69843529c47c5636483
Deleted: sha256:16807d191a96bf63e5f211a128b69845f8eccaefb41f9e3763bc21bba203829a
Deleted: sha256:299afa359ca16bcee65a9d14f34646d84eaef7b16e5cb5b543392d5aa2312f3b
Untagged: localhost:33789/kubevirt/virt-launcher:devel
Untagged: localhost:33789/kubevirt/virt-launcher@sha256:a5ce511cb91776e90d756aad5143c847d0ba58ee20f37f3473f11bc9f4fba985
Deleted: sha256:a85ee77546375bc5b8f5268386f76ba22edf062f88fb238563991f896658c40a
Deleted: sha256:984f0b56b5688d4811fbbfac0708b5d248306ea473ed6aa14987d6586c9caa57
Deleted: sha256:5b9e0b8c08c80f1defd84fde033603af24ddef92470200298af6bf2909371b62
Deleted: sha256:69ac4b1ac6d57aeba8fe26c44509ebc6ee96f76bb3bfc2cf1a38dc4e717f9c89
Deleted: sha256:cf5f9e87925e575dde3dd5dc0d13c752d92f826553add2a72f532a9f974155fe
Deleted: sha256:90e1593025bd83b8aabd563d37f76bec0e89579f19a246868acfbae1d6313d9d
Deleted: sha256:3f3859eee6c4db2a6b4dc8ce2187b3ab7f3774a76a915d7d78b1e9fb3be7fa1a
Deleted: sha256:e18463e848e750ca05beef9ed8d2236eaa33a3e0ff6a322fc18fa370e1ae0724
Deleted: sha256:1265bb8e506e113433bf5d6424eccf832e5798d0c83a6e804e92c4a9728db6f6
Deleted: sha256:21b0e49f4243e0bf81b5b54a9f0d4f38d30b580d23e762c28092da8a53ace78d
Deleted: sha256:69ea6641bdff3b5d45ec7b3d6e7c841e5f89791bc83cb0f68f97e3111804b6bd
Deleted: sha256:cfcdb2d688f00c61ab88950972f94d6c6e23379b66276687caecc90b479cd985
Untagged: localhost:33789/kubevirt/virt-handler:devel
Untagged: localhost:33789/kubevirt/virt-handler@sha256:a69c7c0de474d71849a2b8df34e59981fc90ab7ab3d984a09fd4ed2b929df421
Deleted: sha256:b097b917d3c9e084380a204a2204ea0bbd478ea16562de36577cb146842508ce
Deleted: sha256:890ff8366428414b7ee51e2c1b3095d161fc6dfde5997072a66da7b9f21de1a2
Deleted: sha256:117256b2c91913bb4f4f263f665a8d80873f88fb4d3572ad3141fd20cd938342
Deleted: sha256:eb9dc8f76770a80e502b66b2400857823e374a117d22dc50fb97b6d1eaae288e
Untagged: localhost:33789/kubevirt/virt-api:devel
Untagged: localhost:33789/kubevirt/virt-api@sha256:1227d1665953989dd5e0ecd57056ab26e77461ebe5d634d55da7d04e469f70d3
Deleted: sha256:fc2dd2153e87681cc2086f581d670a13aea2d3697af88794f466c26d8412430f
Deleted: sha256:3ea8413f65234a4f2c697e8e976c491fa1d5b9c4e1f8324dfa2fa18dd9b03704
Deleted: sha256:b91cc5e5a97a34ffa654cea095ac8ca4f7a19dc8487a2b045c9328308417d339
Deleted: sha256:1353a93b3383b28dcaba100f6a8799090995c4c8e1a17b6816405958f5cb9bf5
Untagged: localhost:33789/kubevirt/subresource-access-test:devel
Untagged: localhost:33789/kubevirt/subresource-access-test@sha256:96ec94542100fc67cd62bb946cd3ff5cd4b317d88a69528f714f0638988d769b
Deleted: sha256:b4af11d3fee2421dc25fa617c9ddd67be00535b8af0ea5944c7f2042e96fadca
Deleted: sha256:1e9e93b85d38452619851ac1dcb64f77ac84c45e3fc2d7e9e0962e0168df18d7
Deleted: sha256:74c212dc3ba2d4edefbc6787ef8eb8fd1303e19d13375804ad3801162710ac52
Deleted: sha256:9ac820b02abfc79e71262e3e8a58b428714cf65c3cf5630d3ff75af8c0853ddc
sha256:6eacca7072242103a52e09bde728cf8c3c4134c37779287f25cbf1a1b93180b2
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:6eacca7072242103a52e09bde728cf8c3c4134c37779287f25cbf1a1b93180b2
go version go1.10 linux/amd64
go version go1.10 linux/amd64
find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory
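
The excerpt ends partway through make cluster-sync: the stale :devel images from the previous run are untagged and their layers deleted, the build container (go1.10) runs ./hack/check.sh and ./hack/build-go.sh, and the trailing find error indicates that _out/cmd did not yet exist when an artifact-handling step scanned it. Whether the job recovered is not visible in this excerpt. A defensive guard for that kind of scan would be the sketch below (hypothetical; OUT_DIR and the find expression are illustrative, not the repo's actual build-copy-artifacts.sh):

# skip the scan gracefully when the build output tree is absent
OUT_DIR=/root/go/src/kubevirt.io/kubevirt/_out
if [ -d "${OUT_DIR}/cmd" ]; then
    find "${OUT_DIR}/cmd" -type f
else
    echo "skipping artifact scan: ${OUT_DIR}/cmd does not exist yet" >&2
fi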