+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release
+ [[ windows2016-release =~ openshift-.* ]]
+ [[ windows2016-release =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/08/10 13:08:37 Waiting for host: 192.168.66.101:22
2018/08/10 13:08:40 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/10 13:08:52 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 30.504799 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:2d9d5c74e6e32bc57978ad13b570963593daa0f0698e2b4bb3c6fad179dfc920

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/08/10 13:09:38 Waiting for host: 192.168.66.102:22
2018/08/10 13:09:41 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/08/10 13:09:49 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/08/10 13:09:54 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    48s       v1.10.3
node02    Ready     <none>    17s       v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    50s       v1.10.3
node02    Ready     <none>    19s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33605/kubevirt/virt-controller:devel
Untagged: localhost:33605/kubevirt/virt-controller@sha256:7ace1747cc522b3c6e0cfe95171fe1d8a7bbac87b2ab20ddb02211e7706ae848
Deleted: sha256:2a3f10bf2a42dd04d68c411dec9d40e057bfe1bba1e20a312a82990854096772
Deleted: sha256:03ceb732d6598b079e3d2a41de6a799b0280cb632489888bc45cd097a6cc1ecc
Deleted: sha256:f0ed108b4047bdcb0fb2a175f8970cdd9bdf94990e43134cc71c06cedc6b594f
Deleted: sha256:20e3144f1bc0fc540268cc0880cf07a2315581866d97e12bf46a9fb88e1e9126
Untagged: localhost:33605/kubevirt/virt-launcher:devel
Untagged: localhost:33605/kubevirt/virt-launcher@sha256:f88c619c9859c20a9d652e88e0ec5d091a9b78f5e1149b634c068d58c570040c
Deleted: sha256:cf637e84860d5115154c64a6c811f3ecef7c3f9cb64c6246e7e1a19ba3c35c82
Deleted: sha256:4c34952fd36a1bc554a4b8412aa044d266ddb1f801af617b41f51ab29176a96b
Deleted: sha256:1e7163b6b6c2a6c6da48d2e5d0561746842e85dc545e88569f0ffef755260799
Deleted: sha256:900020689b0cbdccabe69642fd1441a654fcfa27b2ec04faff76be6f5ab2e702
Deleted: sha256:63a0fba4196482eea22e9e0777f97a21d07d55f1d965ab048f49f8ed0ef23c49
Deleted: sha256:8cc8a037d516ecc92e71ab1339d2580c78b81defd7dc07d6ec94a6cecca939c9
Deleted: sha256:3df5ea0be58a49c4f1fa0b999601cbee9e885425a0874691c2a85b021b5e5c4d
Deleted: sha256:a98ea73a1458e36f04fc9474f7d0e0e192ef58533ec180ba63c92eba7fcac9f9
Deleted: sha256:8e48fe1944ae3be6b5b9f65a0dc2739b981f0954b4c9d1a0051106b52c02c7c0
Deleted: sha256:1a736c79913a7ef0f7481d2f17c5f1a146453c8cd547ae5af44cea92649a13ff
Deleted: sha256:eaa8fa21bf491942bdd1d2d8b0674e1e4a608de7af60616f34cb57af4752c121
Deleted: sha256:ac1168b8125678341893f356faf3d48372f572044ad3ed79eebe654ecf4cdf63
Untagged: localhost:33605/kubevirt/virt-handler:devel
Untagged: localhost:33605/kubevirt/virt-handler@sha256:f6bdc486a94c4d82408000751996ba26b47be19053438d62c21f6585de16bfae
Deleted: sha256:bf14e0858b7633affb3a8437559b911e36db47503ed146d2ddbe914c0e2edf48
Deleted: sha256:c0f0031ddbcef667c4157a7bdbfce849ca39764ea06d84476daebceebab11662
Deleted: sha256:90115fbcf16d6a0be2aa2c8caf35a3701d1e7970b7c8e3941fc380e65f9bd3f1
Deleted: sha256:484081088ce5a7541d30ff078378798e954fc488b97d4eef797c70af01167438
Untagged: localhost:33605/kubevirt/virt-api:devel
Untagged: localhost:33605/kubevirt/virt-api@sha256:a858cd35e5cf07b0fa517dc87318a5f4f764969908bddfcfbb95af5a4c05aad8
Deleted: sha256:311e64ee0c413b269f05717a68f981db618c818c0bdbdddff5c60154c8c111e7
Deleted: sha256:9c5bf34878d90df73b18b9a8b933be2668a1c510ad8130b5318aeb036c104a55
Deleted: sha256:260a2203939bbb309bc9fe27f296efbbb6a0b3edbb23c33c1fd218b2d9d3e526
Deleted: sha256:2cf4e5f356f1d433beb47a411dad075a5d2f5d41b9663528193c110d5115b0fd
Untagged: localhost:33605/kubevirt/subresource-access-test:devel
Untagged: localhost:33605/kubevirt/subresource-access-test@sha256:a41cc02612b0ed3ace684370f6069ee5d4826ed1e7d38d9b34a401c5055f4ee2
Deleted: sha256:ac94e425316c6dd06506e6d728f28fd25795bc52f2554e56eb126a5fffe8e1db
Deleted: sha256:a8117b4a87ef1982ddcfd7470bd4d8ee5536e23bc977d8ade739d0e10cfb39a2
Deleted: sha256:b2b653f6ec6193bb8b5d6d24424a3eb80b1aaf90359dedfb2f149faf061e917b
Deleted: sha256:ef4311ea6ac70992b466b21b8d10e0c91cc627d667abc796c46cd8c8cf75e06a
sha256:0bec083f1c9e66baa107940ad909778a562a54db14b6a64ab117f1897e0697dd
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:0bec083f1c9e66baa107940ad909778a562a54db14b6a64ab117f1897e0697dd
go version go1.10 linux/amd64
go version go1.10 linux/amd64
find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory