+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows
+ [[ windows =~ openshift-.* ]]
+ export PROVIDER=k8s-1.9.3
+ PROVIDER=k8s-1.9.3
+ export VAGRANT_NUM_NODES=1
+ VAGRANT_NUM_NODES=1
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
WARNING: You're not using the default seccomp profile
kubevirt-functional-tests-windows1-node01
2018/04/11 08:19:03 Waiting for host: 192.168.66.101:22
2018/04/11 08:19:06 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/04/11 08:19:18 Connected to tcp://192.168.66.101:22
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 31.505036 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --discovery-token-ca-cert-hash sha256:e6f01a33f36fe583c470029c941adce481d6c74228885a16342de190b9767150

clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
node "node01" untainted
kubevirt-functional-tests-windows1-node02
2018/04/11 08:20:03 Waiting for host: 192.168.66.102:22
2018/04/11 08:20:06 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/04/11 08:20:18 Connected to tcp://192.168.66.102:22
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

2018/04/11 08:20:22 Waiting for host: 192.168.66.101:22
2018/04/11 08:20:22 Connected to tcp://192.168.66.101:22
Warning: Permanently added '[127.0.0.1]:33058' (ECDSA) to the list of known hosts.
Warning: Permanently added '[127.0.0.1]:33058' (ECDSA) to the list of known hosts.
Cluster "kubernetes" set.
Cluster "kubernetes" set.
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep -v Ready
+ '[' -n '' ']'
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS     ROLES     AGE       VERSION
node01    Ready      master    37s       v1.9.3
node02    NotReady   <none>    9s        v1.9.3
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:32915/kubevirt/virt-controller:devel
Untagged: localhost:32915/kubevirt/virt-controller@sha256:653504878091af5a95d35b22d000aec4094a84d48370bdc8102df00af10a02ed
Untagged: localhost:32915/kubevirt/virt-launcher:devel
Untagged: localhost:32915/kubevirt/virt-launcher@sha256:65ef552ae361f4cd8bdff77ddeb999f352fe7e11b3ae52f53d1b7617399b0930
Untagged: localhost:32915/kubevirt/virt-handler:devel
Untagged: localhost:32915/kubevirt/virt-handler@sha256:8bbda83ae36abe2dc45877fa2e79afe4502c8b03a511040ccc7fd8601c33874d
Untagged: localhost:32915/kubevirt/virt-api:devel
Untagged: localhost:32915/kubevirt/virt-api@sha256:4674e7dbcd2cbdcce0c977b94cb620f536104fb808becdad9d6f177a9ba2a17e
Untagged: localhost:32915/kubevirt/subresource-access-test:devel
Untagged: localhost:32915/kubevirt/subresource-access-test@sha256:c5055795dd637cc6721e98b079e5428db1f0d68eeb6afb25fe2f395b0a798d40
sha256:032a0ee7c5d920298ff4bc9a196ad4a10de2620223c6572e183a9601c384ae98
go version go1.9.2 linux/amd64
rsync: read error: Connection reset by peer (104)
rsync error: error in rsync protocol data stream (code 12) at io.c(764) [sender=3.0.9]
Waiting for rsyncd to be ready
skipping directory .
go version go1.9.2 linux/amd64
b319f2f33be4e127c265c18ac5501e766cf642dd6b2e644c869d3cd1e8fdc333
b319f2f33be4e127c265c18ac5501e766cf642dd6b2e644c869d3cd1e8fdc333
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && ./hack/build-go.sh install "
sha256:032a0ee7c5d920298ff4bc9a196ad4a10de2620223c6572e183a9601c384ae98
go version go1.9.2 linux/amd64
skipping directory .
go version go1.9.2 linux/amd64
Compiling tests...
Failed to compile tests:
# kubevirt.io/kubevirt/tests_test
tests/tests_suite_test.go:34:5: undefined: ginkgo_reporters
tests/tests_suite_test.go:35:34: undefined: ginkgo_reporters
tests/tests_suite_test.go:37:5: undefined: ginkgo_reporters
tests/tests_suite_test.go:38:33: undefined: ginkgo_reporters
f60dc0d2a7db9244cd382c093a486d2fdf4bd2acc03f7950901e70848376d22e
f60dc0d2a7db9244cd382c093a486d2fdf4bd2acc03f7950901e70848376d22e
make[1]: *** [build] Error 1
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows/go/src/kubevirt.io/kubevirt'
make: *** [cluster-build] Error 2
+ make cluster-down
./cluster/down.sh
f7efa2218be4
eae4397d00ff
b6e45427855c
e2a3a2f69a8a
8dac9d06df5b
f7efa2218be4
eae4397d00ff
b6e45427855c
e2a3a2f69a8a
8dac9d06df5b
kubevirt-functional-tests-windows1-node01
kubevirt-functional-tests-windows1-node02
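
Note on the failure: the build aborts in the test-compile step because the compiler finds no package bound to the ginkgo_reporters alias used at lines 34-38 of tests/tests_suite_test.go. As a minimal sketch only, assuming the alias were bound to Ginkgo's stock reporters package (the actual KubeVirt suite binds it to a project-specific reporters package, so the import path and JUnit wiring below are illustrative, not the actual fix), a suite file where the alias resolves would look roughly like this:

    package tests_test

    import (
    	"testing"

    	"github.com/onsi/ginkgo"
    	// Without an import like this one, every ginkgo_reporters reference
    	// is an undefined identifier, which is exactly the error above.
    	ginkgo_reporters "github.com/onsi/ginkgo/reporters"
    	"github.com/onsi/gomega"
    )

    func TestTests(t *testing.T) {
    	gomega.RegisterFailHandler(ginkgo.Fail)
    	// NewJUnitReporter writes per-spec results to the given XML file.
    	junit := ginkgo_reporters.NewJUnitReporter("junit.xml")
    	ginkgo.RunSpecsWithDefaultAndCustomReporters(t, "Tests Suite",
    		[]ginkgo.Reporter{junit})
    }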