+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev
+ [[ vagrant-dev =~ openshift-.* ]]
+ export PROVIDER=k8s-1.9.3
+ PROVIDER=k8s-1.9.3
+ export VAGRANT_NUM_NODES=1
+ VAGRANT_NUM_NODES=1
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
kubevirt-functional-tests-vagrant-dev0-node01
2018/04/09 17:01:27 Waiting for host: 192.168.66.101:22
2018/04/09 17:01:30 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/04/09 17:01:38 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/04/09 17:01:43 Connected to tcp://192.168.66.101:22
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 28.004950 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --discovery-token-ca-cert-hash sha256:e8693244c290e116535a112ccddba6a359ae862a4df5e6ceac67ffbd0343493a

clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
node "node01" untainted
kubevirt-functional-tests-vagrant-dev0-node02
2018/04/09 17:02:24 Waiting for host: 192.168.66.102:22
2018/04/09 17:02:27 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/04/09 17:02:35 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: connection refused. Sleeping 5s
2018/04/09 17:02:40 Connected to tcp://192.168.66.102:22
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
2018/04/09 17:02:43 Waiting for host: 192.168.66.101:22
2018/04/09 17:02:43 Connected to tcp://192.168.66.101:22
Warning: Permanently added '[127.0.0.1]:32966' (ECDSA) to the list of known hosts.
Warning: Permanently added '[127.0.0.1]:32966' (ECDSA) to the list of known hosts.
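The `kubeadm join` command printed above carries a bootstrap token (`abcdef.1234567890123456`, the throwaway token the dev cluster is initialized with). As a hypothetical pre-flight helper, not part of `cluster/up.sh`, the token can be checked against kubeadm's documented `[a-z0-9]{6}.[a-z0-9]{16}` bootstrap-token format before it is passed to `kubeadm join`:

```shell
# Hypothetical helper (not in this repo): verify a kubeadm bootstrap token
# matches the documented format, six lowercase alphanumerics, a dot, then
# sixteen lowercase alphanumerics, before attempting a join.
valid_bootstrap_token() {
    echo "$1" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'
}

if valid_bootstrap_token "abcdef.1234567890123456"; then
    echo "token format ok"
else
    echo "malformed bootstrap token" >&2
fi
```

A malformed token makes `kubeadm join` fail only after it has already contacted the API server, so a cheap local format check like this can fail faster in CI.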
Cluster "kubernetes" set.
Cluster "kubernetes" set.
++ kubectl get nodes --no-headers
++ grep -v Ready
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n '' ']'
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS     ROLES     AGE       VERSION
node01    NotReady   master    31s       v1.9.3
node02    NotReady              5s        v1.9.3
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:32795/kubevirt/virt-controller:devel
Untagged: localhost:32795/kubevirt/virt-controller@sha256:97ab3a3cbb299324fd928cb5b740cadc04134401d185574570d5a8f34cb5a30b
Deleted: sha256:8c7c90ca345ed85f65e0c7a74337981d3e509d0f9942c9be9a4875bdc9b619e5
Deleted: sha256:75436115a19ff086f52b8b0b067d84b001ea8d7ba82fb36749d08fd74976d061
Deleted: sha256:23d91011658eaeadfedaac703f29316e8c670a72cc20388a28de87b226628924
Deleted: sha256:6c2678ff33901573012806fc96d48e0daa869fa74aaa8afaa774d3985e7c282a
Untagged: localhost:32795/kubevirt/virt-launcher:devel
Untagged: localhost:32795/kubevirt/virt-launcher@sha256:f78397650d5b4f0575f71ad6339f76773b850ff0445944a024fb4a4140e4e08e
Deleted: sha256:20c5dad3006ced304ed7d0698cadd77cf996b880efdc1104cbcdbf0f58b5c659
Deleted: sha256:f84a6b9d3f78f0e7fbbcb06c0ceba9a5c00859248140df2bb457f540656527a4
Deleted: sha256:73e0c2631e12d8b68150411fa69ecd618ed6c195b0b44eed836478e2f609244b
Deleted: sha256:4e47eb6f4a1d5cb52e45f4313839b0dd845a3b0959a572c72568509eaa650468
Deleted: sha256:c48b073c02e5e98a1c9faccd6f17ba864ad383d0229d6ba67d3d761a973902b8
Deleted: sha256:08febe7fdf52e94ffdd55d26fbe475dcdefb7fab915cf14ac274471acde9a626
Deleted: sha256:89508fb4415f7ba5aa9ce383e213875b025a8ef256bdcedc6df5fc2fd2667b03
Deleted: sha256:e32750fce1e083960709b3d60e72a03a87cf28ae6c55a01730396bae41268200
Deleted: sha256:3b1432c7daf2d62675b4273a5c4f57a514668a6e564696acb78aa01077d47285
Deleted: sha256:d7294148b9a85ae18cfca5f02b890579319763f6b17daed5eae9a35a9ec641fd
Deleted: sha256:978c404009e179dfc8437cdfc502f55aa3750e4a9545b8676639e719641e7c1c
Deleted: sha256:8b47ad130efcda5aa8520ef276060d2d4ed001979091ee091050b0e6dde5aef8
Deleted: sha256:576e0f44f2ffd1ea241fc5c7450f636977e24afd7743a24a5c38c86ffa543a18
Deleted: sha256:04107da10c0aa761265e4b6260db824edc0d88dffd248f65e4a7370843d34e69
Deleted: sha256:249711b072d4b5cd6b85150b4ce61fb9c04211622e733cb81e81b721469a7eca
Deleted: sha256:75bb5d6e83d91cef6b3fe03bc3990364bca480412129b5a553280eb76fcc9046
Untagged: localhost:32795/kubevirt/virt-handler:devel
Untagged: localhost:32795/kubevirt/virt-handler@sha256:e861c9de77ac9d7e701771280ab357c12128370612582e7514b77ec3164b1f56
Deleted: sha256:bf90c99a05d5e6c2e7a4ebf7c927c9c8f637bead594c262dfc6976f9b6a77e28
Deleted: sha256:ac8ef17dd634e6216b53ae64c12cf54d07174d8bc193de17a15a3fa9827441ce
Deleted: sha256:0b06ed4f002b9444437bf760eeed0999943332ae4c7518a5b28e2cc3422d2b3e
Deleted: sha256:deaefa10a1cd2777b6887656fb83dc553def1ae4867b71b2d999c308ae725375
Untagged: localhost:32795/kubevirt/virt-api:devel
Untagged: localhost:32795/kubevirt/virt-api@sha256:9fa6c4d0e1c3708aca53e8c8e8154ff1518b0fb8c464c19f7e5cca55875533aa
Deleted: sha256:4084ab4bd3921c3e176a3703a036fd073566a8f9a16e7bcb95a84c2bcfd47b36
Deleted: sha256:80965468947cece21ec36acbaff5d622c046b8818bf933d7d1a1390c694f6ab0
Deleted: sha256:fd5ba9f9709b474e28d97a8019c50bd703ff4d282f5af4d143185210d108250e
Deleted: sha256:8f5aa4d6e32fb79ec8602d7e8b96a27723655a8731d4950d04297c12caeefef0
Untagged: localhost:32795/kubevirt/subresource-access-test:devel
Untagged: localhost:32795/kubevirt/subresource-access-test@sha256:25fed2337b2605c0e5d5489d7702861857c14cf1bb3412943e0fcb14484261fb
Deleted: sha256:927fd8a2654f31fe18eaa07dd98bca49f4d7deaedc135fd47ca1dbe5218ef428
Deleted: sha256:0a631ccb5dbf329611865f873c2f29ffa2b4b4bb04728027b82ea8cf144e59cf
Deleted: sha256:d872ab366d955465d368250fe8988e65dfdb448bcbda8a211f3b59b29dc46afa
Deleted: sha256:c0f6a2142d49efac5c75e74a2be52aff2b20c146ce81bc696f5b662daee52199
sha256:c9cd67dd05efdb07ffa24a86eeeca6419b8df7c6db7187fd750b9f83be5ac0b4
go version go1.9.2 linux/amd64
skipping directory .
go version go1.9.2 linux/amd64
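One subtlety worth flagging in the trace above: the readiness check pipes `kubectl get nodes --no-headers` through `grep -v Ready`, and `-v` drops every line containing the substring `Ready`, which includes `NotReady`. That appears to be why the script prints "Nodes are ready:" while the very next `kubectl get nodes` shows both nodes still `NotReady` (harmless here, since the cluster converges shortly after). A stricter sketch of the same check, with `kubectl` stubbed to replay the log's output so it runs without a cluster, could compare the STATUS column exactly:

```shell
# Stub kubectl with the node table from this log so the sketch is
# self-contained; a real check would call the actual kubectl.
kubectl() {
    printf 'node01    NotReady   master    31s    v1.9.3\n'
    printf 'node02    NotReady             5s     v1.9.3\n'
}

# Keep only rows whose STATUS column is not exactly "Ready".
# Unlike `grep -v Ready`, this does not accidentally discard "NotReady".
not_ready=$(kubectl get nodes --no-headers | awk '$2 != "Ready"')
if [ -n "$not_ready" ]; then
    echo "Nodes not ready yet:"
    echo "$not_ready"
else
    echo "Nodes are ready:"
fi
```

With the log's snapshot as input, this version correctly reports both nodes as not ready instead of declaring success.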