+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ [[ k8s-1.10.3-dev =~ openshift-.* ]]
+ [[ k8s-1.10.3-dev =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/06/29 14:57:37 Waiting for host: 192.168.66.101:22
2018/06/29 14:57:40 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/06/29 14:57:48 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/06/29 14:57:54 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: connection refused. Sleeping 5s
2018/06/29 14:57:59 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 23.505999 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:7a65148a1fa898092cd66a4122302861bae1cfda0aa3b3c3bee55e89a212a28a

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/06/29 14:58:41 Waiting for host: 192.168.66.102:22
2018/06/29 14:58:44 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/06/29 14:58:56 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 34s v1.10.3
node02 NotReady 10s v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ grep NotReady
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n 'node02 NotReady 10s v1.10.3' ']'
+ echo 'Waiting for all nodes to become ready ...'
Waiting for all nodes to become ready ...
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 35s v1.10.3
node02 Ready 11s v1.10.3
+ kubectl_rc=0
+ sleep 10
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    45s       v1.10.3
node02    Ready               21s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
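The trace above polls `kubectl get nodes` until no node reports NotReady, then re-enables `set -e`. A minimal sketch of the same poll pattern, with the node listing abstracted behind a `get_nodes` function (a hypothetical stand-in for `cluster/kubectl.sh get nodes --no-headers`, so the sketch runs without a cluster) and a bounded retry count instead of an unbounded loop:

```shell
#!/bin/bash
# Sketch of the readiness poll seen in the log above (assumptions: the caller
# defines get_nodes; the real script sleeps 10s and loops until success).
wait_for_nodes_ready() {
  local max_tries=$1 tries=0
  # Loop while any listed node is NotReady.
  while [ -n "$(get_nodes | grep NotReady)" ]; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$max_tries" ]; then
      echo 'Timed out waiting for nodes' >&2
      return 1
    fi
    echo 'Waiting for all nodes to become ready ...'
    sleep 1
  done
  echo 'Nodes are ready:'
}
```

Note that `NotReady` also matches `Ready` as a substring only in the other direction, so `grep NotReady` is safe here: a `Ready` row never contains the string `NotReady`.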
Untagged: localhost:32986/kubevirt/virt-controller:devel
Untagged: localhost:32986/kubevirt/virt-controller@sha256:c391690b6519e1da016073fd1b72b8d1417097c3b70b386765b5260be2072e45
Deleted: sha256:b3fe752bef0595cb38119e5a8b6b037af83e75e74e5c8d132f437edf1d9375eb
Deleted: sha256:7b613248564cc1abe4b3875346062672d82dc5e32c49c881b63500503bdbb956
Deleted: sha256:ef975f49419f9eebface88e6123b5ea85410ec55175d693806dcef2d31db60c4
Deleted: sha256:ee5779f6b38a6d279953368e741aca0280a5e45d1b9266d29860d8cd106366f0
Untagged: localhost:32986/kubevirt/virt-launcher:devel
Untagged: localhost:32986/kubevirt/virt-launcher@sha256:f62d8d3334bfd51727f278f97c068a0dbd59b3ead00effd09500c32bfb47f1be
Deleted: sha256:fba269449b2706f65c14e1cf73bbcd4f3627f40e826510dd3e69436ac9176448
Deleted: sha256:8ff4b60e63eb64bb27f9c1f39af34906a8ed92213585b4e2326bbd78a9b1f824
Deleted: sha256:32310e7c87306a2fa89a0e08d29deb6dc77aeaa973ed5dc493b3e5b9070de871
Deleted: sha256:3591500d3d2ee6d70026c94f5d916508a330a12dfa4cb5fcf1c60fc514c2b46e
Deleted: sha256:0392a7db11e33289006ced83525baf5531e234506d559166f419728d6bc6e525
Deleted: sha256:78b3e341b926f24ca413d57578b0cbdf46d2a83ec2984d92bc5e3a68ae952dc0
Deleted: sha256:c7f3ab2eee1c3914d33711c5326ed38e73b00adbf8cbe195318662beea7a4072
Deleted: sha256:f96571a540122a1231adcecbb3f89d3029f64ad71ea95e04e21b8a8504cab133
Deleted: sha256:7518204ec47ad9f7f314d8fd21944dc9f868382913a749790efdaa2b36340d6a
Deleted: sha256:58869caed2c8584d66355459556b968e427d692a6373d825b0a2915e7bda76f8
Deleted: sha256:87139608d13746a3ba01f52a6a94e755d5242622475177e57a930bb381d8b07b
Deleted: sha256:969ed45bd585c160d8455631de8c1b52655884bceb5c46dd65959f385a61e3a8
Untagged: localhost:32986/kubevirt/virt-handler:devel
Untagged: localhost:32986/kubevirt/virt-handler@sha256:b4b1dc0e41a9d8f65fe350dff20594125840dfbfea863a8d8e92071cb83d3bba
Deleted: sha256:6637973e1b468c640ea4f2d117f2e94e4d0c216f32ef3e0f097649577f10050d
Deleted: sha256:dcd28beb8658f7a06f8012bb99666769f41489701b8632fd3da4b3f14839069d
Deleted: sha256:ea68f2523a7847c2b106f7d2b36be5e767a64fa9a2435b8a119b2b4cb75e1d2d
Deleted: sha256:8619e51e32f168236286bc9dc3c1cb2801ddc9106b2a03949cdc3c3781c9fa59
Untagged: localhost:32986/kubevirt/virt-api:devel
Untagged: localhost:32986/kubevirt/virt-api@sha256:a2d3b93d97bf836b3f69828546944c9330dd525b81a0e2efd19037367cf1b346
Deleted: sha256:15b0c0e720e8c3108ea9eb090f0988ee360bca459823e39d407f1c68dde640b9
Deleted: sha256:1b0c401715cd7248f50b1bec865b80d2571dc302b21eeb90b67102a12c152276
Deleted: sha256:b117c6b0ad0e270acfd45b1c80f8f23a45061233423aeb68dcb8b2409018b77e
Deleted: sha256:2c1db157a3a05bf6cad5a15f6ac18361ab41f65c2fa6c507ec111bc6784730a9
Untagged: localhost:32986/kubevirt/subresource-access-test:devel
Untagged: localhost:32986/kubevirt/subresource-access-test@sha256:5f10ad57ce73572f4406608dfeee66ca9ab8baa0ea0016818553f06ae23c36a9
Deleted: sha256:d5940b71c0d4a501d9d377d3c23e9bd56aa2415cd9f3ef5f230a2c37940cd499
Deleted: sha256:b3af442eba326bd90d49dcce8e55ffd5b889e650981a1d0103c48564c0cb43a5
Deleted: sha256:df723674cb77c9b929bbce5be3c9695513a23156bdd9e5ecfc445051eab23f60
Deleted: sha256:90b6ede0c21bc653203b394f1524c7c75802cf3d1dca81f0a4c69664e539125d
sha256:1575e945647247274535454c9f5e6a2856478def9205830e22dac12c068f40fd
go version go1.10 linux/amd64
Waiting for rsyncd to be ready
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:1575e945647247274535454c9f5e6a2856478def9205830e22dac12c068f40fd
go version go1.10 linux/amd64
go version go1.10 linux/amd64
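The `Untagged:`/`Deleted:` lines above come from removing the previous run's devel-tagged images from the local registry (`localhost:32986`) before rebuilding. A hypothetical sketch of the selection step only: filter an image list down to a registry's `kubevirt/*:devel` tags. Piping the result to `docker rmi` (as the build effectively does) is left to the caller, since no Docker daemon is assumed here; `filter_devel_images` is an illustrative name, not a function from the build scripts.

```shell
#!/bin/bash
# Read "repository:tag" lines on stdin and keep only the devel-tagged
# kubevirt images belonging to the given registry, e.g.:
#   docker images --format '{{.Repository}}:{{.Tag}}' \
#     | filter_devel_images localhost:32986 | xargs -r docker rmi
filter_devel_images() {
  local registry=$1
  grep "^${registry}/kubevirt/[^:]*:devel$"
}
```

Filtering first and deleting second keeps the destructive step inspectable: dropping the `xargs -r docker rmi` stage turns the cleanup into a dry run.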