+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ [[ k8s-1.10.3-release =~ openshift-.* ]]
+ [[ k8s-1.10.3-release =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/07/15 07:47:59 Waiting for host: 192.168.66.101:22
2018/07/15 07:48:02 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/15 07:48:10 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/15 07:48:15 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: connection refused. Sleeping 5s
2018/07/15 07:48:20 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
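The `trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP` line above guarantees that the cluster is torn down even when the job fails or is interrupted. A minimal sketch of that pattern, with placeholder commands instead of the real Makefile targets (note that SIGSTOP can never be caught by a process, so listing it in the trap is inert):

```shell
#!/usr/bin/env bash
# Sketch of the teardown-on-exit pattern used by the job script above.
# run_with_cleanup runs a (simple) command and guarantees a cleanup
# command fires afterwards, even if the command fails or is interrupted.
# Both command strings here are placeholders, not the real targets.

run_with_cleanup() {
    local main_cmd=$1 cleanup_cmd=$2
    # The EXIT trap fires on normal exit and after INT/TERM; SIGSTOP
    # cannot be trapped, so the original script's SIGSTOP entry is a no-op.
    # Run in a subshell so the trap does not leak into the caller.
    ( trap "$cleanup_cmd" EXIT SIGINT SIGTERM; $main_cmd )
}

run_with_cleanup "echo cluster-up" "echo cluster-down"
```

Because the cleanup runs from an `EXIT` trap rather than at the end of the script body, it also fires when an intermediate step exits nonzero under `set -e`.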
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 23.007006 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:c53348331997824f7c91746c70bdfe05eb45c34237b42bc281576635e54ebd2f

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/15 07:49:01 Waiting for host: 192.168.66.102:22
2018/07/15 07:49:04 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/15 07:49:13 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/15 07:49:18 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
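The "Waiting for host … Problem with dial … Sleeping 5s" messages before each node comes up are produced by a fixed-interval retry loop around a TCP dial (the real provisioner does this in Go). An illustrative shell equivalent, with a generic check command so the retry logic itself is the point:

```shell
#!/usr/bin/env bash
# Illustrative shell version of the dial-retry loop seen in the log.
# wait_for retries an arbitrary check command with a fixed sleep until
# it succeeds or the attempt budget is exhausted; the attempt counts
# and the port-22 example below are assumptions, not the job's values.

wait_for() {
    local attempts=$1 delay=$2
    shift 2
    local i
    for ((i = 1; i <= attempts; i++)); do
        if "$@"; then
            return 0
        fi
        echo "Problem with dial. Sleeping ${delay}s" >&2
        sleep "$delay"
    done
    return 1
}

# Example: wait until port 22 on a node accepts TCP connections.
# wait_for 60 5 bash -c 'exec 3<>/dev/tcp/192.168.66.101/22' && echo Connected
```

A fixed 5-second sleep matches the timestamps in the log; a production loop might add a timeout per dial attempt as well.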
[discovery] Trying to connect to API Server "192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    46s       v1.10.3
node02    Ready     <none>    18s       v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    47s       v1.10.3
node02    Ready     <none>    19s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
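After node02 joins, the script verifies readiness by listing nodes without headers and grepping for `NotReady`; an empty grep result means every node is ready. A sketch of that check as a standalone function that reads the node table from stdin, so it can be exercised without a live cluster (the real script pipes `cluster/kubectl.sh get nodes --no-headers` instead):

```shell
#!/usr/bin/env bash
# Sketch of the readiness check performed above: fail if any line of
# the headerless `kubectl get nodes` table reports NotReady.
# Reading from stdin is an adaptation for testability; the function
# name nodes_ready is hypothetical, not from the original script.

nodes_ready() {
    # grep -q exits 0 as soon as NotReady appears anywhere in the table.
    if grep -q 'NotReady'; then
        echo "Some nodes are not ready" >&2
        return 1
    fi
    echo "Nodes are ready:"
}
```

Usage would look like `kubectl get nodes --no-headers | nodes_ready`; the original script wraps this in `set +e` / `set -e` so a transient NotReady can be handled rather than aborting the job.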
Untagged: localhost:33588/kubevirt/virt-controller:devel
Untagged: localhost:33588/kubevirt/virt-controller@sha256:8f073078dc39fb9e69eb29ba53ca2b8b477458b4d4e78b21454ca05438eb1c91
Deleted: sha256:3ba574ef030871990e195e257d77d75a08529f3ff612e0ca68bbe305f2f00e23
Deleted: sha256:bde63a1b149f3c4d29108907f1aafd99962c1ecebf95dba35155faa854272e41
Deleted: sha256:883a877ae861a19d7eaa7e7b51177c3e0da70893743ebd4ced62b5c3c291a7e5
Deleted: sha256:fe3a4a1d3a97331308e4a0381695480113c606fbb6acd71e31d0533d8f37de12
Untagged: localhost:33588/kubevirt/virt-launcher:devel
Untagged: localhost:33588/kubevirt/virt-launcher@sha256:9f49cd809a831af2805b3abf5b8d77d15b69a9b1992a139a4f834ae737bef4b0
Deleted: sha256:ccb408e35b85237765079c05770e9ea7b63a67b9307f9d7b99dff1c4c2b2b9b2
Deleted: sha256:717a30bd78ff0b8c2c1a0a348a7f8de01339f2ee664f7d8d8bd2b3f7ff050055
Deleted: sha256:82ebed982abd16c0976d9f164f016d7395cb6341d02f0aeb35c16a4ec7312207
Deleted: sha256:ef0d1e9cbdff0a49493520dc4e7b59bbe47ff68ee7c867293ba921e1f9da57e7
Deleted: sha256:3673259b75c8bdf02de9bcd26504772287f5ee43a17bcdc2ba6a14cc80d8befb
Deleted: sha256:9059daec1189fa3d1d2901823f894add87bd1a5edc754262cb4c99b76038cbac
Deleted: sha256:1dd50be34eeef58c6f678c848c8fb64a7dd7bbad105bc9c065553524dace42a4
Deleted: sha256:e8f91be7440972a90f72cccdbcdf127b47b610573df03edda2e3e187e338994e
Deleted: sha256:e132aa812b9604c532785cc2b7f1603a75a2c24fddcc7b9bd9bc0172408d9dea
Deleted: sha256:0a18a9678e13a79c945148d3825a9d6b39bbd161d03890457f76f9c5073f64bd
Deleted: sha256:eb1c09955552c029c3f50611093b91a431d8e82172172e7777993de8b67785e8
Deleted: sha256:6f2702e9df67fcc79d39e5ec2c4f3613af74b35f1dcaa3ebab98ebdf826ad1e0
Untagged: localhost:33588/kubevirt/virt-handler:devel
Untagged: localhost:33588/kubevirt/virt-handler@sha256:0c64cf8a36a25abe2230c553d5d1878a9f423868092cbaefce33b5d94ae02df3
Deleted: sha256:1a13fce15ca91c93ecb47768927815a3806b10008ef28285bf0ab224ef927684
Deleted: sha256:dc9066ef64ee5ef831d33002d8b42739824583da7d7d83469b42b7190c5212cf
Deleted: sha256:ed094ddddc38dbea6528b254941ca74d3e72f0ddf0ac849ef1729ee31a9c5c36
Deleted: sha256:1dc1ed4fac0d356613dddb2f2024a0c903b12c4c60b07189b7e5047a8bbd2f34
Untagged: localhost:33588/kubevirt/virt-api:devel
Untagged: localhost:33588/kubevirt/virt-api@sha256:c1c79a9afeadf705af2afd10e804aae552dd2eb9679aceaed6f925cd924824b8
Deleted: sha256:db50850b5403154a5f4cb82cf9b89da587cd6dba630d34b9295591cc3ee70e8c
Deleted: sha256:6273b009584aa8b35b620e12e22beb4f5a8122da37c0867cbbf3029aa9d66d7c
Deleted: sha256:6095c09d167e15336199cb417f6ce11ab4f29dfc2cc071b7f4da5c07b91e4d88
Deleted: sha256:d7f490f2dd40fefa44a62b7d62e7ad5514b87fb8bf445a54477cc5adb1ae919e
Untagged: localhost:33588/kubevirt/subresource-access-test:devel
Untagged: localhost:33588/kubevirt/subresource-access-test@sha256:8bdf235462793087a77499df039dd6c14c7ffdb19b3e50d6902f92f1ff80b1b5
Deleted: sha256:48f50e80e6bfc9b3f53fbbac08bc18264ddb8ddc1c41de8d8f8e53211fccdd40
Deleted: sha256:7c0d6ef2a2c73ab148fb14d98c21d4a812b0efba261d9da3abc96a848db58b14
Deleted: sha256:91d5413ad7a9014aec01cba3583215de378189b77c6d3b6873d5bae2c392b317
Deleted: sha256:805d8f34778546dac5e77a16bdf74aa2ba4097b07b47a61967339902179a64e3
hack/dockerized: line 17: 13061 Terminated              docker build . -q -t ${BUILDER}