+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ [[ k8s-1.10.3-dev =~ openshift-.* ]]
+ [[ k8s-1.10.3-dev =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/06/27 15:29:40 Waiting for host: 192.168.66.101:22
2018/06/27 15:29:43 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/06/27 15:29:51 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/06/27 15:29:57 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: connection refused. Sleeping 5s
2018/06/27 15:30:02 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 26.004612 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:077e9fd63a00a4bdceb0b7c3d4742ee25684fe60f071719dafda5f20491b4fcd

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/06/27 15:30:42 Waiting for host: 192.168.66.102:22
2018/06/27 15:30:45 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/06/27 15:30:57 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 34s v1.10.3
node02 NotReady 9s v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n 'node02 NotReady 9s v1.10.3' ']'
+ echo 'Waiting for all nodes to become ready ...'
Waiting for all nodes to become ready ...
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 35s v1.10.3
node02 NotReady 10s v1.10.3
+ kubectl_rc=0
+ sleep 10
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ grep NotReady
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    45s       v1.10.3
node02    Ready               20s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
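
Note: the readiness wait traced above (set +e, repeated "kubectl get nodes --no-headers" filtered through grep NotReady, sleep 10) corresponds roughly to the loop below. This is a sketch reconstructed from the trace, not the actual cluster script, which wraps kubectl in cluster/kubectl.sh:

    # Sketch of the node-readiness wait seen in the trace above.
    # Assumes kubectl is on PATH and pointed at the test cluster.
    set +e
    kubectl get nodes --no-headers
    kubectl_rc=$?
    while [ $kubectl_rc -ne 0 ] || kubectl get nodes --no-headers | grep -q NotReady; do
        echo 'Waiting for all nodes to become ready ...'
        kubectl get nodes --no-headers
        kubectl_rc=$?
        sleep 10
    done
    set -e
    echo 'Nodes are ready:'
    kubectl get nodes
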
Untagged: localhost:34093/kubevirt/virt-controller:devel
Untagged: localhost:34093/kubevirt/virt-controller@sha256:1b741cd3467524924f0075c0a582ada7750080ffb41c79ae6e2b00c6fdf32317
Deleted: sha256:e3905c6fcf8ca8412292eeb531bb5c0e7a16482acf53fe46d1e3a70833dea272
Deleted: sha256:eb37f01ea89b88ebe439b3bbee9d87579ca98612c2b1523b94de67aa54bfc944
Deleted: sha256:46da50c5901eb7401676a47389755e658d76f6f47877ba864f18c3decc746857
Deleted: sha256:ca2095048a827cea3282405d6e3ccc0e07b8278508816bfd49b42cdf8eac2804
Untagged: localhost:34093/kubevirt/virt-launcher:devel
Untagged: localhost:34093/kubevirt/virt-launcher@sha256:74daebe290e68b212b24c826db546e5e13b62dab50692a95882d8534420a8f58
Deleted: sha256:cd9f5aaeef137974897742d9c8c5befb2464ab6bafd455ca1fb6a0f3439f7c32
Deleted: sha256:c483dd4cf979542f6b55a446edd78b5a132abb850c5f35b27ba50c7a5a4d0bc2
Deleted: sha256:88e7531dd91e4547582298f82d4b94e5df30af92a5767cdd5d7f7abc1dfdba71
Deleted: sha256:415fcb279b99d7ea17ec3b9d7e353f041cb6493ed358a5bb57dd14e810fcd363
Deleted: sha256:8cec1b217f7894099b71bfe3b27641fa34184d7253738601631ec16970757c40
Deleted: sha256:4b8d3368900eda672efb168ee3720b3985bcc56c0bac9ebb94103d2f4b4449ea
Deleted: sha256:5b3e56994c0bffa8285056551045cc9e43aacd47a9ab25d546e93044ff1ebb12
Deleted: sha256:ef515b7f1e9522f7d7e7870c790079de0019d67daf61f515485c3e017ba678a6
Deleted: sha256:f5b32ba64f6fdbbcb2a2d7a677975bd46912d85fa7e56a6d98bc59088a1ab023
Deleted: sha256:79b71debc5d15450118e08054f18a6af4d545371ba6e53c8e7b9aaf1aeb8b712
Deleted: sha256:28dbcbbceafaf2f53ddb86498a9e111061d3ba8f71cab13f3f2b2de31492dc6f
Deleted: sha256:c9029869f2f664b4f89c61e0018c36ed579e3a5b435d2015e2b5937b193ee93b
Deleted: sha256:9d807204064553784f9fb47f00ef1d786ece9c182e1114d1d61d30810df5ea67
Deleted: sha256:57cb87207d0f766d22e49c626c9b3cdc770ebba3db1c53cdc27879558ef44b69
Deleted: sha256:a62cbe37d0cddfa632e6ba011c0f840ae55c910c6013f7176ed946d2ffe409fb
Deleted: sha256:e75785ca48f1de4e62f5a67961005c5b9a05bd98f79f828687a3c684a27a26b7
Deleted: sha256:63bd2718a09a7036f3a2793a857ab9437406e0b2bf548134fa39eaf79c9b2b6d
Deleted: sha256:3ef2a987fb5ee51116f9b093cfc8ea90409be4b964caa7886806605f9de81e1f
Deleted: sha256:031e6b0a2be6f7e48a27f6dbe1015e08dcfc70b16f54fabb28f43530418be7d0
Deleted: sha256:dadef4082f98d80545ac0573746b16ec1a4081671dd21e53af2843e5ea96b2e7
Deleted: sha256:6829155ca247baf049e7c0db2882b72bf7a86f27b5e5c419eca8685cd674549b
Deleted: sha256:68bba7dc9b54811cf282dda0cf8274f64aed28ba548ec6590e6d320e6f36b7d6
Untagged: localhost:34093/kubevirt/virt-handler:devel
Untagged: localhost:34093/kubevirt/virt-handler@sha256:4929dea52fd491f408aa319f67df7acfe4406b5c3d82c7355829cfb777bb7774
Deleted: sha256:abdec1af787439ad1d1c18d8c1c1b6de4f589f37d0fd8a315c3c4a3e87d329d0
Deleted: sha256:42a1ca42b97b30f0c2d5f4f1a7f9fabd76950f575d6965e74d232e45884fa136
Deleted: sha256:6e5f976ddd605feed95530e815a15b98eafdd1d76fc2a31623aa09417b8aa73b
Deleted: sha256:8dc8c6318b584bb06a8fa5325ff1b6550bae18ddb6edb19b792695dfc642a35c
Untagged: localhost:34093/kubevirt/virt-api:devel
Untagged: localhost:34093/kubevirt/virt-api@sha256:e0c3d4765cb72512498ac64c6ef452d1cee58d8982d81ec71a50ccd9d942a930
Deleted: sha256:0e9620318a77eda6790c6199ed545f17a0551a70ad73d30a6787607e56ec2349
Deleted: sha256:f99383a40042696fa63f2877c7cb386e84c0bdcdf2db95b7da1a5d3120a5e510
Deleted: sha256:a48e53b2198849e6a98a3a7bf04b9eb6c3ee7a5845dc3875e8aa76aae76b7740
Deleted: sha256:ca0336e243bf7b774bb274747ac9b69a907883431a653c6826aae6f69acc0ad7
Untagged: localhost:34093/kubevirt/subresource-access-test:devel
Untagged: localhost:34093/kubevirt/subresource-access-test@sha256:ec831bab833f06b594aa8d97d513aa3d6e234d2eb53e831f03b4c86454fadd7e
Deleted: sha256:c797699f2b3760dcc4c3db097406bf4aa45deb44bad0ba06d5a704cb8292f1df
Deleted: sha256:7441b5a244f7dcff9c243dfa6973e255ac746d6559eb63db4508c975a218d76c
Deleted: sha256:f8b56c4f2fed11ad307c81111265d0c8dd9a48b6c878bdcc4d3fa6963f776129
Deleted: sha256:ae47c2658d1acd2774c51f49cfdce928bf43c4f5b4d7847442f339650038a19f
sha256:2df8b30e8f619e28e75e00ea9fa42c63f4f14b1c34fbb1223214102337507863
go version go1.10 linux/amd64
Waiting for rsyncd to be ready
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:2df8b30e8f619e28e75e00ea9fa42c63f4f14b1c34fbb1223214102337507863
go version go1.10 linux/amd64
go version go1.10 linux/amd64
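
Note: the provider setup that the job performs at the top of this log can be reproduced from a kubevirt checkout with something like the commands below. This is a sketch based on the traced commands only; WORKSPACE and NFS_WINDOWS_DIR are CI-specific and omitted, and SIGSTOP is dropped from the trap because the shell cannot catch it:

    # Sketch of the cluster bring-up traced above (values taken from the log).
    export KUBEVIRT_PROVIDER=k8s-1.10.3
    export KUBEVIRT_NUM_NODES=2
    export NAMESPACE=kube-system
    # Tear the cluster down again on exit or interruption.
    trap '{ make cluster-down; }' EXIT SIGINT SIGTERM
    make cluster-down
    make cluster-up
    make cluster-sync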