+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2
+ [[ k8s-1.10.3-release =~ openshift-.* ]]
+ [[ k8s-1.10.3-release =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/07/03 14:38:46 Waiting for host: 192.168.66.101:22
2018/07/03 14:38:49 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/03 14:38:57 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/03 14:39:02 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
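The "Problem with dial ... Sleeping 5s" lines above come from a simple connect-and-retry loop run against each node's SSH port. A minimal sketch of that pattern, assuming a hypothetical `wait_for_host` helper (the real provisioner's dialer is not shown in this log); the probe command is passed in so anything from `nc -z` to a stub can be used:

```shell
#!/bin/sh
# Hedged sketch: retry a probe command until it succeeds or attempts run out.
# wait_for_host ATTEMPTS DELAY CMD [ARGS...] is hypothetical, not the real tool.
wait_for_host() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0                      # host answered
    fi
    echo "Problem with dial: attempt $i failed. Sleeping ${delay}s" >&2
    sleep "$delay"
    i=$((i + 1))
  done
  return 1                          # gave up after all attempts
}
```

In the log the probe eventually succeeds ("Connected to tcp://192.168.66.101:22") and provisioning continues; on exhaustion a real script would fail the build instead.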
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 28.507587 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:f5c99eb72659d770e38b3ed3a9db2f3259fcbf119ed1907723533490b2cfa6d1

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/03 14:39:48 Waiting for host: 192.168.66.102:22
2018/07/03 14:39:51 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/03 14:39:59 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/03 14:40:04 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    47s       v1.10.3
node02    Ready     <none>    18s       v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    48s       v1.10.3
node02    Ready     <none>    19s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
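The trace above gates on readiness by listing nodes and grepping for "NotReady"; the script proceeds only when that grep comes back empty. A minimal sketch of the same gate as a reusable polling function, assuming a hypothetical `wait_ready` name (the real script does a single check after `set +e` rather than a loop); the node-listing command is passed in so a stub can stand in for `cluster/kubectl.sh get nodes --no-headers`:

```shell
#!/bin/sh
# Hedged sketch: poll a node-listing command until no line contains
# "NotReady", giving up after TRIES iterations. wait_ready is hypothetical.
wait_ready() {
  tries=$1; shift
  n=0
  while [ "$n" -lt "$tries" ]; do
    if ! "$@" | grep -q NotReady; then
      return 0                      # every node reports Ready
    fi
    sleep 1
    n=$((n + 1))
  done
  return 1                          # some node never became Ready
}
```

Passing the command as arguments (rather than hard-coding kubectl) is what makes the gate testable without a cluster.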
Untagged: localhost:33179/kubevirt/virt-controller:devel
Untagged: localhost:33179/kubevirt/virt-controller@sha256:9cb766529f3d5d9c0ef3a155f095d81fcaa0e571a002f19197699dcef3cf73fe
Deleted: sha256:66cfaaf661f7f971faa855ce9a642b83f86564582c50319acea1cdc18b69bc82
Deleted: sha256:3e9b33dcaed9cee3b0cd6e02c93048b90fe7e597fe0acbb22d7db8401ae508a3
Deleted: sha256:c0a32b8b3b6522424746535fe208bd8a12eea1e713b23fac226a0f7762668330
Deleted: sha256:35d27585705daf42c0036867c92d0c258eb584488332cb63b2f1e59ece8015b6
Untagged: localhost:33179/kubevirt/virt-launcher:devel
Untagged: localhost:33179/kubevirt/virt-launcher@sha256:1bba2e88ce7177cf202f1f2723cc91c04be3b84ce9631419049d8fbc7d2829a4
Deleted: sha256:025096f08ec87b9dcdc2e12cb11408509e85127515c67fd3a166a3982d0c6f8c
Deleted: sha256:a63e727f97d668042781bab91f5b73081eb70148fd58de482eec12080ae05c86
Deleted: sha256:233780ccbf62dee2e9adbe6c5ceb95b2a3647d7b2fc8df3e2382b4338ea01c37
Deleted: sha256:b74a1955676a7be198c66a4ffb9869d55e4bfcb93a206a8fb7f343b69676d6b3
Deleted: sha256:a3333162ebc96330ca198d8de33a0c2a4a7498bed4e529d5b0836a4cd2e5500a
Deleted: sha256:a57e823ffa97ef3c9a270b6b31be193fe201b91f086dbdb3fbf789e780684625
Deleted: sha256:6c07bc9e2b5ecb41ee2c667a2aa409934f7922257bea374b69f24453db721e09
Deleted: sha256:02dfdf57779625e1816b54b50b432d4aace57d76ffd6bc20f5981cc0811ba743
Deleted: sha256:f5548c7a750d3c7d02865c07311af91c44e5e45764c6fca2e6fd662fd11116ca
Deleted: sha256:369407d4c018dd724a2a8275a751edeb4b272f009c0ca1a3a144ce79c1780d61
Deleted: sha256:9d9170db88fe5d76d47d4288c161eb846f071ce37442e4752ef4090aac0b28c6
Deleted: sha256:3ec8fe576bb76eeda4315faaec4e39f934a582e113188d2fc7e871990e45b522
Deleted: sha256:a1b38f8b580d5090eb7822bfba12b523bc427a26e5deaa7fd923798b62a6b43a
Deleted: sha256:6438db4ad2782de2fbcd46173f52d1f899310624699328ec3b39f1d7231c5106
Deleted: sha256:4d1a4c4ed5bc193f6dc04d45a3e452afb8d7e9b108d461586bfd9b11aa41a0f9
Deleted: sha256:d2c465be269783c713dcc0a38b6f148f32d9e4be9015ee444098646f995e7410
Untagged: localhost:33179/kubevirt/virt-handler:devel
Untagged: localhost:33179/kubevirt/virt-handler@sha256:7159b9689b91c8582ba6b2645d0fc2a80a27ee5272f76f27c8340cc445bed8a2
Deleted: sha256:4228762a9fee4d2778c663400c756e2ea548060482696550483fbb4158ed3e96
Deleted: sha256:67d04892a11e0865bde4a0c231b929b986a67e2875f277797c5593ad7253d674
Deleted: sha256:3de0b74a5e99a1af050b73677538b004e94833764cbf293a999115d79c657da2
Deleted: sha256:4ae12381cd276c206c363ca27a95ebc9ec9a8ba7dcc013b175aa157743f65580
Untagged: localhost:33179/kubevirt/virt-api:devel
Untagged: localhost:33179/kubevirt/virt-api@sha256:314edb1e8b8a26b6853fb89ca792f6f42c53d65c8cbcaf44f22648dc38f6a49a
Deleted: sha256:f08bc9384be7690019b0b55f44107346e4dc4cb81935e307d4da2e50253d2698
Deleted: sha256:d99c3a7e3780f6189ae36ac9941842b284fba75624e85a4522c74738ece543b5
Deleted: sha256:8d25f2647ee7df4b247258d4ac9824374989b7aa6cf3f188c9ec0799d34fd4df
Deleted: sha256:04d40f325a77a32f028b1bf0abae10b40aacb47ce74492efedab85e216fc92c3
Untagged: localhost:33179/kubevirt/subresource-access-test:devel
Untagged: localhost:33179/kubevirt/subresource-access-test@sha256:d8b7ab07e753be1ddf004502075daea504ef98c5f443ff2da13c8b1ac80a5d41
Deleted: sha256:e851be20cb6857d498929de4c7e071ef1ada2eba56bbcf29fb72ae838a5934d4
Deleted: sha256:6016cc70560611c9ebc1b60d873118b6d8c4bb3df7a679a063742f55ba636ce4
Deleted: sha256:9e343d2a4a44208c07ae319abd1cb9cc50cd894ed12f6e8429b56e47a330aba2
Deleted: sha256:f3cfd107ccce96b24e25c68c0598c41ac937a7ad25f87f0cf607f0340c3c1b14
Sending build context to Docker daemon 5.632 kB
Step 1/11 : FROM fedora:28
 ---> cc510acfcd70
Step 2/11 : ENV LIBVIRT_VERSION 4.2.0
 ---> Running in aca37b2655ba
 ---> 5fb4fbe96f1f
Removing intermediate container aca37b2655ba
Step 3/11 : RUN dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img && dnf -y clean all
 ---> Running in 6cc45ef1f898
Fedora 28 - x86_64 - Updates                    789 kB/s |  18 MB     00:23
Fedora 28 - x86_64                              1.2 MB/s |  60 MB     00:49
Last metadata expiration check: 0:00:31 ago on Tue Jul  3 14:41:47 2018.
No match for argument: libvirt-devel-4.2.0
Error: Unable to find a match
The command '/bin/sh -c dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img && dnf -y clean all' returned a non-zero code: 1
make: *** [cluster-build] Error 1
+ make cluster-down
./cluster/down.sh
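The build fails at Step 3/11: the pinned package spec `libvirt-devel-4.2.0` (built from `LIBVIRT_VERSION=4.2.0`) no longer matches anything in the Fedora 28 repositories, so `dnf` exits non-zero and the whole `cluster-build` target aborts. A way to see what is actually available is `dnf list --showduplicates libvirt-devel` inside the same base image. As a minimal sketch of the string handling involved, assuming a hypothetical `pkg_spec` helper not present in the real Dockerfile: pin only when a version is set, so an empty version falls back to whatever the repo ships.

```shell
#!/bin/sh
# Hedged sketch: build a dnf package spec, pinning only when a version is
# given. pkg_spec is hypothetical; the real Dockerfile inlines the pin.
pkg_spec() {
  name=$1; version=$2
  if [ -n "$version" ]; then
    printf '%s-%s' "$name" "$version"   # pinned, e.g. libvirt-devel-4.2.0
  else
    printf '%s' "$name"                 # unpinned fallback
  fi
}
```

Whether to keep a hard pin (reproducible builds, but brittle when the repo moves on, as happened here) or fall back to the repo's current version is a trade-off the build owner has to make; this sketch only shows the mechanics.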