+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ [[ k8s-1.10.3-release =~ openshift-.* ]]
+ [[ k8s-1.10.3-release =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/07/20 12:35:31 Waiting for host: 192.168.66.101:22
2018/07/20 12:35:34 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/20 12:35:42 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/20 12:35:47 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 30.007982 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:e6fa42ce34db20d8c689a6b0624076c83c4452a05f2c0ba869e28faaa40335da

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/20 12:36:36 Waiting for host: 192.168.66.102:22
2018/07/20 12:36:39 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/20 12:36:47 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/20 12:36:52 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    51s       v1.10.3
node02    Ready     <none>    20s       v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    52s       v1.10.3
node02    Ready     <none>    21s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:34203/kubevirt/virt-controller:devel
Untagged: localhost:34203/kubevirt/virt-controller@sha256:ca1701fca29c0898b569b375cbb049c082f8bf99511b8e9e04b37f8e0ce243de
Deleted: sha256:209ff26ced4aa9bbc74bb635359b173b7541318b15a1ff33e750e0d52662445c
Deleted: sha256:bc67738eca49c2b3f5f209d0ea57b2f469a1b1f2523c8873a1ce2f3675757ef9
Deleted: sha256:e510a340be20ddd734edf5ae77dcab720b2dd82bef233064a0de43c1d75dab67
Deleted: sha256:b9a195811301010bab476225718af0497a2f35a4219b1bcfed4a4a949b7c8be9
Untagged: localhost:34203/kubevirt/virt-launcher:devel
Untagged: localhost:34203/kubevirt/virt-launcher@sha256:2cfb036a4c9b1ae81c4cbd4c10162cd8e621d39cbeefb36f29d158c685bb01ed
Deleted: sha256:4d8d78e2300dc540040db54a69c0438e664d6bc09323d529e55718f2bd43094c
Deleted: sha256:c881459c2a60951b455eee884cb9d5999a06173e000c96f578033615060aff04
Deleted: sha256:e69b3b3e53922b7d95df862e2dbce0319e18e0dfa53e92cb4103f7669fb51e54
Deleted: sha256:b5a60c1f3e78adbd079a84d61dbfa0c30a0a732dabb052c5ba1a5811f6fabb47
Deleted: sha256:e729c8ebff993e6a1ff224c22c45b2d4022be5f4694dacf3eb77f1741ea9fa3a
Deleted: sha256:541ab1426070a02911641e36dd32c262463f5044d788e92c59ebedc4e16b48b3
Deleted: sha256:e339a10d6cc2b53579bde6f13f4663efdd8dad1e63b3772d0a98b3f77fd6c2ac
Deleted: sha256:f1d3388d0e5d47a08480a02b3f4ef2f7d993a95455e43a448602b3fc42b3874b
Deleted: sha256:b22f7ce84092c0f77ee8782638eb861dd533beb9748407d9830bc41b6f7d2b91
Deleted: sha256:c608028cca98f590c7b9f85bc7cf0b50e78d3ba06a22e69bc18fc98db4bc75ec
Deleted: sha256:49b140b663f0399dda36e2b08c4904b0e6fced46bed34c92d215aeaa265406a9
Deleted: sha256:a4076b212fcc30b06cbba5a3c0925e039bd594683c1cfe36212879ea3c76d81d
Untagged: localhost:34203/kubevirt/virt-handler:devel
Untagged: localhost:34203/kubevirt/virt-handler@sha256:fbb4d39b61dc19edfb0cef43922be3d3af71d13bbcd184e548ad1141caf1cc3c
Deleted:
sha256:71ba6d367fcb5bef8c76e982cded0a8ceccb680166b7c04330a820d09790ca3a Deleted: sha256:2323fef87afeeb2ef21be59e491e546e951e97a906b4344078b5f46af296f5b7 Deleted: sha256:419525c8661d0e23e033ee6b1f69710dfac252e468a519f0a259d20ff532c8ac Deleted: sha256:052f630b674eb883a1a32ebc9e239d6acb62bb4d1b168bb7adb3d3a1a0a23a32 Untagged: localhost:34203/kubevirt/virt-api:devel Untagged: localhost:34203/kubevirt/virt-api@sha256:4155b2343fdd20b6cb49375ddaeba948604c17dcc49f1bed0c79088baa85618e Deleted: sha256:43d764676acf076920adbfd9a1db71c6d7404d04d60bbbaa195e8f4f35adb52a Deleted: sha256:a51bd5d09965c5d45a7d55451956c62b25e9fc954b227b4127dbbd8dabe71d72 Deleted: sha256:daa748adfc1b217b53bc6b3385915c947960746dbeb0f1f677f1de52457c5ecc Deleted: sha256:ef827ba05eae546d21c41bf0325ff59409a2ae613d5b6f2062651f8a2c7baa65 Untagged: localhost:34203/kubevirt/subresource-access-test:devel Untagged: localhost:34203/kubevirt/subresource-access-test@sha256:facecb136146ec3e6c6374fa3efc7b2818f6ba7c32cf674888123516d1ebcc45 Deleted: sha256:1f2db7430e063631bcdb67457d1f2cb5ce2dccd1e81c8204fbd9934fcbbde4c4 Deleted: sha256:b94a6189b968b55e1fa233faa4eb05c7b8de82ff1fefeef8c73a8676e2b052b1 Deleted: sha256:50a7c9cb2ab3fa8b69938763107fadd32fdcc72f6cc6b5130498ec8c600c10b7 Deleted: sha256:679add842a90945216cf6c08bb0234ba4032f18f046e9535604ccdc8d193742c Untagged: localhost:34203/kubevirt/example-hook-sidecar:devel Untagged: localhost:34203/kubevirt/example-hook-sidecar@sha256:8998afd40ed8118b9a859850eabb3d2e4b54be988c3367a7ce50c90244335189 Deleted: sha256:55ad38969208c62709bd28ccbe0d10bbf45f937e71bb903f858d37c286b3ce8b Deleted: sha256:aa614029bd984573644f8350906cf22859f60e3bf2886faf3c4ee464a2834bc0 Deleted: sha256:607988f78c456e52d676930f6cc0a280bac6f857315fe7a263827e37515058bc Deleted: sha256:ef8996c201e0a648658562f21ddbc9eee3ae2c14e330ef95c00468d7640b58fa sha256:7fb8539d32771bf74786d31102b8c102fc61586b172276b4710c6944077751f4 go version go1.10 linux/amd64 go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh sha256:7fb8539d32771bf74786d31102b8c102fc61586b172276b4710c6944077751f4 go version go1.10 linux/amd64 go version go1.10 linux/amd64 find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory Compiling tests... 
compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 38.81 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> e9589b9dbfb3 Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> 6526953b7273 Step 5/8 : USER 1001 ---> Using cache ---> 0da81e671cc6 Step 6/8 : COPY virt-controller /usr/bin/virt-controller ---> 44c8fbc7ab47 Removing intermediate container d71a88a3cdf3 Step 7/8 : ENTRYPOINT /usr/bin/virt-controller ---> Running in 1dbbb9012042 ---> 85a6d33b32ff Removing intermediate container 1dbbb9012042 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-controller" '' ---> Running in a84b46ae8b22 ---> 3bb8a2a313f4 Removing intermediate container a84b46ae8b22 Successfully built 3bb8a2a313f4 Sending build context to Docker daemon 41.02 MB Step 1/10 : FROM kubevirt/libvirt:4.2.0 ---> 5f0bfe81a3e0 Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 8826ac178c51 Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> 5eb474bfa821 Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher ---> 05f245dc53f4 Removing intermediate container 8a608d80735a Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> 7c649c1f42cf Removing intermediate container c1bcfcfee73d Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64 ---> Running in 636c365fbd75  ---> d54fa5389dec Removing intermediate container 636c365fbd75 Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher ---> Running in b735f801140b  ---> 18025d7360f2 Removing intermediate container b735f801140b Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector /usr/share/kubevirt/virt-launcher/ ---> 5d6cce3eb58f Removing intermediate container 29e3b2464723 Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh ---> Running in b852abcb1783 ---> 65131493179e Removing intermediate container b852abcb1783 Step 10/10 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-launcher" '' ---> Running in 2136eb3e8f46 ---> f295d8042aaf Removing intermediate container 2136eb3e8f46 Successfully built f295d8042aaf Sending build context to Docker daemon 40.15 MB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/5 : COPY virt-handler /usr/bin/virt-handler ---> 3892107d4575 Removing intermediate container 5d7bdd344e1d Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in 32d5e7c918a2 ---> 7db32aae3462 Removing intermediate container 32d5e7c918a2 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-handler" '' ---> Running in 9ff3ede1f253 ---> c500b36aee04 Removing intermediate container 9ff3ede1f253 Successfully built c500b36aee04 Sending build context to Docker daemon 37.02 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 1a58ff1483fa Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> 87e30c5b4065 Step 5/8 : USER 1001 ---> Using cache ---> e889af541bd0 Step 6/8 : COPY virt-api /usr/bin/virt-api ---> 9fe85b8cc01f Removing intermediate 
container fdecbdba2d8a Step 7/8 : ENTRYPOINT /usr/bin/virt-api ---> Running in cfe36967cfa6 ---> b5224c2a5d3e Removing intermediate container cfe36967cfa6 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-api" '' ---> Running in 2f72f5db784f ---> 09c22f157d68 Removing intermediate container 2f72f5db784f Successfully built 09c22f157d68 Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:28 ---> cc510acfcd70 Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/7 : ENV container docker ---> Using cache ---> 6e6b2ef85e92 Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> 8e1d737ded1f Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> 104e48aa676f Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> 4ed9f69e6653 Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Using cache ---> 694cf1afe619 Successfully built 694cf1afe619 Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/5 : ENV container docker ---> Using cache ---> 6e6b2ef85e92 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> d130857891a9 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "vm-killer" '' ---> Using cache ---> 0b1469b868f8 Successfully built 0b1469b868f8 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> 496290160351 Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> 3b36b527fef8 Step 3/7 : ENV container docker ---> Using cache ---> b3ada414d649 Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> 337be6171fcb Step 5/7 : ADD entry-point.sh / ---> Using cache ---> a98a961fa5a1 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 19baf5d1aab8 Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "registry-disk-v1alpha" '' ---> Using cache ---> caee040db85c Successfully built caee040db85c Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:34684/kubevirt/registry-disk-v1alpha:devel ---> caee040db85c Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> 453ad127b9bc Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> 28cedfe7d642 Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Using cache ---> b0932ddb9d63 Successfully built b0932ddb9d63 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:34684/kubevirt/registry-disk-v1alpha:devel ---> caee040db85c Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> b591880b7a09 Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> 9848462e6b89 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Using cache ---> 901ffca67a01 Successfully built 901ffca67a01 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM 
localhost:34684/kubevirt/registry-disk-v1alpha:devel ---> caee040db85c Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> b591880b7a09 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> 4089bb58f7c7 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Using cache ---> 13c6fa3b7f0d Successfully built 13c6fa3b7f0d Sending build context to Docker daemon 34.04 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> f9cd90a6a0ef Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> df6f2d83c1d6 Step 5/8 : USER 1001 ---> Using cache ---> 56a7b7e6b8ff Step 6/8 : COPY subresource-access-test /subresource-access-test ---> 7964f07ab47e Removing intermediate container d59c6c2de9c4 Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in 6cf2d4d6772b ---> 76b2ff885007 Removing intermediate container 6cf2d4d6772b Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "subresource-access-test" '' ---> Running in 48c738d53204 ---> 1478bab367c1 Removing intermediate container 48c738d53204 Successfully built 1478bab367c1 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:28 ---> cc510acfcd70 Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/9 : ENV container docker ---> Using cache ---> 6e6b2ef85e92 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> c1e9e769c4ba Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> 6729c465203a Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 2aee087083e8 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> e3795172dd73 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 0de2fc4b917f Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "winrmcli" '' ---> Using cache ---> e7206f6d248e Successfully built e7206f6d248e Sending build context to Docker daemon 35.17 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> b730b4ed65df Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar ---> ad291d257a0b Removing intermediate container f254671c3878 Step 4/5 : ENTRYPOINT /example-hook-sidecar ---> Running in 8731e5eb9743 ---> b9e6f47ca1c3 Removing intermediate container 8731e5eb9743 Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Running in 6bb7faa7f313 ---> 1e280cd8862d Removing intermediate container 6bb7faa7f313 Successfully built 1e280cd8862d hack/build-docker.sh push The push refers to a repository [localhost:34684/kubevirt/virt-controller] 4c89cadc4ca5: Preparing ff9b9e61b9df: Preparing 891e1e4ef82a: Preparing ff9b9e61b9df: Pushed 4c89cadc4ca5: Pushed 891e1e4ef82a: Pushed devel: digest: sha256:8fe28aa71eea74e265662e57b8d36ee4fb2ad17d22b09001ccc36cd50a52ee84 size: 949 The push refers to a repository [localhost:34684/kubevirt/virt-launcher] 5602429fed0d: Preparing 4a765d132cb5: Preparing 45ef33143faa: Preparing dacff39c0f5b: Preparing a64dc6cc3359: Preparing cfcba35fba84: Preparing da38cf808aa5: Preparing 
b83399358a92: Preparing 186d8b3e4fd8: Preparing fa6154170bf5: Preparing a64dc6cc3359: Waiting 5eefb9960a36: Preparing cfcba35fba84: Waiting 891e1e4ef82a: Preparing da38cf808aa5: Waiting b83399358a92: Waiting 186d8b3e4fd8: Waiting fa6154170bf5: Waiting dacff39c0f5b: Pushed 4a765d132cb5: Pushed 5602429fed0d: Pushed da38cf808aa5: Pushed b83399358a92: Pushed 186d8b3e4fd8: Pushed fa6154170bf5: Pushed 891e1e4ef82a: Mounted from kubevirt/virt-controller 45ef33143faa: Pushed cfcba35fba84: Pushed a64dc6cc3359: Pushed 5eefb9960a36: Pushed devel: digest: sha256:1abe4fc550c470125c830926bf0279a339586b9b1ffd7b9695e50a97324083d6 size: 2828 The push refers to a repository [localhost:34684/kubevirt/virt-handler] 1c9f3612800e: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-launcher 1c9f3612800e: Pushed devel: digest: sha256:4ed6056f5c14e3287e979832b423d6b3364a7697aef535392297e5184eb3bc03 size: 741 The push refers to a repository [localhost:34684/kubevirt/virt-api] d42b3a5f891b: Preparing 5f1414e2d326: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-handler 5f1414e2d326: Pushed d42b3a5f891b: Pushed devel: digest: sha256:21d04ca54c6e46ed6be77a7920cb3a17d2f68c3d64cccfbe5ba7d2677d4f675d size: 948 The push refers to a repository [localhost:34684/kubevirt/disks-images-provider] 2e0da09ca39e: Preparing 4fe8becbb60f: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-api 2e0da09ca39e: Pushed 4fe8becbb60f: Pushed devel: digest: sha256:e0b18a1418fcb677c9d502ac2e553797a3700a6e6bff9e412f5f919faafc8570 size: 948 The push refers to a repository [localhost:34684/kubevirt/vm-killer] 7b031fa3032f: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/disks-images-provider 7b031fa3032f: Pushed devel: digest: sha256:d4f193309ca1f14b26eb7590e6627c2619a1cbbcde883d730eca499cd8a5e974 size: 740 The push refers to a repository [localhost:34684/kubevirt/registry-disk-v1alpha] bfd12fa374fa: Preparing 18ac8ad2aee9: Preparing 132d61a890c5: Preparing bfd12fa374fa: Pushed 18ac8ad2aee9: Pushed 132d61a890c5: Pushed devel: digest: sha256:e05fe4ee65fe45f48a25a6a776be3dfbc494db81e799404ddc948cbc4622508f size: 948 The push refers to a repository [localhost:34684/kubevirt/cirros-registry-disk-demo] 0ee9a1ddc5f8: Preparing bfd12fa374fa: Preparing 18ac8ad2aee9: Preparing 132d61a890c5: Preparing 18ac8ad2aee9: Mounted from kubevirt/registry-disk-v1alpha 132d61a890c5: Mounted from kubevirt/registry-disk-v1alpha bfd12fa374fa: Mounted from kubevirt/registry-disk-v1alpha 0ee9a1ddc5f8: Pushed devel: digest: sha256:878520df112ee515b2aa937a9c9f761a18a56bfa473dac1bbcb1b0f67a7ed9d2 size: 1160 The push refers to a repository [localhost:34684/kubevirt/fedora-cloud-registry-disk-demo] e52196ed8281: Preparing bfd12fa374fa: Preparing 18ac8ad2aee9: Preparing 132d61a890c5: Preparing bfd12fa374fa: Mounted from kubevirt/cirros-registry-disk-demo 132d61a890c5: Mounted from kubevirt/cirros-registry-disk-demo 18ac8ad2aee9: Mounted from kubevirt/cirros-registry-disk-demo e52196ed8281: Pushed devel: digest: sha256:69cdde0237ca83ba2a1b0c7bef9a3d02f76bbcee3c094ecec74b1c2627de8a89 size: 1161 The push refers to a repository [localhost:34684/kubevirt/alpine-registry-disk-demo] 800a7891dfaa: Preparing bfd12fa374fa: Preparing 18ac8ad2aee9: Preparing 132d61a890c5: Preparing bfd12fa374fa: Mounted from kubevirt/fedora-cloud-registry-disk-demo 132d61a890c5: Mounted from kubevirt/fedora-cloud-registry-disk-demo 18ac8ad2aee9: Mounted from kubevirt/fedora-cloud-registry-disk-demo 
800a7891dfaa: Pushed devel: digest: sha256:78f82db96351724e728d99ed26dc4d267a05e809f3a6bd8a51b5491ec1e73d93 size: 1160 The push refers to a repository [localhost:34684/kubevirt/subresource-access-test] deca89f19ecd: Preparing 3c1237181850: Preparing 891e1e4ef82a: Preparing 3c1237181850: Pushed 891e1e4ef82a: Mounted from kubevirt/vm-killer deca89f19ecd: Pushed devel: digest: sha256:eafb88d0568648acbe52b084d87d4f5c49d316eebfa249315d1e108b2ac2b7ac size: 948 The push refers to a repository [localhost:34684/kubevirt/winrmcli] bf2bff760365: Preparing 589098974698: Preparing 6e22155a44ef: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/subresource-access-test bf2bff760365: Pushed 6e22155a44ef: Pushed 589098974698: Pushed devel: digest: sha256:045705e183ba36d0c98ca8f5a9d5b7486f44fbae7e66b6a449088f22d7865f6f size: 1165 The push refers to a repository [localhost:34684/kubevirt/example-hook-sidecar] 7a0c609fafac: Preparing 39bae602f753: Preparing 7a0c609fafac: Pushed 39bae602f753: Pushed devel: digest: sha256:1666498738a4d5ef1e37a232b415438a0c570cefef3c5725195316ad2af239c9 size: 740 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-88-gaafed74 ++ KUBEVIRT_VERSION=v0.7.0-88-gaafed74 + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ 
KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:34684/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... + cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete 
validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pods -l kubevirt.io No resources found + _kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ wc -l ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + 
KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig 
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-88-gaafed74 ++ KUBEVIRT_VERSION=v0.7.0-88-gaafed74 + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer 
cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:34684/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... + [[ -z k8s-1.10.3-release ]] + [[ k8s-1.10.3-release =~ .*-dev ]] + [[ k8s-1.10.3-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole.rbac.authorization.k8s.io "kubevirt.io:admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:edit" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:view" created serviceaccount "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver-auth-delegator" created rolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created role.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller-cluster-admin" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-privileged-cluster-admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:default" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt.io:default" created service "virt-api" created deployment.extensions "virt-api" created deployment.extensions "virt-controller" created daemonset.extensions "virt-handler" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstances.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancereplicasets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io 
"virtualmachineinstancepresets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachines.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "host-path-disk-alpine" created persistentvolumeclaim "disk-custom" created persistentvolume "host-path-disk-custom" created daemonset.extensions "disks-images-provider" created serviceaccount "kubevirt-testing" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-testing-cluster-admin" created + [[ k8s-1.10.3 =~ os-* ]] + echo Done Done + namespaces=(kube-system default) + [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]] + timeout=300 + sample=30 + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'virt-api-7d79764579-8cq6l 0/1 ContainerCreating 0 3s virt-api-7d79764579-ldv52 0/1 ContainerCreating 0 3s virt-controller-7d57d96b65-6khl6 0/1 ContainerCreating 0 3s virt-controller-7d57d96b65-d7ctw 0/1 ContainerCreating 0 3s virt-handler-pp5qb 0/1 ContainerCreating 0 3s virt-handler-vtwhk 0/1 ContainerCreating 0 3s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running disks-images-provider-6jzk7 0/1 ContainerCreating 0 2s disks-images-provider-fr2qm 0/1 ContainerCreating 0 2s virt-api-7d79764579-8cq6l 0/1 ContainerCreating 0 4s virt-api-7d79764579-ldv52 0/1 ContainerCreating 0 4s virt-controller-7d57d96b65-6khl6 0/1 ContainerCreating 0 4s virt-controller-7d57d96b65-d7ctw 0/1 ContainerCreating 0 4s virt-handler-pp5qb 0/1 ContainerCreating 0 4s virt-handler-vtwhk 0/1 ContainerCreating 0 4s + sleep 30 + current_time=30 + '[' 30 -gt 300 ']' ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n '' ']' + current_time=0 ++ grep false ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n '' ']' + kubectl get pods -n kube-system + cluster/kubectl.sh get pods -n kube-system NAME READY STATUS RESTARTS AGE disks-images-provider-6jzk7 1/1 Running 0 1m disks-images-provider-fr2qm 1/1 Running 0 1m etcd-node01 1/1 Running 0 14m kube-apiserver-node01 1/1 Running 0 14m kube-controller-manager-node01 1/1 Running 0 14m kube-dns-86f4d74b45-mhcjf 3/3 Running 0 15m kube-flannel-ds-b8mwq 1/1 Running 0 15m kube-flannel-ds-btnd2 1/1 Running 0 14m kube-proxy-dhx85 1/1 Running 0 14m kube-proxy-gmxfp 1/1 Running 0 15m kube-scheduler-node01 1/1 Running 0 14m virt-api-7d79764579-8cq6l 1/1 Running 0 1m virt-api-7d79764579-ldv52 1/1 Running 1 1m virt-controller-7d57d96b65-6khl6 1/1 Running 0 1m virt-controller-7d57d96b65-d7ctw 1/1 Running 0 1m virt-handler-pp5qb 1/1 Running 0 1m virt-handler-vtwhk 1/1 Running 0 1m + for i in '${namespaces[@]}' + 
current_time=0 ++ kubectl get pods -n default --no-headers ++ cluster/kubectl.sh get pods -n default --no-headers ++ grep -v Running No resources found. + '[' -n '' ']' + current_time=0 ++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false + '[' -n '' ']' + kubectl get pods -n default + cluster/kubectl.sh get pods -n default No resources found. + kubectl version + cluster/kubectl.sh version Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} + ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml' + [[ -d /home/nfs/images/windows2016 ]] + [[ k8s-1.10.3-release =~ windows.* ]] + FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml' + make functest hack/dockerized "hack/build-func-tests.sh" sha256:7fb8539d32771bf74786d31102b8c102fc61586b172276b4710c6944077751f4 go version go1.10 linux/amd64 go version go1.10 linux/amd64 Compiling tests... compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1532091354 Will run 141 of 141 specs Pod name: disks-images-provider-6jzk7 Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-fr2qm Pod phase: Running copy all images to host mount directory Pod name: virt-api-7d79764579-8cq6l Pod phase: Running level=info timestamp=2018-07-20T12:56:55.142328Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-20T12:56:57.005698Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-20T12:56:59.325622Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/20 12:57:03 http: TLS handshake error from 10.244.0.1:50680: EOF 2018/07/20 12:57:13 http: TLS handshake error from 10.244.0.1:50704: EOF 2018/07/20 12:57:23 http: TLS handshake error from 10.244.0.1:50728: EOF level=info timestamp=2018-07-20T12:57:25.259429Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-20T12:57:27.147510Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-20T12:57:29.405221Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/20 12:57:33 http: TLS handshake error from 
10.244.0.1:50752: EOF level=info timestamp=2018-07-20T12:57:37.110530Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-20T12:57:37.114408Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/20 12:57:43 http: TLS handshake error from 10.244.0.1:50776: EOF 2018/07/20 12:57:53 http: TLS handshake error from 10.244.0.1:50800: EOF level=info timestamp=2018-07-20T12:57:55.369180Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-api-7d79764579-ldv52 Pod phase: Running 2018/07/20 12:55:25 http: TLS handshake error from 10.244.1.1:60302: EOF 2018/07/20 12:55:35 http: TLS handshake error from 10.244.1.1:60308: EOF 2018/07/20 12:55:45 http: TLS handshake error from 10.244.1.1:60314: EOF 2018/07/20 12:55:55 http: TLS handshake error from 10.244.1.1:60320: EOF 2018/07/20 12:56:05 http: TLS handshake error from 10.244.1.1:60326: EOF 2018/07/20 12:56:15 http: TLS handshake error from 10.244.1.1:60332: EOF 2018/07/20 12:56:25 http: TLS handshake error from 10.244.1.1:60338: EOF 2018/07/20 12:56:35 http: TLS handshake error from 10.244.1.1:60344: EOF 2018/07/20 12:56:45 http: TLS handshake error from 10.244.1.1:60350: EOF 2018/07/20 12:56:55 http: TLS handshake error from 10.244.1.1:60356: EOF 2018/07/20 12:57:05 http: TLS handshake error from 10.244.1.1:60362: EOF 2018/07/20 12:57:15 http: TLS handshake error from 10.244.1.1:60368: EOF 2018/07/20 12:57:25 http: TLS handshake error from 10.244.1.1:60374: EOF 2018/07/20 12:57:35 http: TLS handshake error from 10.244.1.1:60380: EOF 2018/07/20 12:57:45 http: TLS handshake error from 10.244.1.1:60386: EOF Pod name: virt-controller-7d57d96b65-6khl6 Pod phase: Running level=info timestamp=2018-07-20T12:50:49.464143Z pos=application.go:173 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 level=info timestamp=2018-07-20T12:50:49.610135Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer" level=info timestamp=2018-07-20T12:50:49.610221Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer" level=info timestamp=2018-07-20T12:50:49.610246Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmiPresetInformer" level=info timestamp=2018-07-20T12:50:49.610268Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmirsInformer" level=info timestamp=2018-07-20T12:50:49.610284Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer configMapInformer" level=info timestamp=2018-07-20T12:50:49.610301Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmInformer" level=info timestamp=2018-07-20T12:50:49.610317Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmiInformer" level=info timestamp=2018-07-20T12:50:50.219948Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller." level=info timestamp=2018-07-20T12:50:50.220137Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." 
level=info timestamp=2018-07-20T12:50:50.220174Z pos=vmi.go:127 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-07-20T12:50:50.220205Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-07-20T12:50:50.220281Z pos=preset.go:71 component=virt-controller service=http msg="Starting Virtual Machine Initializer." Pod name: virt-controller-7d57d96b65-d7ctw Pod phase: Running level=info timestamp=2018-07-20T12:50:50.912341Z pos=application.go:173 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-pp5qb Pod phase: Running level=info timestamp=2018-07-20T12:50:53.567565Z pos=virt-handler.go:89 component=virt-handler hostname=node02 Pod name: virt-handler-vtwhk Pod phase: Running level=info timestamp=2018-07-20T12:50:52.234206Z pos=virt-handler.go:89 component=virt-handler hostname=node01 Failure [121.214 seconds] [BeforeSuite] BeforeSuite /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:46 Timed out after 120.000s. KVM devices are required for testing, but are not present on cluster nodes Expected : false to be true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:376 ------------------------------ Waiting for namespace kubevirt-test-default to be removed, this can take a while ... Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ... Ran 141 of 0 Specs in 127.981 seconds FAIL! -- 0 Passed | 141 Failed | 0 Pending | 0 Skipped --- FAIL: TestTests (127.99s) FAIL make: *** [functest] Error 1 + make cluster-down ./cluster/down.sh
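Editor's note: the suite fails in BeforeSuite because the test harness finds no KVM device on the cluster nodes, so all 141 specs are counted as failed. A minimal manual check is sketched below; it is not part of the CI scripts above, and how you reach the nodes (SSH into node01/node02 of this provider) is up to your environment.

    # Hypothetical check, run on each cluster node (not taken from this log):
    ls -l /dev/kvm                          # the KVM device node must exist
    lsmod | grep -E 'kvm_intel|kvm_amd'     # a vendor KVM kernel module must be loaded

With an ephemeral provider like k8s-1.10.3 the nodes are themselves VMs, so /dev/kvm is typically only present when the CI host enables nested virtualization; without it the suite times out exactly as shown above.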