+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ [[ k8s-1.10.3-dev =~ openshift-.* ]]
+ [[ k8s-1.10.3-dev =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/06/26 12:26:02 Waiting for host: 192.168.66.101:22
2018/06/26 12:26:05 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/06/26 12:26:17 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
  [WARNING FileExisting-crictl]: crictl not found in system path
  Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
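The /etc/kubernetes/kubeadm.conf passed to kubeadm init is baked into the k8s-1.10.3 provider image and is not printed in this log. As a rough sketch only, a minimal MasterConfiguration for this run might look like the following (field names from kubeadm 1.10's v1alpha1 API; the podSubnet value is an assumption matching flannel's default, everything else is taken from the output above):

  # sketch only -- the real kubeadm.conf shipped with the provider image is not shown in this log
  cat > /etc/kubernetes/kubeadm.conf <<'EOF'
  apiVersion: kubeadm.k8s.io/v1alpha1
  kind: MasterConfiguration
  kubernetesVersion: v1.10.3
  token: abcdef.1234567890123456
  api:
    advertiseAddress: 192.168.66.101
  networking:
    podSubnet: 10.244.0.0/16   # assumed; flannel's default pod CIDR
  EOF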
[apiclient] All control plane components are healthy after 29.507693 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:b4e90813cc682ced8caa13681c62c0972de2d359a7de4acc94da6b12d74a93fd

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/06/26 12:27:02 Waiting for host: 192.168.66.102:22
2018/06/26 12:27:05 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/06/26 12:27:17 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
  [WARNING FileExisting-crictl]: crictl not found in system path
  Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
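The "Waiting for host ... Sleeping 5s" lines above come from a Go helper inside the kubevirtci gocli container, not from a shell script. A rough bash equivalent of that retry loop, for illustration only (wait_for_ssh is not a function from the repo), would be:

  # illustration only: poll a node's SSH port before running kubeadm on it
  wait_for_ssh() {
    local host=$1 port=${2:-22}
    until timeout 3 bash -c ": </dev/tcp/${host}/${port}" 2>/dev/null; do
      echo "Waiting for host: ${host}:${port}. Sleeping 5s"
      sleep 5
    done
    echo "Connected to tcp://${host}:${port}"
  }
  wait_for_ssh 192.168.66.102 22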
+ set +e + kubectl get nodes --no-headers + cluster/kubectl.sh get nodes --no-headers node01 Ready master 43s v1.10.3 node02 Ready 15s v1.10.3 + kubectl_rc=0 + '[' 0 -ne 0 ']' ++ kubectl get nodes --no-headers ++ cluster/kubectl.sh get nodes --no-headers ++ grep NotReady + '[' -n '' ']' + set -e + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 45s v1.10.3 node02 Ready 17s v1.10.3 + make cluster-sync ./cluster/build.sh Building ... Untagged: localhost:33593/kubevirt/virt-controller:devel Untagged: localhost:33593/kubevirt/virt-controller@sha256:6c18fd21f660059a2e1dad994869a84eae8ae294fcbf55178e203c1c20e71482 Deleted: sha256:fa17a7b3d17639fc1368ddcc1db47e0c50f35f82e9b196f9f599db88a201d712 Deleted: sha256:17b06afe4c8e2710504d4a4f8a450c204999d26033d0a32e2002c86066762fba Deleted: sha256:101798e0213025f7706a4d1a553937d1e86372642ab8e9416be374e6c6440a79 Deleted: sha256:b758cb90cbf7c028775387b6f83377a1d710ae7442e004c471db57732f6c5308 Untagged: localhost:33593/kubevirt/virt-launcher:devel Untagged: localhost:33593/kubevirt/virt-launcher@sha256:8e1c39cb42021e3ea858ee3904bb393fa54771604208d40fbb3baa368cdedf77 Deleted: sha256:6d420c93359f55731a4ab7f229c1c0120b00830ad9ccec3559dd33a6982e7435 Deleted: sha256:ac53eb4a04592b79a4dda74a720d3e756868e6eb1e6b23075f3a54ce5a8f3f57 Deleted: sha256:a733c14dc826102bb2a07b02c7ff4247c7e4eb3ce22c6e8a39e14752d1a0bad9 Deleted: sha256:e305eef0e65a551255e875cd74b152cac5ac5fa9c45287ae361a5026ab9dd2bf Deleted: sha256:9e9ba763a27d404c062686ebe35fe5cb181227660ac65f268cb45d20f405e6ab Deleted: sha256:5a55eb46dd0745702a1d1f8bdfa4ac22a3ca0415d0c186ff0f994e7cbfdc652e Deleted: sha256:1dda7d43c646f743cd63b2906f655a01cfdf6602a265a9ad35415b1ec1382fd9 Deleted: sha256:dfa4988555e247c56ac9688da7daa37edc88e7d6fed4fa4071a205d92a26b505 Deleted: sha256:97c89711aed070af5ad4b90530401f4be6b20fd20bca7200768e6a7eef133457 Deleted: sha256:b3d724edb8c0336d6666715c18f9920d4a36214c4d7e6b2f243440363f0568f1 Deleted: sha256:b913665409cfa15337dc3776910f9f1aad9df276799db1d295c825d4ff945e71 Deleted: sha256:86df4aeb2aee92e4855a50d3109db0ce8780c953e754654e5ee0154ea4e5975d Deleted: sha256:dc3ff803b0952c1c23fd7f165c5421d8a157a831c7e6e5119cd2a20688ccaa90 Deleted: sha256:27c358dbfe0684ab1c124103556d55a5473c21ab821307bdedb8947d30a2425c Deleted: sha256:56bb8b9c60fd195323053bc5ace6505b0b31ddc2d518317fa4db938e58691916 Deleted: sha256:412ee6fa3b8161b4e21bb09af70ef1269ee7e5fae13ce3ca77b73d9d45c8ead9 Deleted: sha256:dae6528020c74dd9ae53a3fdf9c53192a68584653c3877a1e20781023fd27702 Deleted: sha256:dccb1b5f040cb6b1e9bfa0c4301b4364e09bf147bf13ca442e1029272624e4f4 Untagged: localhost:33593/kubevirt/virt-handler:devel Untagged: localhost:33593/kubevirt/virt-handler@sha256:f8b429d1e63bda3e304d9c7eec5c93019a878278b0548bcb53a133c3cb5b5824 Deleted: sha256:384b736dfb24dfbbbd3ffdd3876495e85ba2a6f2a1be25118d7f98050f032bdb Deleted: sha256:34c0d02c989df144272e8524bf0b5c58450ebf26d3f790f2bad1b290e80d90a9 Deleted: sha256:851863ffdb7541b30ac831fbad4b9ff97554f0b5a29ec619f82f220c6f1f1439 Deleted: sha256:4e71d18c84704edae66475e02d864e45bc405c14f2eae451a04c3e3529b138eb Untagged: localhost:33593/kubevirt/virt-api:devel Untagged: localhost:33593/kubevirt/virt-api@sha256:0f7d791eace9f3c299e737c293b9f12c98c8bb76cb38dbe3956d857d5ad39a6a Deleted: sha256:7afd5273644b78c3b15e632bae7520beb90a1e1072504d980dbe4b11ce3d644a Deleted: sha256:0327e12c861753b140987099c77c9b891000f8d85bafce1f5d1c77d42c574c1b Deleted: 
sha256:92d7c0edc41532a7aef027ede418601fdf6f4abf66fb10067415bef45539962e Deleted: sha256:18b1ffab929484021ce24b536679e5ccfa18164737b9dffa100e74947477b2db Untagged: localhost:33593/kubevirt/subresource-access-test:devel Untagged: localhost:33593/kubevirt/subresource-access-test@sha256:2505c940ffc93fb146f1473c84b813f88297a16940655d6ea3ab3ccd83def2a7 Deleted: sha256:e5448cc6555281d77b7b9487e856b667fd456456da1f8091c3fcaf6482f2a13c Deleted: sha256:20ed31321741ba7f7aa6003fb77c6862eeb488e59be32190f0db63087af50fbd Deleted: sha256:519ae1cc129ff5e05db935e42e1ff989da7e2b57d354c7ff98f5b4129ee0c085 Deleted: sha256:14ecbb7b15a37e62430a4df334264bb0a786081bef8797c2e767561117c95f45 sha256:bfa4d0e4a1a6ecc8067d4e64dfd286bfa9c51c74b3def97ee58a46f3832bc088 go version go1.10 linux/amd64 go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh sha256:bfa4d0e4a1a6ecc8067d4e64dfd286bfa9c51c74b3def97ee58a46f3832bc088 go version go1.10 linux/amd64 go version go1.10 linux/amd64 find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory Compiling tests... compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 36.24 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 65d6d48cdb35 Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> e1ade8663337 Step 5/8 : USER 1001 ---> Using cache ---> 2ce44d6f372a Step 6/8 : COPY virt-controller /usr/bin/virt-controller ---> b5b641199ba4 Removing intermediate container 01e39482d363 Step 7/8 : ENTRYPOINT /usr/bin/virt-controller ---> Running in 5786c5e41ead ---> 0affac4e7f7c Removing intermediate container 5786c5e41ead Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "virt-controller" '' ---> Running in 80c6b1d38fc8 ---> a6077d470daa Removing intermediate container 80c6b1d38fc8 Successfully built a6077d470daa Sending build context to Docker daemon 38.19 MB Step 1/10 : FROM kubevirt/libvirt:3.7.0 ---> 60c80c8f7523 Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d4ddb23dff45 Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> 5563e87cb74c Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher ---> d9561b61f63a Removing intermediate container 6f89ad8a6d29 Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> d0b5a38e9a7e Removing intermediate container 6386440062ea Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64 ---> Running in a0696ba2e081  ---> 8aed58866ddf Removing intermediate container a0696ba2e081 Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher ---> Running in 80258287a526  ---> d276c0591bba Removing intermediate container 80258287a526 Step 8/10 : COPY entrypoint.sh libvirtd.sh sh.sh sock-connector /usr/share/kubevirt/virt-launcher/ ---> ca356516d512 Removing intermediate container 120d3b5d471f Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh ---> Running in fd9f46325707 ---> f572e2de5117 Removing intermediate container fd9f46325707 Step 10/10 : LABEL 
"kubevirt-functional-tests-k8s-1.10.3-dev0" '' "virt-launcher" '' ---> Running in 4ce1ad43c027 ---> c6761945b445 Removing intermediate container 4ce1ad43c027 Successfully built c6761945b445 Sending build context to Docker daemon 39.56 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/5 : COPY virt-handler /usr/bin/virt-handler ---> 5f290608388e Removing intermediate container fd01685b4795 Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in 4569f419ec89 ---> cb81f6e89e30 Removing intermediate container 4569f419ec89 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "virt-handler" '' ---> Running in add2ac22e8c4 ---> 314e151ff5a0 Removing intermediate container add2ac22e8c4 Successfully built 314e151ff5a0 Sending build context to Docker daemon 37.01 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 2eeb55f39191 Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> 56cea32a45d4 Step 5/8 : USER 1001 ---> Using cache ---> d121920c238b Step 6/8 : COPY virt-api /usr/bin/virt-api ---> 5cc43d02b69f Removing intermediate container 3087d87bc376 Step 7/8 : ENTRYPOINT /usr/bin/virt-api ---> Running in 3ddb7a8b5adc ---> 52613fa79508 Removing intermediate container 3ddb7a8b5adc Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "virt-api" '' ---> Running in 73fa621d417b ---> 6af45ff9332b Removing intermediate container 73fa621d417b Successfully built 6af45ff9332b Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:27 ---> 9110ae7f579f Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/7 : ENV container docker ---> Using cache ---> 32cab959eac8 Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> c2c2137442bf Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> 23885e91f3a7 Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> aac5b900eabd Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.10.3-dev0" '' ---> Using cache ---> 28ad08b15bda Successfully built 28ad08b15bda Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/5 : ENV container docker ---> Using cache ---> 32cab959eac8 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> 391fa00b27f9 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "vm-killer" '' ---> Using cache ---> 646250748886 Successfully built 646250748886 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> bcec0ae8107e Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> 6696837acee7 Step 3/7 : ENV container docker ---> Using cache ---> 2dd2b1a02be6 Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> dd3c4950b5c8 Step 5/7 : ADD entry-point.sh / ---> Using cache ---> d221e0eb5770 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 6506e61a9f41 Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "registry-disk-v1alpha" '' ---> 
Using cache ---> 82054b34b9d5 Successfully built 82054b34b9d5 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32820/kubevirt/registry-disk-v1alpha:devel ---> 82054b34b9d5 Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> 8f868355b16b Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> f91863ab7d52 Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-dev0" '' ---> Using cache ---> 7b4f8bf3ec7b Successfully built 7b4f8bf3ec7b Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32820/kubevirt/registry-disk-v1alpha:devel ---> 82054b34b9d5 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d785e1208095 Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> b175a8df3359 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-dev0" '' ---> Using cache ---> c6d11fb681a3 Successfully built c6d11fb681a3 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32820/kubevirt/registry-disk-v1alpha:devel ---> 82054b34b9d5 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d785e1208095 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> 794d2a2da03b Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-dev0" '' ---> Using cache ---> acb615d9ecc5 Successfully built acb615d9ecc5 Sending build context to Docker daemon 34.04 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> 6e6e1b7931e0 Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 9d27e69a25f2 Step 5/8 : USER 1001 ---> Using cache ---> 1760a8e197af Step 6/8 : COPY subresource-access-test /subresource-access-test ---> 6a4f9b7fe162 Removing intermediate container 19756102a6a0 Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in feb130c45713 ---> cd255d387463 Removing intermediate container feb130c45713 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "subresource-access-test" '' ---> Running in d4d2c088ccaf ---> ce8a79d58e04 Removing intermediate container d4d2c088ccaf Successfully built ce8a79d58e04 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:27 ---> 9110ae7f579f Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/9 : ENV container docker ---> Using cache ---> 32cab959eac8 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> 8e034c77f534 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> 28ec1d482013 Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> db78d0286f58 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 7ebe54e98be4 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> a3b04c1816f5 Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "winrmcli" '' ---> Using cache ---> d8b42f28d0dd Successfully built d8b42f28d0dd 
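Each "Sending build context ... Successfully built" block above is one iteration of hack/build-docker.sh. A condensed sketch of that loop, using the $docker_images list and the docker_prefix/docker_tag values printed later in the trace (the build-context path and flags here are illustrative, not the script's exact invocation):

  # sketch of the per-image build loop; directory layout under _out/ is assumed
  docker_prefix=localhost:32820/kubevirt
  docker_tag=devel
  docker_images="cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api \
    images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha \
    images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo \
    images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli"
  for image_dir in $docker_images; do
    name=$(basename "$image_dir")
    docker build -t "${docker_prefix}/${name}:${docker_tag}" "_out/${image_dir}"
  done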
hack/build-docker.sh push The push refers to a repository [localhost:32820/kubevirt/virt-controller] f488b7221f27: Preparing 52069b1f5033: Preparing 39bae602f753: Preparing 52069b1f5033: Pushed f488b7221f27: Pushed 39bae602f753: Pushed devel: digest: sha256:9c704ecee98804804ddefca549a0896c99cbf6c017a5a2598c1438685e0f3124 size: 948 The push refers to a repository [localhost:32820/kubevirt/virt-launcher] f059a1fde06b: Preparing 7463ff68e769: Preparing da70e24a04cb: Preparing 895df689c85b: Preparing 9711dfc96621: Preparing 05dda64adde7: Preparing 530cc55618cd: Preparing 34fa414dfdf6: Preparing 05dda64adde7: Waiting 530cc55618cd: Waiting a1359dc556dd: Preparing 490c7c373332: Preparing 4b440db36f72: Preparing 39bae602f753: Preparing 490c7c373332: Waiting 39bae602f753: Waiting f059a1fde06b: Pushed 7463ff68e769: Pushed 895df689c85b: Pushed 530cc55618cd: Pushed 34fa414dfdf6: Pushed 490c7c373332: Pushed a1359dc556dd: Pushed 39bae602f753: Mounted from kubevirt/virt-controller da70e24a04cb: Pushed 05dda64adde7: Pushed 9711dfc96621: Pushed 4b440db36f72: Pushed devel: digest: sha256:ce50bc114312c0105511296d7975402ca2168f76f852fa832e1fc4768f5e4725 size: 2828 The push refers to a repository [localhost:32820/kubevirt/virt-handler] b4f1f8e70ee3: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-launcher b4f1f8e70ee3: Pushed devel: digest: sha256:f01b9ede52dbff85ead3fac566883c6424eeaceb686c37eea99a4972f9221d9e size: 741 The push refers to a repository [localhost:32820/kubevirt/virt-api] d5101292f27f: Preparing 86b4b25303b4: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-handler 86b4b25303b4: Pushed d5101292f27f: Pushed devel: digest: sha256:3d34b520ff06c736370fdfc0e3aaf6a1055908d86f428d3993daf3210cf4696c size: 948 The push refers to a repository [localhost:32820/kubevirt/disks-images-provider] 623d7fbf1de4: Preparing 5c960d4d68b5: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-api 623d7fbf1de4: Pushed 5c960d4d68b5: Pushed devel: digest: sha256:0a295479fe7a26118ed620b31dab0e5ce2381b5bedfc316d3d05f1715919b822 size: 948 The push refers to a repository [localhost:32820/kubevirt/vm-killer] 040d3361950b: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/disks-images-provider 040d3361950b: Pushed devel: digest: sha256:d18930b5e00837347b2d4e7d90bb67cee52fd78b2b6045cafe8fb4a2d57bf855 size: 740 The push refers to a repository [localhost:32820/kubevirt/registry-disk-v1alpha] 4cd98e29acca: Preparing 9beeb9a18439: Preparing 6709b2da72b8: Preparing 4cd98e29acca: Pushed 9beeb9a18439: Pushed 6709b2da72b8: Pushed devel: digest: sha256:84dbd641143a741f47182d2940a2166cf33f35acda9e25102fb0f42677504a7a size: 948 The push refers to a repository [localhost:32820/kubevirt/cirros-registry-disk-demo] a1f56c6ae6c7: Preparing 4cd98e29acca: Preparing 9beeb9a18439: Preparing 6709b2da72b8: Preparing 4cd98e29acca: Mounted from kubevirt/registry-disk-v1alpha 6709b2da72b8: Mounted from kubevirt/registry-disk-v1alpha 9beeb9a18439: Mounted from kubevirt/registry-disk-v1alpha a1f56c6ae6c7: Pushed devel: digest: sha256:0c2801b5caa8182801b035ef759082d827dcaa912bf763cf88e433d8b43342a2 size: 1160 The push refers to a repository [localhost:32820/kubevirt/fedora-cloud-registry-disk-demo] 500159843b78: Preparing 4cd98e29acca: Preparing 9beeb9a18439: Preparing 6709b2da72b8: Preparing 4cd98e29acca: Mounted from kubevirt/cirros-registry-disk-demo 6709b2da72b8: Mounted from kubevirt/cirros-registry-disk-demo 9beeb9a18439: Mounted from 
kubevirt/cirros-registry-disk-demo 500159843b78: Pushed devel: digest: sha256:8d04f5d5382a7e8a32d1c43fdcd39f6757bc0c4434b1dfbf018397ac9ad67f30 size: 1161 The push refers to a repository [localhost:32820/kubevirt/alpine-registry-disk-demo] 5ec13cd87fb6: Preparing 4cd98e29acca: Preparing 9beeb9a18439: Preparing 6709b2da72b8: Preparing 6709b2da72b8: Mounted from kubevirt/fedora-cloud-registry-disk-demo 9beeb9a18439: Mounted from kubevirt/fedora-cloud-registry-disk-demo 4cd98e29acca: Mounted from kubevirt/fedora-cloud-registry-disk-demo 5ec13cd87fb6: Pushed devel: digest: sha256:0be033836e659c390945b57947929d10cb471a1056678b618d5ce498b1686522 size: 1160 The push refers to a repository [localhost:32820/kubevirt/subresource-access-test] 7f419f21bfe4: Preparing 2c4f6b64d5e3: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/vm-killer 2c4f6b64d5e3: Pushed 7f419f21bfe4: Pushed devel: digest: sha256:6eb5aaec6a3e3356b8131d24af0c45d5e89f6f73078a93c5d5229f9993ef9c50 size: 948 The push refers to a repository [localhost:32820/kubevirt/winrmcli] 161ef5381259: Preparing 2bef46eb5bf3: Preparing ac5611d25ed9: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/subresource-access-test 161ef5381259: Pushed ac5611d25ed9: Pushed 2bef46eb5bf3: Pushed devel: digest: sha256:1194326ef21a8b1677e0a924d7d2f4562a9047d1d3bcf85cee32975fdb04b8bf size: 1165 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-dev ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-dev0 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-dev0 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-alpha.4-3-ge0b5c1d ++ KUBEVIRT_VERSION=v0.7.0-alpha.4-3-ge0b5c1d + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag 
manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:32820/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... + cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + 
KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pods -l kubevirt.io No resources found + _kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l 
kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export 
KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-dev ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-dev0 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-dev0 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-alpha.4-3-ge0b5c1d ++ KUBEVIRT_VERSION=v0.7.0-alpha.4-3-ge0b5c1d + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo 
cmd/subresource-access-test images/winrmcli' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:32820/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... + [[ -z k8s-1.10.3-dev ]] + [[ k8s-1.10.3-dev =~ .*-dev ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/dev -R + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/dev -R serviceaccount "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver-auth-delegator" created rolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created role.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller-cluster-admin" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-privileged-cluster-admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:edit" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:view" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:default" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt.io:default" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancereplicasets.kubevirt.io" created service "virt-api" created deployment.extensions "virt-api" created service "virt-controller" created deployment.extensions "virt-controller" created daemonset.extensions "virt-handler" created customresourcedefinition.apiextensions.k8s.io "virtualmachines.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstances.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancepresets.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume 
"host-path-disk-alpine" created persistentvolumeclaim "disk-custom" created persistentvolume "host-path-disk-custom" created daemonset.extensions "disks-images-provider" created serviceaccount "kubevirt-testing" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-testing-cluster-admin" created + [[ k8s-1.10.3 =~ os-* ]] + echo Done Done + namespaces=(kube-system default) + [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]] + timeout=300 + sample=30 + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'virt-api-7586947775-wvk8j 0/1 ContainerCreating 0 2s virt-controller-7d57d96b65-9ng4t 0/1 ContainerCreating 0 2s virt-controller-7d57d96b65-z9pqx 0/1 ContainerCreating 0 2s virt-handler-6hjhs 0/1 ContainerCreating 0 2s virt-handler-rtmzg 0/1 ContainerCreating 0 2s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running disks-images-provider-4rwh2 0/1 ContainerCreating 0 0s disks-images-provider-ps7pr 0/1 ContainerCreating 0 1s virt-api-7586947775-wvk8j 0/1 ContainerCreating 0 3s virt-controller-7d57d96b65-9ng4t 0/1 ContainerCreating 0 3s virt-controller-7d57d96b65-z9pqx 0/1 ContainerCreating 0 3s virt-handler-6hjhs 0/1 ContainerCreating 0 3s virt-handler-rtmzg 0/1 ContainerCreating 0 3s + sleep 30 + current_time=30 + '[' 30 -gt 300 ']' ++ kubectl get pods -n kube-system --no-headers ++ grep -v Running ++ cluster/kubectl.sh get pods -n kube-system --no-headers + '[' -n '' ']' + current_time=0 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false + '[' -n '' ']' + kubectl get pods -n kube-system + cluster/kubectl.sh get pods -n kube-system NAME READY STATUS RESTARTS AGE disks-images-provider-4rwh2 1/1 Running 0 32s disks-images-provider-ps7pr 1/1 Running 0 33s etcd-node01 1/1 Running 0 11m kube-apiserver-node01 1/1 Running 0 11m kube-controller-manager-node01 1/1 Running 0 11m kube-dns-86f4d74b45-9zr4h 3/3 Running 0 12m kube-flannel-ds-9cnxs 1/1 Running 0 12m kube-flannel-ds-dzh48 1/1 Running 0 12m kube-proxy-g2q5m 1/1 Running 0 12m kube-proxy-vptbs 1/1 Running 0 12m kube-scheduler-node01 1/1 Running 0 11m virt-api-7586947775-wvk8j 1/1 Running 0 35s virt-controller-7d57d96b65-9ng4t 1/1 Running 0 35s virt-controller-7d57d96b65-z9pqx 1/1 Running 0 35s virt-handler-6hjhs 1/1 Running 0 35s virt-handler-rtmzg 1/1 Running 0 35s + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n default --no-headers ++ cluster/kubectl.sh get pods -n default --no-headers ++ grep -v Running No resources found. + '[' -n '' ']' + current_time=0 ++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false ++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n '' ']' + kubectl get pods -n default + cluster/kubectl.sh get pods -n default No resources found. 
+ kubectl version + cluster/kubectl.sh version Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} + ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/junit.xml' + [[ -d /home/nfs/images/windows2016 ]] + [[ k8s-1.10.3-dev =~ windows.* ]] + FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/junit.xml' + make functest hack/dockerized "hack/build-func-tests.sh" sha256:bfa4d0e4a1a6ecc8067d4e64dfd286bfa9c51c74b3def97ee58a46f3832bc088 go version go1.10 linux/amd64 go version go1.10 linux/amd64 Compiling tests... compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1530016802 Will run 134 of 134 specs • [SLOW TEST:114.354 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting and stopping the same VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91 should success multiple times /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92 ------------------------------ • [SLOW TEST:17.510 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112 should not modify the spec on status update /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113 ------------------------------ • [SLOW TEST:29.407 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting multiple VMIs /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130 should success /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131 ------------------------------ • Failure [200.699 seconds] Health Monitoring /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:37 A VirtualMachineInstance with a watchdog device /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:56 should be shut down when the watchdog expires [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:57 Expected error: : 180000000000 expect: timer expired after 180 seconds not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:64 ------------------------------ STEP: Starting a VirtualMachineInstance level=info timestamp=2018-06-26T12:42:45.745653Z pos=utils.go:240 component=tests msg="Created virtual machine pod virt-launcher-testvmintdrz-2jbs5" level=info timestamp=2018-06-26T12:43:02.436264Z pos=utils.go:240 component=tests msg="Pod owner ship transferred to the node virt-launcher-testvmintdrz-2jbs5" level=info timestamp=2018-06-26T12:43:04.493752Z pos=utils.go:240 component=tests msg="VirtualMachineInstance defined." 
level=info timestamp=2018-06-26T12:43:04.510863Z pos=utils.go:240 component=tests msg="VirtualMachineInstance started." STEP: Expecting the VirtualMachineInstance console level=info timestamp=2018-06-26T12:46:05.091556Z pos=utils.go:1237 component=tests namespace=kubevirt-test-default name=testvmintdrz kind=VirtualMachineInstance uid= msg="Login: [{2 \r\n\r\n\r\nISOLINUX 6.04 6.04-pre1 Copyright (C) 1994-2015 H. Peter Anvin et al\r\nboot: \u001b[?7h\r\n []}]" • ------------------------------ • Failure [30.173 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 An invalid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:58 should reject POST if validation webhoook deems the spec is invalid [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:83 Expected : 504 to equal : 422 /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:103 ------------------------------ • Failure [30.198 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should update VirtualMachine once VMIs are up [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:195 Expected error: <*errors.StatusError | 0xc420aba120>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, } Timeout: request did not complete within allowed duration not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:127 ------------------------------ • Failure [30.226 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should remove VirtualMachineInstance once the VMI is marked for deletion [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:204 Expected error: <*errors.StatusError | 0xc420aa22d0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, } Timeout: request did not complete within allowed duration not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:127 ------------------------------ • Failure [30.381 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should remove owner references on the VirtualMachineInstance if it is orphan deleted [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:217 Expected error: <*errors.StatusError | 0xc420abaea0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, } Timeout: request did not complete within allowed duration not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:127 ------------------------------ • Failure [30.214 seconds] VirtualMachine 
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should recreate VirtualMachineInstance if it gets deleted [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:245 Expected error: <*errors.StatusError | 0xc420abb710>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, } Timeout: request did not complete within allowed duration not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:127 ------------------------------ • Failure [30.294 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should recreate VirtualMachineInstance if the VirtualMachineInstance's pod gets deleted [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:265 Expected error: <*errors.StatusError | 0xc42003a750>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, } Timeout: request did not complete within allowed duration not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:127 ------------------------------ STEP: Creating a new VMI • Failure [30.225 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should stop VirtualMachineInstance if running set to false [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:325 Expected error: <*errors.StatusError | 0xc4202e8e10>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, } Timeout: request did not complete within allowed duration not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:127 ------------------------------ • Failure [30.241 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should start and stop VirtualMachineInstance multiple times [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:333 Expected error: <*errors.StatusError | 0xc42003b5f0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, } Timeout: request did not complete within allowed duration not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:127 ------------------------------ • Failure [30.183 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given 
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should not update the VirtualMachineInstance spec if Running [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:346
    Expected error:
        <*errors.StatusError | 0xc420944000>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
        Timeout: request did not complete within allowed duration
    not to have occurred
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:127
------------------------------
• Failure [30.245 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should survive guest shutdown, multiple times [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:387
    Expected error:
        <*errors.StatusError | 0xc420a06120>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
        Timeout: request did not complete within allowed duration
    not to have occurred
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:127
------------------------------
STEP: Creating new VMI, not running
• Failure [30.280 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    Using virtctl interface
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435
      should start a VirtualMachineInstance once [It]
      /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:436
      Expected error:
          <*errors.StatusError | 0xc420a06b40>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
          Timeout: request did not complete within allowed duration
      not to have occurred
      /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:127
------------------------------
STEP: getting a VMI
• Failure [30.168 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    Using virtctl interface
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435
      should stop a VirtualMachineInstance once [It]
      /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:467
      Expected error:
          <*errors.StatusError | 0xc42003acf0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
          Timeout: request did not complete within allowed duration
      not to have occurred
      /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:127
------------------------------
STEP: getting a VMI
• Failure [30.233 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:43
  VirtualMachineInstance definition
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:54
    with 3 CPU cores
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55
      should report 3 cpu cores under guest OS [It]
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:61
      Expected error:
          <*errors.StatusError | 0xc420a075f0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
          Timeout: request did not complete within allowed duration
      not to have occurred
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:73
------------------------------
STEP: Starting a VirtualMachineInstance
• Failure [30.430 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:43
  VirtualMachineInstance definition
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:54
    with hugepages
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:107
      should consume hugepages
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        hugepages-2Mi [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
        Expected error:
            <*errors.StatusError | 0xc42003a7e0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
            Timeout: request did not complete within allowed duration
        not to have occurred
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:183
------------------------------
STEP: Starting a VM
S [SKIPPING] [0.738 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:43
  VirtualMachineInstance definition
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:54
    with hugepages
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:107
      should consume hugepages
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        hugepages-1Gi [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
        No node with hugepages hugepages-1Gi capacity
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:159
------------------------------
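The two "should consume hugepages" entries above drive a VMI whose guest memory is explicitly backed by hugepages of a given page size; only the 2Mi case actually ran here, and the 1Gi case was skipped because no node advertised hugepages-1Gi capacity. A minimal sketch of that kind of spec, assuming the present-day kubevirt.io/api/core/v1 types (this run used the equivalent in-tree package), with the object name and the 64Mi memory request chosen purely for illustration:

package main

import (
	"fmt"

	k8sv1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubevirtv1 "kubevirt.io/api/core/v1"
)

// hugepagesVMI sketches the shape of spec the "should consume hugepages"
// table entries exercise: guest memory backed by hugepages of the given
// page size. Name and memory size are illustrative, not taken from the tests.
func hugepagesVMI(pageSize string) *kubevirtv1.VirtualMachineInstance {
	return &kubevirtv1.VirtualMachineInstance{
		ObjectMeta: metav1.ObjectMeta{Name: "hugepages-demo"},
		Spec: kubevirtv1.VirtualMachineInstanceSpec{
			Domain: kubevirtv1.DomainSpec{
				Resources: kubevirtv1.ResourceRequirements{
					Requests: k8sv1.ResourceList{
						k8sv1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Memory: &kubevirtv1.Memory{
					Hugepages: &kubevirtv1.Hugepages{PageSize: pageSize},
				},
			},
		},
	}
}

func main() {
	vmi := hugepagesVMI("2Mi")
	// Such a VMI can only land on a node that advertises the matching
	// hugepages-<size> capacity, which is the check behind the skipped 1Gi entry.
	fmt.Println(vmi.Name, vmi.Spec.Domain.Memory.Hugepages.PageSize)
}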
• Failure [30.125 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:43
  VirtualMachineInstance definition
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:54
    with hugepages
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:107
      with unsupported page size
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:193
        should fail to schedule the pod [It]
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:194
        Expected error:
            <*errors.StatusError | 0xc42003b710>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
            Timeout: request did not complete within allowed duration
        not to have occurred
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:213
------------------------------
STEP: Starting a VM
• Failure [30.318 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:43
  New VirtualMachineInstance with all supported drives
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:237
    should have all the device nodes [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:260
    Expected error:
        <*errors.StatusError | 0xc4202e82d0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
        Timeout: request did not complete within allowed duration
    not to have occurred
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:262
------------------------------
• Failure [0.466 seconds]
Version
/root/go/src/kubevirt.io/kubevirt/tests/version_test.go:35
  Check that version parameters were loaded by ldflags at build time
  /root/go/src/kubevirt.io/kubevirt/tests/version_test.go:46
    Should return a good version information struct [It]
    /root/go/src/kubevirt.io/kubevirt/tests/version_test.go:47
    Expected error:
        <*errors.StatusError | 0xc42003ad80>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "an error on the server (\"service unavailable\") has prevented the request from succeeding", Reason: "InternalError", Details: { Name: "", Group: "", Kind: "", UID: "", Causes: [ { Type: "UnexpectedServerResponse", Message: "service unavailable", Field: "", }, ], RetryAfterSeconds: 0, }, Code: 503, }, }
        an error on the server ("service unavailable") has prevented the request from succeeding
    not to have occurred
    /root/go/src/kubevirt.io/kubevirt/tests/version_test.go:49
------------------------------
• [SLOW TEST:13.513 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:13.010 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vm
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:12.849 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi preset
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:12.923 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi replica set
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• Failure [30.195 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should scale
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    to three, to two and then to zero replicas [It]
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
    Expected error:
        <*errors.StatusError | 0xc4205c67e0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
        Timeout: request did not complete within allowed duration
    not to have occurred
    /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:92
------------------------------
STEP: Create a new VirtualMachineInstance replica set
• Failure [30.194 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should scale
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    to five, to six and then to zero replicas [It]
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
    Expected error:
        <*errors.StatusError | 0xc4205c7170>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
        Timeout: request did not complete within allowed duration
    not to have occurred
    /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:92
------------------------------
STEP: Create a new VirtualMachineInstance replica set
• Failure [30.253 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should be rejected on POST if spec is invalid [It]
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:107
  Expected error:
      <*errors.StatusError | 0xc4202e8c60>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
      Timeout: request did not complete within allowed duration
  not to have occurred
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:92
------------------------------
STEP: Create a new VirtualMachineInstance replica set
• Failure [30.184 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should reject POST if validation webhook deems the spec is invalid [It]
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:128
  Expected error:
      <*errors.StatusError | 0xc4206741b0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
      Timeout: request did not complete within allowed duration
  not to have occurred
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:92
------------------------------
STEP: Create a new VirtualMachineInstance replica set
• Failure [30.229 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should update readyReplicas once VMIs are up [It]
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:157
  Expected error:
      <*errors.StatusError | 0xc420674900>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
      Timeout: request did not complete within allowed duration
  not to have occurred
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:92
------------------------------
STEP: Create a new VirtualMachineInstance replica set
• Failure [30.238 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should remove VMIs once it is marked for deletion [It]
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:169
  Expected error:
      <*errors.StatusError | 0xc420675200>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
      Timeout: request did not complete within allowed duration
  not to have occurred
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:92
------------------------------
STEP: Create a new VirtualMachineInstance replica set
• Failure [30.223 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should remove owner references on the VirtualMachineInstance if it is orphan deleted [It]
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:185
  Expected error:
      <*errors.StatusError | 0xc42003b170>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
      Timeout: request did not complete within allowed duration
  not to have occurred
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:92
------------------------------
STEP: Create a new VirtualMachineInstance replica set
• Failure [30.162 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should not scale when paused and scale when resumed [It]
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:223 Expected error: <*errors.StatusError | 0xc4202e8360>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, } Timeout: request did not complete within allowed duration not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:92 ------------------------------ STEP: Create a new VirtualMachineInstance replica set • Failure [30.219 seconds] Subresource Api /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37 Rbac Authorization /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48 with correct permissions /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:51 should be allowed to access subresource endpoint [It] /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:52 Timed out after 30.000s. Expected : Failed to equal : Succeeded /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:118 ------------------------------ ••• ------------------------------ • Failure in Spec Setup (BeforeEach) [30.325 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance attached to implicit pod network [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:364 should be able to reach /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 the Inbound VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Expected error: <*errors.StatusError | 0xc42082e510>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, } Timeout: request did not complete within allowed duration not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:126 ------------------------------ • Failure in Spec Setup (BeforeEach) [30.139 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance attached to implicit pod network [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:364 should be able to reach /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 the internet /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Expected error: <*errors.StatusError | 0xc42082ed80>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, } Timeout: request did not complete within allowed duration not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:126 ------------------------------ • Failure in Spec Setup (BeforeEach) [30.165 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance 
attached to implicit pod network [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:364 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on the same node from Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Expected error: <*errors.StatusError | 0xc42082e1b0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, } Timeout: request did not complete within allowed duration not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:126 ------------------------------ • Failure [210.256 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance attached to implicit pod network /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:364 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on a different node from Pod [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Timed out after 30.000s. Expected : Pending To satisfy at least one of these matchers: [%!s(*matchers.EqualMatcher=&{Succeeded}) %!s(*matchers.EqualMatcher=&{Failed})] /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:77 ------------------------------ level=info timestamp=2018-06-26T13:01:35.067757Z pos=utils.go:240 component=tests msg="Created virtual machine pod virt-launcher-testvmi9t6mh-tzl84" level=info timestamp=2018-06-26T13:02:08.468689Z pos=utils.go:240 component=tests msg="Pod owner ship transferred to the node virt-launcher-testvmi9t6mh-tzl84" level=info timestamp=2018-06-26T13:02:22.451655Z pos=utils.go:240 component=tests msg="VirtualMachineInstance defined." level=info timestamp=2018-06-26T13:02:49.608144Z pos=utils.go:240 component=tests msg="VirtualMachineInstance started." level=info timestamp=2018-06-26T13:03:06.561203Z pos=utils.go:240 component=tests msg="Created virtual machine pod virt-launcher-testvmi948mm-jlxcx" level=info timestamp=2018-06-26T13:03:06.561371Z pos=utils.go:240 component=tests msg="Pod owner ship transferred to the node virt-launcher-testvmi948mm-jlxcx" level=info timestamp=2018-06-26T13:03:06.562086Z pos=utils.go:240 component=tests msg="VirtualMachineInstance defined." level=info timestamp=2018-06-26T13:03:06.562229Z pos=utils.go:240 component=tests msg="VirtualMachineInstance started." level=info timestamp=2018-06-26T13:03:17.593236Z pos=vmi_networking_test.go:150 component=tests msg="[{1 \r\n$ [$ ]} {3 screen -d -m nc -klp 1500 -e echo -e \"Hello World!\"\r\n$ [$ ]} {5 echo $?\r\n0\r\n [0]}]" level=info timestamp=2018-06-26T13:03:18.092754Z pos=utils.go:240 component=tests msg="Created virtual machine pod virt-launcher-testvmi622ms-ft4pg" level=info timestamp=2018-06-26T13:03:33.079895Z pos=utils.go:240 component=tests msg="Pod owner ship transferred to the node virt-launcher-testvmi622ms-ft4pg" level=info timestamp=2018-06-26T13:03:34.660461Z pos=utils.go:240 component=tests msg="VirtualMachineInstance defined." 
level=info timestamp=2018-06-26T13:03:35.276999Z pos=utils.go:240 component=tests msg="VirtualMachineInstance started." level=info timestamp=2018-06-26T13:04:06.884601Z pos=utils.go:240 component=tests msg="Created virtual machine pod virt-launcher-testvmig5nzs-kfgqc" level=info timestamp=2018-06-26T13:04:06.884799Z pos=utils.go:240 component=tests msg="Pod owner ship transferred to the node virt-launcher-testvmig5nzs-kfgqc" level=info timestamp=2018-06-26T13:04:06.885450Z pos=utils.go:240 component=tests msg="VirtualMachineInstance defined." level=info timestamp=2018-06-26T13:04:06.885643Z pos=utils.go:240 component=tests msg="VirtualMachineInstance started." level=info timestamp=2018-06-26T13:04:18.273031Z pos=vmi_networking_test.go:150 component=tests msg="[{1 \r\n$ [$ ]} {3 screen -d -m nc -klp 1500 -e echo -e \"Hello World!\"\r\n$ [$ ]} {5 echo $?\r\n0 [0]}]" • ------------------------------ • Failure [30.023 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance attached to implicit pod network /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:364 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on a different node from Node [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Timed out after 30.000s. Expected : Pending To satisfy at least one of these matchers: [%!s(*matchers.EqualMatcher=&{Succeeded}) %!s(*matchers.EqualMatcher=&{Failed})] /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:77 ------------------------------ • Failure [4.438 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance attached to implicit pod network /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:364 with a service matching the vmi exposed /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:277 should be able to reach the vmi based on labels specified on the vmi [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:297 Expected : Failed to equal : Succeeded /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:306 ------------------------------ STEP: starting a pod which tries to reach the vmi via the defined service STEP: waiting for the pod to report a successful connection attempt level=info timestamp=2018-06-26T13:05:25.891914Z pos=vmi_networking_test.go:69 component=tests msg="++ head -n 1\n+++ nc myservice.kubevirt-test-default 1500 -i 1 -w 1\nNcat: Connection refused.\n+ x=\n+ echo ''\n+ '[' '' = 'Hello World!' 
']'\n+ echo failed\n+ exit 1\n\nfailed\n" • [SLOW TEST:5.272 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance attached to implicit pod network /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:364 with a service matching the vmi exposed /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:277 should fail to reach the vmi if an invalid servicename is used /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:308 ------------------------------ •••• ------------------------------ • Failure [30.017 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance attached to explicit pod network /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:367 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on a different node from Pod [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Timed out after 30.000s. Expected : Pending To satisfy at least one of these matchers: [%!s(*matchers.EqualMatcher=&{Succeeded}) %!s(*matchers.EqualMatcher=&{Failed})] /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:77 ------------------------------ • ------------------------------ • Failure [30.030 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance attached to explicit pod network /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:367 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on a different node from Node [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Timed out after 30.000s. 
Expected : Pending To satisfy at least one of these matchers: [%!s(*matchers.EqualMatcher=&{Succeeded}) %!s(*matchers.EqualMatcher=&{Failed})] /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:77 ------------------------------ • [SLOW TEST:5.666 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance attached to explicit pod network /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:367 with a service matching the vmi exposed /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:277 should be able to reach the vmi based on labels specified on the vmi /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:297 ------------------------------ • ------------------------------ • [SLOW TEST:5.817 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance attached to explicit pod network /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:367 with a subdomain and a headless service given /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:324 should be able to reach the vmi via its unique fully qualified domain name /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:347 ------------------------------ • [SLOW TEST:36.606 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom interface model /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:381 should expose the right device type to the guest /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:382 ------------------------------ • ------------------------------ • [SLOW TEST:46.397 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81 should have cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:82 ------------------------------ • [SLOW TEST:94.370 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81 with injected ssh-key /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:92 should have ssh-key under authorized keys /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:93 ------------------------------ • [SLOW TEST:46.389 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userData source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:118 should process provided cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:119 ------------------------------ • [SLOW TEST:43.677 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 should take user-data from k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:162 ------------------------------ • ------------------------------ • [SLOW TEST:18.206 seconds] VMIlifecycle 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:66 should start it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:72 ------------------------------ • [SLOW TEST:19.246 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:66 should attach virt-launcher to it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:78 ------------------------------ •••• ------------------------------ • [SLOW TEST:33.865 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:66 with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:166 should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 Alpine as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:22.570 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:66 with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:166 should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 Cirros as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:15.333 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:66 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:197 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:198 should retry starting the VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:199 ------------------------------ • [SLOW TEST:17.566 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:66 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:197 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:198 should log warning and proceed once the secret is there /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:229 ------------------------------ • [SLOW TEST:43.315 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:66 when virt-launcher crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:277 should be stopped and have Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:278 ------------------------------ • [SLOW TEST:24.444 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:66 when virt-handler crashes 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:300 should recover and continue management /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:301 ------------------------------ • [SLOW TEST:89.245 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:66 when virt-handler is responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:336 should indicate that a node is ready for vmis /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:337 ------------------------------ • Failure in Spec Teardown (AfterEach) [156.732 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:66 when virt-handler is not responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:372 the node controller should react [AfterEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:416 Timed out after 60.000s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:465 ------------------------------ level=info timestamp=2018-06-26T13:16:14.941596Z pos=utils.go:240 component=tests msg="Created virtual machine pod virt-launcher-testvminm4qp-gkdsd" level=info timestamp=2018-06-26T13:16:28.632848Z pos=utils.go:240 component=tests msg="Pod owner ship transferred to the node virt-launcher-testvminm4qp-gkdsd" level=info timestamp=2018-06-26T13:16:29.898754Z pos=utils.go:240 component=tests msg="VirtualMachineInstance defined." level=info timestamp=2018-06-26T13:16:29.950997Z pos=utils.go:240 component=tests msg="VirtualMachineInstance started." 
STEP: marking the node as not schedulable STEP: moving stuck vmis to failed state S [SKIPPING] [0.198 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:66 with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:469 should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 kubevirt-test-default [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Skip log query tests for JENKINS ci test environment /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:474 ------------------------------ S [SKIPPING] [0.152 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:66 with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:469 should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 kubevirt-test-alternative [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Skip log query tests for JENKINS ci test environment /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:474 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.157 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:66 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:530 should enable emulation in virt-launcher [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:542 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:538 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.129 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:66 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:530 should be reflected in domain XML [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:579 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:538 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.125 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:66 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:530 should request a TUN device but not KVM [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:623 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:538 ------------------------------ •••• ------------------------------ • Failure [180.385 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Delete a VirtualMachineInstance's Pod 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:775 should result in the VirtualMachineInstance moving to a finalized state [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:776 Timed out after 90.186s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1026 ------------------------------ STEP: Creating the VirtualMachineInstance level=info timestamp=2018-06-26T13:18:54.344797Z pos=utils.go:240 component=tests msg="Created virtual machine pod virt-launcher-testvmidxwpx-fp9s9" • Failure [180.497 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:807 with an active pod. /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:808 should result in pod being terminated [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:809 Timed out after 90.158s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1026 ------------------------------ STEP: Creating the VirtualMachineInstance level=info timestamp=2018-06-26T13:21:54.645791Z pos=utils.go:240 component=tests msg="Created virtual machine pod virt-launcher-testvmi86hxj-r67s5" • Failure [0.271 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:807 with grace period greater than 0 /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:832 should run graceful shutdown [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:833 Expected error: <*errors.errorString | 0xc420421d20>: { s: "No virt-handler on node node01 found", } No virt-handler on node node01 found not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:840 ------------------------------ • Failure [180.307 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:50 Killed VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:884 should be in Failed phase [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:885 Timed out after 90.185s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1026 ------------------------------ STEP: Starting a VirtualMachineInstance level=info timestamp=2018-06-26T13:24:55.161661Z pos=utils.go:240 component=tests msg="Created virtual machine pod virt-launcher-testvmihnv8j-pj94b" Received interrupt. Emitting contents of GinkgoWriter... --------------------------------------------------------- STEP: Starting a VirtualMachineInstance level=info timestamp=2018-06-26T13:27:55.770158Z pos=utils.go:240 component=tests msg="Created virtual machine pod virt-launcher-testvmi7mhtg-q7t5g" --------------------------------------------------------- Received interrupt. Running AfterSuite... ^C again to terminate immediately