+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release
+ [[ k8s-1.10.4-release =~ openshift-.* ]]
+ [[ k8s-1.10.4-release =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/30 10:55:39 Waiting for host: 192.168.66.101:22
2018/07/30 10:55:42 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/30 10:55:54 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 23.505298 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:e0adab45d043262096cf9e5964b3886d61c947236dad83ba9c15b82f1acb13c2

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/30 10:56:36 Waiting for host: 192.168.66.102:22
2018/07/30 10:56:39 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/30 10:56:51 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
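The bring-up above is what "make cluster-up" produces for this job: ./cluster/up.sh boots the ephemeral node01/node02 VMs, runs kubeadm init on node01, applies the flannel manifest, and joins node02. As a rough, illustrative sketch only (the exact kubeadm and flannel invocations come from the provider scripts, not from these few commands), the same two-node cluster can be brought up from a kubevirt checkout with:

  export KUBEVIRT_PROVIDER=k8s-1.10.3    # provider used by this job
  export KUBEVIRT_NUM_NODES=2            # node01 + node02, as seen above
  make cluster-up                        # wraps ./cluster/up.sh, shown in this log
  cluster/kubectl.sh get nodes           # repo-local kubectl wrapper for the ephemeral cluster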
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready      master    33s    v1.10.3
node02    NotReady             9s     v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n 'node02 NotReady 9s v1.10.3' ']'
+ echo 'Waiting for all nodes to become ready ...'
Waiting for all nodes to become ready ...
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready      master    33s    v1.10.3
node02    NotReady             9s     v1.10.3
+ kubectl_rc=0
+ sleep 10
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE    VERSION
node01    Ready     master    44s    v1.10.3
node02    Ready               20s    v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
sha256:559a45ac63f40982ccce3a1b80cb62788566f2032c847ad9c45ee993eb9c48d4
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:559a45ac63f40982ccce3a1b80cb62788566f2032c847ad9c45ee993eb9c48d4
go version go1.10 linux/amd64
go version go1.10 linux/amd64
find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory
Compiling tests...
compiled tests.test
hack/build-docker.sh build
Sending build context to Docker daemon 40.39 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> d3c656a2b485
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller
 ---> Using cache
 ---> a776f834c795
Step 4/8 : WORKDIR /home/virt-controller
 ---> Using cache
 ---> 714b6ef15e78
Step 5/8 : USER 1001
 ---> Using cache
 ---> cadd485aa8f4
Step 6/8 : COPY virt-controller /usr/bin/virt-controller
 ---> aa395b1514db
Removing intermediate container 93a216a092a2
Step 7/8 : ENTRYPOINT /usr/bin/virt-controller
 ---> Running in 1a259a581545
 ---> 0f99de546f7d
Removing intermediate container 1a259a581545
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.4-release1" '' "virt-controller" ''
 ---> Running in c0d8ebb7f3a0
 ---> bab03a432180
Removing intermediate container c0d8ebb7f3a0
Successfully built bab03a432180
Sending build context to Docker daemon 43.32 MB
Step 1/9 : FROM kubevirt/libvirt:4.2.0
 ---> 5f0bfe81a3e0
Step 2/9 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 795ad92a5172
Step 3/9 : RUN dnf -y install socat genisoimage && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107
 ---> Running in a754c38b06bb
Fedora 28 - x86_64 - Updates                     6.0 MB/s |  20 MB     00:03
Virtualization packages from Rawhide built for   197 kB/s |  57 kB     00:00
Fedora 28 - x86_64                                15 MB/s |  60 MB     00:03
Last metadata expiration check: 0:00:00 ago on Mon Jul 30 11:00:42 2018.
Dependencies resolved.
================================================================================ Package Arch Version Repository Size ================================================================================ Installing: genisoimage x86_64 1.1.11-38.fc28 fedora 315 k socat x86_64 1.7.3.2-6.fc28 fedora 297 k Installing dependencies: libusal x86_64 1.1.11-38.fc28 fedora 144 k Transaction Summary ================================================================================ Install 3 Packages Total download size: 755 k Installed size: 2.7 M Downloading Packages: (1/3): socat-1.7.3.2-6.fc28.x86_64.rpm 169 kB/s | 297 kB 00:01 (2/3): libusal-1.1.11-38.fc28.x86_64.rpm 67 kB/s | 144 kB 00:02 (3/3): genisoimage-1.1.11-38.fc28.x86_64.rpm 133 kB/s | 315 kB 00:02 -------------------------------------------------------------------------------- Total 177 kB/s | 755 kB 00:04 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Installing : libusal-1.1.11-38.fc28.x86_64 1/3 Running scriptlet: libusal-1.1.11-38.fc28.x86_64 1/3 Installing : genisoimage-1.1.11-38.fc28.x86_64 2/3 Running scriptlet: genisoimage-1.1.11-38.fc28.x86_64 2/3 Installing : socat-1.7.3.2-6.fc28.x86_64 3/3 Running scriptlet: socat-1.7.3.2-6.fc28.x86_64 3/3 Verifying : socat-1.7.3.2-6.fc28.x86_64 1/3 Verifying : genisoimage-1.1.11-38.fc28.x86_64 2/3 Verifying : libusal-1.1.11-38.fc28.x86_64 3/3 Installed: genisoimage.x86_64 1.1.11-38.fc28 socat.x86_64 1.7.3.2-6.fc28 libusal.x86_64 1.1.11-38.fc28 Complete! 23 files removed ---> 2215a8c681b3 Removing intermediate container a754c38b06bb Step 4/9 : COPY virt-launcher /usr/bin/virt-launcher ---> c4e083b4afcd Removing intermediate container 8ece960cf7b5 Step 5/9 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64 ---> Running in 0ef4d857f746  ---> 1bacea008477 Removing intermediate container 0ef4d857f746 Step 6/9 : RUN mkdir -p /usr/share/kubevirt/virt-launcher ---> Running in 4f01abb6ca9d  ---> 51a998d761e4 Removing intermediate container 4f01abb6ca9d Step 7/9 : COPY sock-connector /usr/share/kubevirt/virt-launcher/ ---> 4eedad858c22 Removing intermediate container d50b5ed5a4fc Step 8/9 : ENTRYPOINT /usr/bin/virt-launcher ---> Running in f58d2dcb3025 ---> 73dc4a22534c Removing intermediate container f58d2dcb3025 Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.4-release1" '' "virt-launcher" '' ---> Running in 10358dc9d17b ---> 7659b9e1905f Removing intermediate container 10358dc9d17b Successfully built 7659b9e1905f Sending build context to Docker daemon 41.69 MB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d3c656a2b485 Step 3/5 : COPY virt-handler /usr/bin/virt-handler ---> 1fb75b662693 Removing intermediate container effefa9160c4 Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in a39d23a33e21 ---> f816c2e450bc Removing intermediate container a39d23a33e21 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.4-release1" '' "virt-handler" '' ---> Running in e6125236dceb ---> 6c8b036c4df7 Removing intermediate container e6125236dceb Successfully built 6c8b036c4df7 Sending build context to Docker daemon 38.81 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d3c656a2b485 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 9bbbc9ec8ccc Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> 6ff95ae380a5 Step 5/8 : USER 1001 
---> Using cache ---> 0026fc44bed8 Step 6/8 : COPY virt-api /usr/bin/virt-api ---> c5070548592a Removing intermediate container 1a5ad114002c Step 7/8 : ENTRYPOINT /usr/bin/virt-api ---> Running in 574e8308f7bf ---> d57412d88a75 Removing intermediate container 574e8308f7bf Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.4-release1" '' "virt-api" '' ---> Running in 785b6637a168 ---> 7d2eb74c39ff Removing intermediate container 785b6637a168 Successfully built 7d2eb74c39ff Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:28 ---> cc510acfcd70 Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d3c656a2b485 Step 3/7 : ENV container docker ---> Using cache ---> d7ee9dd5410a Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> 0b64ac188f84 Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> c9569040fd52 Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> b0887fd36d1c Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.10.4-release1" '' ---> Using cache ---> f11f776d3657 Successfully built f11f776d3657 Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d3c656a2b485 Step 3/5 : ENV container docker ---> Using cache ---> d7ee9dd5410a Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> e96d3e3c109a Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.4-release1" '' "vm-killer" '' ---> Using cache ---> fbe038e0f646 Successfully built fbe038e0f646 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> 68f33cf86aab Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> b7f20b0c4c41 Step 3/7 : ENV container docker ---> Using cache ---> 83fc28f38982 Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> 604b0b292d97 Step 5/7 : ADD entry-point.sh / ---> Using cache ---> 78792d6f56cd Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 7f24cc15e083 Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.4-release1" '' "registry-disk-v1alpha" '' ---> Using cache ---> 9f4b71dac01b Successfully built 9f4b71dac01b Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33001/kubevirt/registry-disk-v1alpha:devel ---> 9f4b71dac01b Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> 182a374fa98d Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> 8020fed2685d Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.4-release1" '' ---> Using cache ---> 138d574edd0d Successfully built 138d574edd0d Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33001/kubevirt/registry-disk-v1alpha:devel ---> 9f4b71dac01b Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2ac2492d03e5 Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> 1e756e4005e5 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.4-release1" '' ---> Using cache ---> 4381748fa0cd Successfully built 
4381748fa0cd Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33001/kubevirt/registry-disk-v1alpha:devel ---> 9f4b71dac01b Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2ac2492d03e5 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> 40728bd1fbba Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.4-release1" '' ---> Using cache ---> a0b339dd80c3 Successfully built a0b339dd80c3 Sending build context to Docker daemon 35.59 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d3c656a2b485 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> 5704030d2070 Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 624a72b3ef33 Step 5/8 : USER 1001 ---> Using cache ---> 74157fb56326 Step 6/8 : COPY subresource-access-test /subresource-access-test ---> a341b303883e Removing intermediate container 61dcad730c5b Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in fe629ddb780a ---> 38a8cc1cf568 Removing intermediate container fe629ddb780a Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.4-release1" '' "subresource-access-test" '' ---> Running in 57f8b25e4920 ---> 4d09de4bb4fa Removing intermediate container 57f8b25e4920 Successfully built 4d09de4bb4fa Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:28 ---> cc510acfcd70 Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d3c656a2b485 Step 3/9 : ENV container docker ---> Using cache ---> d7ee9dd5410a Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> e4ae555b2a96 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> 4805ef8280c3 Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 7c1f17e56984 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> c388427c6a76 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 5da240e34c8d Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.4-release1" '' "winrmcli" '' ---> Using cache ---> a87af23d4e18 Successfully built a87af23d4e18 Sending build context to Docker daemon 36.8 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 58c7014d7bc4 Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar ---> 495fa54ed2ca Removing intermediate container 96b252fd867d Step 4/5 : ENTRYPOINT /example-hook-sidecar ---> Running in 9ba1d1e7a492 ---> ebbb8bf4dfbc Removing intermediate container 9ba1d1e7a492 Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-k8s-1.10.4-release1" '' ---> Running in 0871ae096926 ---> da14b5b3036a Removing intermediate container 0871ae096926 Successfully built da14b5b3036a hack/build-docker.sh push The push refers to a repository [localhost:33001/kubevirt/virt-controller] b120c1814293: Preparing efce1557ba86: Preparing 891e1e4ef82a: Preparing efce1557ba86: Pushed b120c1814293: Pushed 891e1e4ef82a: Pushed devel: digest: sha256:b48c733e75531f6c7ca8c51105c167f9e9b112c78bc38bfc352760d4aed301e8 size: 949 The push refers to a repository [localhost:33001/kubevirt/virt-launcher] eccc50918e17: Preparing efd919bf72e8: Preparing a4e40627811a: Preparing 593de72f0678: Preparing 
0cc207be631d: Preparing da38cf808aa5: Preparing b83399358a92: Preparing 186d8b3e4fd8: Preparing fa6154170bf5: Preparing 5eefb9960a36: Preparing 891e1e4ef82a: Preparing b83399358a92: Waiting 186d8b3e4fd8: Waiting fa6154170bf5: Waiting 5eefb9960a36: Waiting 891e1e4ef82a: Waiting da38cf808aa5: Waiting eccc50918e17: Pushed efd919bf72e8: Pushed da38cf808aa5: Pushed b83399358a92: Pushed fa6154170bf5: Pushed 186d8b3e4fd8: Pushed 891e1e4ef82a: Mounted from kubevirt/virt-controller a4e40627811a: Pushed 0cc207be631d: Pushed 593de72f0678: Pushed 5eefb9960a36: Pushed devel: digest: sha256:743b77163b5ed1d12157e0811b09b70a6c6325875d034cdd529a7336171e633f size: 2620 The push refers to a repository [localhost:33001/kubevirt/virt-handler] cbf111eaab7a: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-launcher cbf111eaab7a: Pushed devel: digest: sha256:4f7d6f475ed1cccf789d09a444c82c6757def44a3d5dd00881b54703739b6613 size: 741 The push refers to a repository [localhost:33001/kubevirt/virt-api] 411c3a467a48: Preparing 1cd776a5872d: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-handler 1cd776a5872d: Pushed 411c3a467a48: Pushed devel: digest: sha256:5920a42df11b334567359bffe41c5a03796e9674d1ef9ad69085d79bb4798509 size: 948 The push refers to a repository [localhost:33001/kubevirt/disks-images-provider] 031ac8f2509a: Preparing df0d85013ae0: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-api 031ac8f2509a: Pushed df0d85013ae0: Pushed devel: digest: sha256:ec52efa178c672d05a0ec39770ec010bc754f4517958684680604e4babf6309a size: 948 The push refers to a repository [localhost:33001/kubevirt/vm-killer] c6d1250c13a6: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/disks-images-provider c6d1250c13a6: Pushed devel: digest: sha256:195193f57d39e24151586d6df6c1d60dbb76580677932130762177a2da9527c3 size: 740 The push refers to a repository [localhost:33001/kubevirt/registry-disk-v1alpha] 3e288742e937: Preparing 7c38bbdf0880: Preparing 25edbec0eaea: Preparing 3e288742e937: Pushed 7c38bbdf0880: Pushed 25edbec0eaea: Pushed devel: digest: sha256:f443d5a5d2f67fc57cfc35393c7f101d463381ac38c2c27d8f582d7a1103a6d8 size: 948 The push refers to a repository [localhost:33001/kubevirt/cirros-registry-disk-demo] f77d824bc427: Preparing 3e288742e937: Preparing 7c38bbdf0880: Preparing 25edbec0eaea: Preparing 3e288742e937: Mounted from kubevirt/registry-disk-v1alpha 25edbec0eaea: Mounted from kubevirt/registry-disk-v1alpha 7c38bbdf0880: Mounted from kubevirt/registry-disk-v1alpha f77d824bc427: Pushed devel: digest: sha256:daa9979221790aa491859417d7ee1aecc3b07eb1667a9432b3b8fc8f02313938 size: 1160 The push refers to a repository [localhost:33001/kubevirt/fedora-cloud-registry-disk-demo] 2257d1449411: Preparing 3e288742e937: Preparing 7c38bbdf0880: Preparing 25edbec0eaea: Preparing 25edbec0eaea: Mounted from kubevirt/cirros-registry-disk-demo 7c38bbdf0880: Mounted from kubevirt/cirros-registry-disk-demo 3e288742e937: Mounted from kubevirt/cirros-registry-disk-demo 2257d1449411: Pushed devel: digest: sha256:c789cec6d77615969500c1f0435c5a14966027176d561379bcb2525437f80144 size: 1161 The push refers to a repository [localhost:33001/kubevirt/alpine-registry-disk-demo] 3578f9dc86f2: Preparing 3e288742e937: Preparing 7c38bbdf0880: Preparing 25edbec0eaea: Preparing 3e288742e937: Mounted from kubevirt/fedora-cloud-registry-disk-demo 7c38bbdf0880: Mounted from kubevirt/fedora-cloud-registry-disk-demo 25edbec0eaea: Mounted from 
kubevirt/fedora-cloud-registry-disk-demo 3578f9dc86f2: Pushed devel: digest: sha256:172532edb3351d49222645de45cc4c11218ec4dfb661d5d9ffca54fe41e6c31b size: 1160 The push refers to a repository [localhost:33001/kubevirt/subresource-access-test] ac1586f7c5b7: Preparing c3b63a8b92e2: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/vm-killer c3b63a8b92e2: Pushed ac1586f7c5b7: Pushed devel: digest: sha256:7f78bce5a56c96bbe45022e31cf5f729d587e9b87feaffa056c43379cad44a6c size: 948 The push refers to a repository [localhost:33001/kubevirt/winrmcli] 03859482cdc2: Preparing a0f8b95b0bdd: Preparing 2aa87109f2ed: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/subresource-access-test 03859482cdc2: Pushed 2aa87109f2ed: Pushed a0f8b95b0bdd: Pushed devel: digest: sha256:75f075f936a1ef922c2c312c3602b35394a6f4a3e7061bd74a0342ecef8ea66e size: 1165 The push refers to a repository [localhost:33001/kubevirt/example-hook-sidecar] 50edae79a96b: Preparing 39bae602f753: Preparing 50edae79a96b: Pushed 39bae602f753: Pushed devel: digest: sha256:2db986bc926fb3f47972126691987e2eb4efba20452092c5886db45c344e00a2 size: 740 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.4-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.4-release1 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.4-release1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-157-gc9627f2 ++ KUBEVIRT_VERSION=v0.7.0-157-gc9627f2 + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider 
kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:33001/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... + cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + 
cluster/k8s-1.10.3/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pods -l kubevirt.io No resources found + _kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export 
KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig 
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.4-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.4-release1 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.4-release1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-157-gc9627f2 ++ KUBEVIRT_VERSION=v0.7.0-157-gc9627f2 + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api 
images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:33001/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... + [[ -z k8s-1.10.4-release ]] + [[ k8s-1.10.4-release =~ .*-dev ]] + [[ k8s-1.10.4-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole.rbac.authorization.k8s.io "kubevirt.io:admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:edit" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:view" created serviceaccount "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver-auth-delegator" created rolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created role.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller-cluster-admin" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-privileged-cluster-admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:default" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt.io:default" created service "virt-api" created deployment.extensions "virt-api" created deployment.extensions "virt-controller" created daemonset.extensions "virt-handler" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstances.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancereplicasets.kubevirt.io" created 
customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancepresets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachines.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "host-path-disk-alpine" created persistentvolumeclaim "disk-custom" created persistentvolume "host-path-disk-custom" created daemonset.extensions "disks-images-provider" created serviceaccount "kubevirt-testing" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-testing-cluster-admin" created + [[ k8s-1.10.3 =~ os-* ]] + echo Done Done + namespaces=(kube-system default) + [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]] + timeout=300 + sample=30 + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'virt-api-7d79764579-8hx6p 0/1 ContainerCreating 0 2s virt-api-7d79764579-mbmt6 0/1 ContainerCreating 0 1s virt-controller-7d57d96b65-45jcf 0/1 ContainerCreating 0 1s virt-controller-7d57d96b65-6xrgz 0/1 ContainerCreating 0 2s virt-handler-cr9jx 0/1 ContainerCreating 0 2s virt-handler-nd26z 0/1 ContainerCreating 0 1s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running virt-api-7d79764579-8hx6p 0/1 ContainerCreating 0 2s virt-api-7d79764579-mbmt6 0/1 ContainerCreating 0 1s virt-controller-7d57d96b65-45jcf 0/1 ContainerCreating 0 1s virt-controller-7d57d96b65-6xrgz 0/1 ContainerCreating 0 2s virt-handler-cr9jx 0/1 ContainerCreating 0 2s virt-handler-nd26z 0/1 ContainerCreating 0 1s + sleep 30 + current_time=30 + '[' 30 -gt 300 ']' ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n '' ']' + current_time=0 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n false ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... 
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ grep false
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
false
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
disks-images-provider-98jhf        1/1       Running   0          1m
disks-images-provider-lmn24        1/1       Running   0          1m
etcd-node01                        1/1       Running   0          8m
kube-apiserver-node01              1/1       Running   0          8m
kube-controller-manager-node01     1/1       Running   0          8m
kube-dns-86f4d74b45-h87ll          3/3       Running   0          9m
kube-flannel-ds-j69fz              1/1       Running   0          9m
kube-flannel-ds-pm265              1/1       Running   0          9m
kube-proxy-848hl                   1/1       Running   0          9m
kube-proxy-gzjrt                   1/1       Running   0          9m
kube-scheduler-node01              1/1       Running   0          8m
virt-api-7d79764579-8hx6p          1/1       Running   0          1m
virt-api-7d79764579-mbmt6          1/1       Running   0          1m
virt-controller-7d57d96b65-45jcf   1/1       Running   0          1m
virt-controller-7d57d96b65-6xrgz   1/1       Running   0          1m
virt-handler-cr9jx                 1/1       Running   0          1m
virt-handler-nd26z                 1/1       Running   0          1m
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n default --no-headers
++ cluster/kubectl.sh get pods -n default --no-headers
++ grep -v Running
No resources found.
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n default
+ cluster/kubectl.sh get pods -n default
No resources found.
+ kubectl version
+ cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/junit.xml'
+ [[ k8s-1.10.4-release =~ windows.* ]]
+ FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/junit.xml'
+ make functest
hack/dockerized "hack/build-func-tests.sh"
sha256:559a45ac63f40982ccce3a1b80cb62788566f2032c847ad9c45ee993eb9c48d4
go version go1.10 linux/amd64
Waiting for rsyncd to be ready.
go version go1.10 linux/amd64
Compiling tests...
compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1532948809 Will run 151 of 151 specs • [SLOW TEST:144.676 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting and stopping the same VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91 should success multiple times /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92 ------------------------------ • [SLOW TEST:15.605 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112 should not modify the spec on status update /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113 ------------------------------ • [SLOW TEST:32.042 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting multiple VMIs /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130 should success /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131 ------------------------------ • [SLOW TEST:5.084 seconds] Subresource Api /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37 Rbac Authorization /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48 with correct permissions /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:51 should be allowed to access subresource endpoint /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:52 ------------------------------ •••2018/07/30 07:11:10 read closing down: EOF 2018/07/30 07:12:01 read closing down: EOF Pod name: disks-images-provider-98jhf Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-lmn24 Pod phase: Running copy all images to host mount directory Pod name: virt-api-7d79764579-8hx6p Pod phase: Running level=info timestamp=2018-07-30T11:11:27.112895Z pos=subresource.go:75 component=virt-api msg="Websocket connection upgraded" level=info timestamp=2018-07-30T11:11:28.351426Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:11:30 http: TLS handshake error from 10.244.1.1:47056: EOF level=info timestamp=2018-07-30T11:11:33.479568Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T11:11:33.481035Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:11:40 http: TLS handshake error from 10.244.1.1:47062: EOF level=info timestamp=2018-07-30T11:11:48.729841Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 11:11:50 http: TLS handshake error from 10.244.1.1:47068: EOF level=info timestamp=2018-07-30T11:11:51.700309Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 
statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:11:58.335431Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:12:00 http: TLS handshake error from 10.244.1.1:47076: EOF level=error timestamp=2018-07-30T11:12:01.297171Z pos=subresource.go:97 component=virt-api reason="websocket: close 1006 (abnormal closure): unexpected EOF" msg="error ecountered reading from websocket stream" level=error timestamp=2018-07-30T11:12:01.297307Z pos=subresource.go:106 component=virt-api reason="websocket: close 1006 (abnormal closure): unexpected EOF" msg="Error in websocket proxy" 2018/07/30 11:12:01 http: response.WriteHeader on hijacked connection level=info timestamp=2018-07-30T11:12:01.297471Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2/namespaces/kubevirt-test-default/virtualmachineinstances/testvmi8kb49/console proto=HTTP/1.1 statusCode=500 contentLength=0 Pod name: virt-api-7d79764579-mbmt6 Pod phase: Running 2018/07/30 11:09:40 http: TLS handshake error from 10.244.0.1:55590: EOF 2018/07/30 11:09:50 http: TLS handshake error from 10.244.0.1:55614: EOF 2018/07/30 11:10:00 http: TLS handshake error from 10.244.0.1:55638: EOF 2018/07/30 11:10:10 http: TLS handshake error from 10.244.0.1:55664: EOF 2018/07/30 11:10:20 http: TLS handshake error from 10.244.0.1:55688: EOF 2018/07/30 11:10:30 http: TLS handshake error from 10.244.0.1:55712: EOF 2018/07/30 11:10:40 http: TLS handshake error from 10.244.0.1:55742: EOF 2018/07/30 11:10:50 http: TLS handshake error from 10.244.0.1:55766: EOF 2018/07/30 11:11:00 http: TLS handshake error from 10.244.0.1:55790: EOF 2018/07/30 11:11:10 http: TLS handshake error from 10.244.0.1:55814: EOF 2018/07/30 11:11:20 http: TLS handshake error from 10.244.0.1:55838: EOF 2018/07/30 11:11:30 http: TLS handshake error from 10.244.0.1:55868: EOF 2018/07/30 11:11:40 http: TLS handshake error from 10.244.0.1:55892: EOF 2018/07/30 11:11:50 http: TLS handshake error from 10.244.0.1:55916: EOF 2018/07/30 11:12:00 http: TLS handshake error from 10.244.0.1:55940: EOF Pod name: virt-controller-7d57d96b65-45jcf Pod phase: Running level=info timestamp=2018-07-30T11:05:25.224784Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-6xrgz Pod phase: Running level=info timestamp=2018-07-30T11:09:30.606737Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimcpwb kind= uid=07c2b16d-93e9-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:09:30.629044Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvminkgpm kind= uid=07c4a222-93e9-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:09:30.629126Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvminkgpm kind= uid=07c4a222-93e9-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:09:30.661675Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmipw9vm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing 
VirtualMachineInstance kubevirt-test-default/testvmipw9vm" level=info timestamp=2018-07-30T11:09:30.665181Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimw69m kind= uid=07c8c436-93e9-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:09:30.665358Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimw69m kind= uid=07c8c436-93e9-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:09:30.798634Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmimcpwb\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmimcpwb" level=info timestamp=2018-07-30T11:09:31.190558Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmipw9vm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmipw9vm" level=info timestamp=2018-07-30T11:09:31.988630Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminkgpm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminkgpm" level=info timestamp=2018-07-30T11:09:32.186865Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmimcpwb\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmimcpwb" level=info timestamp=2018-07-30T11:09:32.787341Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmimw69m\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmimw69m" level=info timestamp=2018-07-30T11:10:18.474937Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvminrx4k kind= uid=244b5367-93e9-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:10:18.476027Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvminrx4k kind= uid=244b5367-93e9-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:11:10.530617Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8kb49 kind= uid=43526098-93e9-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:11:10.530745Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8kb49 kind= uid=43526098-93e9-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-handler-cr9jx Pod phase: Running level=error timestamp=2018-07-30T11:10:19.517468Z pos=vm.go:424 component=virt-handler 
namespace=kubevirt-test-default name=testvmimcpwb kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance failed." level=info timestamp=2018-07-30T11:10:19.517498Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmimcpwb" level=info timestamp=2018-07-30T11:10:25.293038Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmimcpwb kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-30T11:10:25.293140Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmimcpwb, existing: false\n" level=info timestamp=2018-07-30T11:10:25.293159Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:10:25.293192Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmimcpwb kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:10:25.294025Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmimcpwb kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:10:25.294143Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmimcpwb, existing: false\n" level=info timestamp=2018-07-30T11:10:25.294171Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:10:25.294253Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmimcpwb kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:10:25.294482Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmimcpwb kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:10:29.758075Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmimcpwb, existing: false\n" level=info timestamp=2018-07-30T11:10:29.758155Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:10:29.758234Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmimcpwb kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:10:29.758314Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmimcpwb kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-nd26z Pod phase: Running level=info timestamp=2018-07-30T11:11:27.015990Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmi8kb49 kind=Domain uid=43526098-93e9-11e8-b2fe-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-30T11:11:27.040391Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-30T11:11:27.042139Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi8kb49 kind= uid=43526098-93e9-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T11:11:27.042246Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi8kb49, existing: true\n" level=info timestamp=2018-07-30T11:11:27.042265Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n" level=info timestamp=2018-07-30T11:11:27.042285Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:11:27.042301Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:11:27.042339Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmi8kb49 kind= uid=43526098-93e9-11e8-b2fe-525500d15501 msg="No update processing required" level=info timestamp=2018-07-30T11:11:27.051324Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi8kb49 kind= uid=43526098-93e9-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:11:27.051363Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi8kb49, existing: true\n" level=info timestamp=2018-07-30T11:11:27.051379Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T11:11:27.051398Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:11:27.051414Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:11:27.051449Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmi8kb49 kind= uid=43526098-93e9-11e8-b2fe-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T11:11:27.059578Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi8kb49 kind= uid=43526098-93e9-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." Pod name: subresource-access-tester2p9cp Pod phase: Succeeded Subresource Test Endpoint returned 200 OK Pod name: virt-launcher-testvmi8kb49-54kql Pod phase: Running Pod name: virt-launcher-testvminrx4k-ffmzk Pod phase: Running ------------------------------ • Failure [103.641 seconds] Slirp /root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:39 should be able to /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 VirtualMachineInstance with slirp interface [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Expected error: : { Err: { s: "command terminated with exit code 126", }, Code: 126, } command terminated with exit code 126 not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:88 ------------------------------ level=info timestamp=2018-07-30T11:10:18.414613Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvminrx4k kind=VirtualMachineInstance uid=244b5367-93e9-11e8-b2fe-525500d15501 msg="Created virtual machine pod virt-launcher-testvminrx4k-ffmzk" level=info timestamp=2018-07-30T11:10:32.622589Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvminrx4k kind=VirtualMachineInstance uid=244b5367-93e9-11e8-b2fe-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvminrx4k-ffmzk" level=info timestamp=2018-07-30T11:10:33.556874Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvminrx4k kind=VirtualMachineInstance uid=244b5367-93e9-11e8-b2fe-525500d15501 msg="VirtualMachineInstance defined." 
level=info timestamp=2018-07-30T11:10:33.562266Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvminrx4k kind=VirtualMachineInstance uid=244b5367-93e9-11e8-b2fe-525500d15501 msg="VirtualMachineInstance started." level=info timestamp=2018-07-30T11:11:10.455370Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmi8kb49 kind=VirtualMachineInstance uid=43526098-93e9-11e8-b2fe-525500d15501 msg="Created virtual machine pod virt-launcher-testvmi8kb49-54kql" level=info timestamp=2018-07-30T11:11:25.920358Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmi8kb49 kind=VirtualMachineInstance uid=43526098-93e9-11e8-b2fe-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmi8kb49-54kql" level=info timestamp=2018-07-30T11:11:26.883255Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmi8kb49 kind=VirtualMachineInstance uid=43526098-93e9-11e8-b2fe-525500d15501 msg="VirtualMachineInstance defined." level=info timestamp=2018-07-30T11:11:26.889349Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmi8kb49 kind=VirtualMachineInstance uid=43526098-93e9-11e8-b2fe-525500d15501 msg="VirtualMachineInstance started." STEP: have containerPort in the pod manifest STEP: start the virtual machine with slirp interface level=info timestamp=2018-07-30T11:12:01.323229Z pos=vmi_slirp_interface_test.go:87 component=tests msg= Pod name: disks-images-provider-98jhf Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-lmn24 Pod phase: Running copy all images to host mount directory Pod name: virt-api-7d79764579-8hx6p Pod phase: Running level=info timestamp=2018-07-30T11:11:27.112895Z pos=subresource.go:75 component=virt-api msg="Websocket connection upgraded" level=info timestamp=2018-07-30T11:11:28.351426Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:11:30 http: TLS handshake error from 10.244.1.1:47056: EOF level=info timestamp=2018-07-30T11:11:33.479568Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T11:11:33.481035Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:11:40 http: TLS handshake error from 10.244.1.1:47062: EOF level=info timestamp=2018-07-30T11:11:48.729841Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 11:11:50 http: TLS handshake error from 10.244.1.1:47068: EOF level=info timestamp=2018-07-30T11:11:51.700309Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:11:58.335431Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:12:00 http: TLS handshake error from 10.244.1.1:47076: EOF level=error timestamp=2018-07-30T11:12:01.297171Z pos=subresource.go:97 component=virt-api reason="websocket: close 1006 (abnormal closure): unexpected EOF" msg="error ecountered reading from websocket stream" 
level=error timestamp=2018-07-30T11:12:01.297307Z pos=subresource.go:106 component=virt-api reason="websocket: close 1006 (abnormal closure): unexpected EOF" msg="Error in websocket proxy" 2018/07/30 11:12:01 http: response.WriteHeader on hijacked connection level=info timestamp=2018-07-30T11:12:01.297471Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2/namespaces/kubevirt-test-default/virtualmachineinstances/testvmi8kb49/console proto=HTTP/1.1 statusCode=500 contentLength=0 Pod name: virt-api-7d79764579-mbmt6 Pod phase: Running 2018/07/30 11:09:40 http: TLS handshake error from 10.244.0.1:55590: EOF 2018/07/30 11:09:50 http: TLS handshake error from 10.244.0.1:55614: EOF 2018/07/30 11:10:00 http: TLS handshake error from 10.244.0.1:55638: EOF 2018/07/30 11:10:10 http: TLS handshake error from 10.244.0.1:55664: EOF 2018/07/30 11:10:20 http: TLS handshake error from 10.244.0.1:55688: EOF 2018/07/30 11:10:30 http: TLS handshake error from 10.244.0.1:55712: EOF 2018/07/30 11:10:40 http: TLS handshake error from 10.244.0.1:55742: EOF 2018/07/30 11:10:50 http: TLS handshake error from 10.244.0.1:55766: EOF 2018/07/30 11:11:00 http: TLS handshake error from 10.244.0.1:55790: EOF 2018/07/30 11:11:10 http: TLS handshake error from 10.244.0.1:55814: EOF 2018/07/30 11:11:20 http: TLS handshake error from 10.244.0.1:55838: EOF 2018/07/30 11:11:30 http: TLS handshake error from 10.244.0.1:55868: EOF 2018/07/30 11:11:40 http: TLS handshake error from 10.244.0.1:55892: EOF 2018/07/30 11:11:50 http: TLS handshake error from 10.244.0.1:55916: EOF 2018/07/30 11:12:00 http: TLS handshake error from 10.244.0.1:55940: EOF Pod name: virt-controller-7d57d96b65-45jcf Pod phase: Running level=info timestamp=2018-07-30T11:05:25.224784Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-6xrgz Pod phase: Running level=info timestamp=2018-07-30T11:09:30.606737Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimcpwb kind= uid=07c2b16d-93e9-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:09:30.629044Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvminkgpm kind= uid=07c4a222-93e9-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:09:30.629126Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvminkgpm kind= uid=07c4a222-93e9-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:09:30.661675Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmipw9vm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmipw9vm" level=info timestamp=2018-07-30T11:09:30.665181Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimw69m kind= uid=07c8c436-93e9-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:09:30.665358Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimw69m kind= 
uid=07c8c436-93e9-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:09:30.798634Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmimcpwb\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmimcpwb" level=info timestamp=2018-07-30T11:09:31.190558Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmipw9vm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmipw9vm" level=info timestamp=2018-07-30T11:09:31.988630Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminkgpm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminkgpm" level=info timestamp=2018-07-30T11:09:32.186865Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmimcpwb\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmimcpwb" level=info timestamp=2018-07-30T11:09:32.787341Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmimw69m\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmimw69m" level=info timestamp=2018-07-30T11:10:18.474937Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvminrx4k kind= uid=244b5367-93e9-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:10:18.476027Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvminrx4k kind= uid=244b5367-93e9-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:11:10.530617Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8kb49 kind= uid=43526098-93e9-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:11:10.530745Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8kb49 kind= uid=43526098-93e9-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-handler-cr9jx Pod phase: Running level=error timestamp=2018-07-30T11:10:19.517468Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmimcpwb kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance failed." 
level=info timestamp=2018-07-30T11:10:19.517498Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmimcpwb" level=info timestamp=2018-07-30T11:10:25.293038Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmimcpwb kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-30T11:10:25.293140Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmimcpwb, existing: false\n" level=info timestamp=2018-07-30T11:10:25.293159Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:10:25.293192Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmimcpwb kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:10:25.294025Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmimcpwb kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:10:25.294143Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmimcpwb, existing: false\n" level=info timestamp=2018-07-30T11:10:25.294171Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:10:25.294253Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmimcpwb kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:10:25.294482Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmimcpwb kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:10:29.758075Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmimcpwb, existing: false\n" level=info timestamp=2018-07-30T11:10:29.758155Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:10:29.758234Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmimcpwb kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:10:29.758314Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmimcpwb kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-nd26z Pod phase: Running level=info timestamp=2018-07-30T11:11:27.015990Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmi8kb49 kind=Domain uid=43526098-93e9-11e8-b2fe-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-30T11:11:27.040391Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-30T11:11:27.042139Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi8kb49 kind= uid=43526098-93e9-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T11:11:27.042246Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi8kb49, existing: true\n" level=info timestamp=2018-07-30T11:11:27.042265Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n" level=info timestamp=2018-07-30T11:11:27.042285Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:11:27.042301Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:11:27.042339Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmi8kb49 kind= uid=43526098-93e9-11e8-b2fe-525500d15501 msg="No update processing required" level=info timestamp=2018-07-30T11:11:27.051324Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi8kb49 kind= uid=43526098-93e9-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:11:27.051363Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi8kb49, existing: true\n" level=info timestamp=2018-07-30T11:11:27.051379Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T11:11:27.051398Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:11:27.051414Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:11:27.051449Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmi8kb49 kind= uid=43526098-93e9-11e8-b2fe-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T11:11:27.059578Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi8kb49 kind= uid=43526098-93e9-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." 
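
Note on the Slirp failure above (and the near-identical one with a custom MAC address that follows): the error does not originate in the cluster components dumped here. The command the test execs inside the virt-launcher pod terminates with exit code 126, which by shell convention means the command was found but could not be executed (for example a missing execute bit or an incompatible binary). A minimal sketch of how such an exit code surfaces in Go, assuming a stand-in command rather than the test's real exec call:

package main

import (
        "errors"
        "fmt"
        "os/exec"
)

func main() {
        // Stand-in for the command the test runs inside the pod; "exit 126"
        // only reproduces the status code, not the underlying cause.
        cmd := exec.Command("/bin/sh", "-c", "exit 126")
        if err := cmd.Run(); err != nil {
                var ee *exec.ExitError
                if errors.As(err, &ee) {
                        fmt.Println("command terminated with exit code", ee.ExitCode())
                }
        }
}
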
Pod name: subresource-access-tester2p9cp Pod phase: Succeeded Subresource Test Endpoint returned 200 OK Pod name: virt-launcher-testvmi8kb49-54kql Pod phase: Running Pod name: virt-launcher-testvminrx4k-ffmzk Pod phase: Running • Failure [0.846 seconds] Slirp /root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:39 should be able to /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 VirtualMachineInstance with slirp interface with custom MAC address [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Expected error: : { Err: { s: "command terminated with exit code 126", }, Code: 126, } command terminated with exit code 126 not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:88 ------------------------------ STEP: have containerPort in the pod manifest STEP: start the virtual machine with slirp interface level=info timestamp=2018-07-30T11:12:02.169721Z pos=vmi_slirp_interface_test.go:87 component=tests msg= •••••••••••2018/07/30 07:12:55 read closing down: EOF Service cluster-ip-vmi successfully exposed for virtualmachineinstance testvmi4xvv4 ------------------------------ • [SLOW TEST:52.965 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:68 Should expose a Cluster IP service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:71 ------------------------------ Service cluster-ip-target-vmi successfully exposed for virtualmachineinstance testvmi4xvv4 •Service node-port-vmi successfully exposed for virtualmachineinstance testvmi4xvv4 ------------------------------ • [SLOW TEST:8.104 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose NodePort service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:124 Should expose a NodePort service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:129 ------------------------------ 2018/07/30 07:14:01 read closing down: EOF Service cluster-ip-udp-vmi successfully exposed for virtualmachineinstance testvmi66fc9 • [SLOW TEST:55.404 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VMI /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:166 Expose ClusterIP UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:173 Should expose a ClusterIP service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:177 ------------------------------ Service node-port-udp-vmi successfully exposed for virtualmachineinstance testvmi66fc9 • [SLOW TEST:8.307 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VMI /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:166 Expose NodePort UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:205 Should expose a NodePort service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:210 ------------------------------ 2018/07/30 07:15:05 read closing down: EOF 2018/07/30 07:15:15 read closing down: EOF Service cluster-ip-vmirs successfully exposed for vmirs replicaset4rzqd • [SLOW TEST:65.892 seconds] Expose 
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VMI replica set /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:253 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:286 Should create a ClusterIP service on VMRS and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:290 ------------------------------ Service cluster-ip-vm successfully exposed for virtualmachine testvmifl6mz VM testvmifl6mz was scheduled to start 2018/07/30 07:16:09 read closing down: EOF • [SLOW TEST:54.159 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on an VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:318 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:362 Connect to ClusterIP services that was set when VM was offline /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:363 ------------------------------ ••• ------------------------------ • [SLOW TEST:16.230 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should update VirtualMachine once VMIs are up /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:195 ------------------------------ • [SLOW TEST:6.408 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should remove VirtualMachineInstance once the VMI is marked for deletion /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:204 ------------------------------ • Pod name: disks-images-provider-98jhf Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-lmn24 Pod phase: Running copy all images to host mount directory Pod name: virt-api-7d79764579-8hx6p Pod phase: Running 2018/07/30 11:20:40 http: TLS handshake error from 10.244.1.1:47410: EOF level=info timestamp=2018-07-30T11:20:49.293852Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 11:20:50 http: TLS handshake error from 10.244.1.1:47416: EOF level=info timestamp=2018-07-30T11:20:52.203131Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:20:58.315023Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:21:00 http: TLS handshake error from 10.244.1.1:47422: EOF 2018/07/30 11:21:10 http: TLS handshake error from 10.244.1.1:47428: EOF level=info timestamp=2018-07-30T11:21:19.319008Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 11:21:20 http: TLS handshake error from 10.244.1.1:47434: EOF level=info timestamp=2018-07-30T11:21:22.227133Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:21:28.314809Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:21:30 http: 
TLS handshake error from 10.244.1.1:47440: EOF level=info timestamp=2018-07-30T11:21:33.447047Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T11:21:33.448510Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:21:40 http: TLS handshake error from 10.244.1.1:47446: EOF Pod name: virt-api-7d79764579-mbmt6 Pod phase: Running 2018/07/30 11:19:20 http: TLS handshake error from 10.244.0.1:57042: EOF 2018/07/30 11:19:30 http: TLS handshake error from 10.244.0.1:57066: EOF 2018/07/30 11:19:40 http: TLS handshake error from 10.244.0.1:57090: EOF 2018/07/30 11:19:50 http: TLS handshake error from 10.244.0.1:57114: EOF 2018/07/30 11:20:00 http: TLS handshake error from 10.244.0.1:57138: EOF 2018/07/30 11:20:10 http: TLS handshake error from 10.244.0.1:57162: EOF 2018/07/30 11:20:20 http: TLS handshake error from 10.244.0.1:57186: EOF 2018/07/30 11:20:30 http: TLS handshake error from 10.244.0.1:57210: EOF 2018/07/30 11:20:40 http: TLS handshake error from 10.244.0.1:57234: EOF 2018/07/30 11:20:50 http: TLS handshake error from 10.244.0.1:57258: EOF 2018/07/30 11:21:00 http: TLS handshake error from 10.244.0.1:57282: EOF 2018/07/30 11:21:10 http: TLS handshake error from 10.244.0.1:57306: EOF 2018/07/30 11:21:20 http: TLS handshake error from 10.244.0.1:57330: EOF 2018/07/30 11:21:30 http: TLS handshake error from 10.244.0.1:57354: EOF 2018/07/30 11:21:40 http: TLS handshake error from 10.244.0.1:57378: EOF Pod name: virt-controller-7d57d96b65-45jcf Pod phase: Running level=info timestamp=2018-07-30T11:05:25.224784Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-6xrgz Pod phase: Running level=info timestamp=2018-07-30T11:16:40.332366Z pos=vm.go:459 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi29smc kind= uid=07e64d62-93ea-11e8-b2fe-525500d15501 msg="Looking for VirtualMachineInstance Ref" level=info timestamp=2018-07-30T11:16:40.332437Z pos=vm.go:470 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi29smc kind= uid=07e64d62-93ea-11e8-b2fe-525500d15501 msg="VirtualMachineInstance created bacause testvmi29smc was added." 
level=info timestamp=2018-07-30T11:16:40.332533Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi29smc kind= uid=07e27ae7-93ea-11e8-b2fe-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:16:40.332669Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi29smc kind= uid=07e27ae7-93ea-11e8-b2fe-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:16:40.333291Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi29smc kind= uid=07e64d62-93ea-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:16:40.333332Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi29smc kind= uid=07e64d62-93ea-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:16:40.337675Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi29smc kind= uid=07e27ae7-93ea-11e8-b2fe-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:16:40.337712Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi29smc kind= uid=07e27ae7-93ea-11e8-b2fe-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:16:40.341544Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi29smc kind= uid=07e27ae7-93ea-11e8-b2fe-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:16:40.341617Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi29smc kind= uid=07e27ae7-93ea-11e8-b2fe-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:16:40.355582Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi29smc kind= uid=07e27ae7-93ea-11e8-b2fe-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:16:40.355630Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi29smc kind= uid=07e27ae7-93ea-11e8-b2fe-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:16:40.363924Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi29smc\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi29smc" level=info timestamp=2018-07-30T11:16:40.377834Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi29smc kind= uid=07e27ae7-93ea-11e8-b2fe-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:16:40.377879Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi29smc kind= uid=07e27ae7-93ea-11e8-b2fe-525500d15501 msg="Creating or the VirtualMachineInstance: true" Pod name: virt-handler-cr9jx Pod phase: Running level=error timestamp=2018-07-30T11:16:36.634156Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmi4d6nvm22zl kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance 
failed." level=info timestamp=2018-07-30T11:16:36.634188Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmi4d6nvm22zl" level=info timestamp=2018-07-30T11:16:40.292985Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmi4d6nvm22zl kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-30T11:16:40.293152Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4d6nvm22zl, existing: false\n" level=info timestamp=2018-07-30T11:16:40.293177Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:16:40.293215Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi4d6nvm22zl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:16:40.294040Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi4d6nvm22zl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:16:40.294145Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4d6nvm22zl, existing: false\n" level=info timestamp=2018-07-30T11:16:40.294167Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:16:40.294220Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi4d6nvm22zl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:16:40.294295Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi4d6nvm22zl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:16:57.114451Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4d6nvm22zl, existing: false\n" level=info timestamp=2018-07-30T11:16:57.114533Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:16:57.114599Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi4d6nvm22zl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:16:57.114681Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi4d6nvm22zl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-nd26z Pod phase: Running level=error timestamp=2018-07-30T11:17:22.142227Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmi4lnwg kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance failed." 
level=info timestamp=2018-07-30T11:17:22.142260Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmi4lnwg" level=info timestamp=2018-07-30T11:17:29.094835Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmi4lnwg kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-30T11:17:29.095003Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4lnwg, existing: false\n" level=info timestamp=2018-07-30T11:17:29.095030Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:17:29.095090Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi4lnwg kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:17:29.095349Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi4lnwg kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:17:29.095611Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4lnwg, existing: false\n" level=info timestamp=2018-07-30T11:17:29.095646Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:17:29.095717Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi4lnwg kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:17:29.095782Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi4lnwg kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:17:42.622515Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4lnwg, existing: false\n" level=info timestamp=2018-07-30T11:17:42.622595Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:17:42.622724Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi4lnwg kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:17:42.622800Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi4lnwg kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
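
Note on the repeated "Operation cannot be fulfilled ... the object has been modified" messages from virt-controller above: these are ordinary optimistic-concurrency conflicts. The controller attempted an update with a stale resourceVersion and reacts by re-enqueuing the VirtualMachineInstance and retrying against the latest object, so by themselves they are not failures. A minimal sketch of the same pattern using client-go's retry helper; the get/mutate/update closure below is a hypothetical placeholder, not KubeVirt's controller code:

package main

import (
        "fmt"

        "k8s.io/client-go/util/retry"
)

func main() {
        // RetryOnConflict re-runs the closure whenever the update returns a
        // 409 Conflict, which is what the controller's re-enqueue achieves
        // through its work queue.
        err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
                // 1. GET the latest VirtualMachineInstance
                // 2. apply the desired change
                // 3. UPDATE and return its error so conflicts trigger a retry
                return nil // placeholder: no API calls in this sketch
        })
        fmt.Println("update result:", err)
}
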
Pod name: virt-launcher-testvmi29smc-c4pzf Pod phase: Running panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x488cf3] goroutine 21 [running]: io.copyBuffer(0x142d000, 0xc4200b4008, 0x0, 0x0, 0xc421906000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:400 +0x143 io.Copy(0x142d000, 0xc4200b4008, 0x0, 0x0, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:362 +0x5a kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor.func1(0xc42002e420, 0xc4200c4540) /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:264 +0xb4 created by kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:261 +0x15f ------------------------------ • Failure [301.459 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should recreate VirtualMachineInstance if it gets deleted [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:245 Timed out after 300.000s. Expected : false to be true /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:157 ------------------------------ STEP: Starting the VirtualMachineInstance STEP: VMI has the running condition • [SLOW TEST:100.719 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should recreate VirtualMachineInstance if the VirtualMachineInstance's pod gets deleted /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:265 ------------------------------ • [SLOW TEST:105.704 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should stop VirtualMachineInstance if running set to false /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:325 ------------------------------ • [SLOW TEST:426.416 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should start and stop VirtualMachineInstance multiple times /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:333 ------------------------------ • [SLOW TEST:125.666 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should not update the VirtualMachineInstance spec if Running /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:346 ------------------------------ Pod name: disks-images-provider-98jhf Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-lmn24 Pod phase: Running copy all images to host mount directory Pod name: virt-api-7d79764579-8hx6p Pod phase: Running level=info timestamp=2018-07-30T11:37:58.332771Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:38:00 http: TLS handshake error from 10.244.1.1:48034: EOF 2018/07/30 11:38:10 http: TLS handshake error from 10.244.1.1:48040: EOF 2018/07/30 11:38:20 http: TLS handshake error from 10.244.1.1:48046: EOF level=info timestamp=2018-07-30T11:38:20.426751Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- 
method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:38:23.230508Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:38:28.358628Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:38:30 http: TLS handshake error from 10.244.1.1:48052: EOF 2018/07/30 11:38:40 http: TLS handshake error from 10.244.1.1:48058: EOF 2018/07/30 11:38:50 http: TLS handshake error from 10.244.1.1:48064: EOF level=info timestamp=2018-07-30T11:38:50.453410Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:38:53.259033Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:38:58.325283Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:39:00 http: TLS handshake error from 10.244.1.1:48070: EOF 2018/07/30 11:39:10 http: TLS handshake error from 10.244.1.1:48076: EOF Pod name: virt-api-7d79764579-mbmt6 Pod phase: Running 2018/07/30 11:36:50 http: TLS handshake error from 10.244.0.1:59562: EOF 2018/07/30 11:37:00 http: TLS handshake error from 10.244.0.1:59586: EOF 2018/07/30 11:37:10 http: TLS handshake error from 10.244.0.1:59610: EOF 2018/07/30 11:37:20 http: TLS handshake error from 10.244.0.1:59634: EOF 2018/07/30 11:37:30 http: TLS handshake error from 10.244.0.1:59658: EOF 2018/07/30 11:37:40 http: TLS handshake error from 10.244.0.1:59682: EOF 2018/07/30 11:37:50 http: TLS handshake error from 10.244.0.1:59706: EOF 2018/07/30 11:38:00 http: TLS handshake error from 10.244.0.1:59730: EOF 2018/07/30 11:38:10 http: TLS handshake error from 10.244.0.1:59754: EOF 2018/07/30 11:38:20 http: TLS handshake error from 10.244.0.1:59778: EOF 2018/07/30 11:38:30 http: TLS handshake error from 10.244.0.1:59802: EOF 2018/07/30 11:38:40 http: TLS handshake error from 10.244.0.1:59826: EOF 2018/07/30 11:38:50 http: TLS handshake error from 10.244.0.1:59850: EOF 2018/07/30 11:39:00 http: TLS handshake error from 10.244.0.1:59874: EOF 2018/07/30 11:39:10 http: TLS handshake error from 10.244.0.1:59898: EOF Pod name: virt-controller-7d57d96b65-45jcf Pod phase: Running level=info timestamp=2018-07-30T11:05:25.224784Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-6xrgz Pod phase: Running level=info timestamp=2018-07-30T11:34:20.298553Z pos=vm.go:459 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimdkvs kind= uid=7fb08de9-93ec-11e8-b2fe-525500d15501 msg="Looking for VirtualMachineInstance Ref" level=info timestamp=2018-07-30T11:34:20.298593Z pos=vm.go:470 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimdkvs kind= uid=7fb08de9-93ec-11e8-b2fe-525500d15501 msg="VirtualMachineInstance created bacause testvmimdkvs was added." 
level=info timestamp=2018-07-30T11:34:20.298623Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimdkvs kind= uid=7fae13e5-93ec-11e8-b2fe-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:34:20.298647Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimdkvs kind= uid=7fae13e5-93ec-11e8-b2fe-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:34:20.298723Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimdkvs kind= uid=7fb08de9-93ec-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:34:20.298833Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimdkvs kind= uid=7fb08de9-93ec-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:34:20.304389Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimdkvs kind= uid=7fae13e5-93ec-11e8-b2fe-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:34:20.304422Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimdkvs kind= uid=7fae13e5-93ec-11e8-b2fe-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:34:20.305802Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimdkvs kind= uid=7fae13e5-93ec-11e8-b2fe-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:34:20.305831Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimdkvs kind= uid=7fae13e5-93ec-11e8-b2fe-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:34:20.323548Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimdkvs kind= uid=7fae13e5-93ec-11e8-b2fe-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:34:20.323603Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimdkvs kind= uid=7fae13e5-93ec-11e8-b2fe-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:34:20.330331Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmimdkvs\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmimdkvs" level=info timestamp=2018-07-30T11:34:20.336683Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimdkvs kind= uid=7fae13e5-93ec-11e8-b2fe-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:34:20.336988Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmimdkvs kind= uid=7fae13e5-93ec-11e8-b2fe-525500d15501 msg="Creating or the VirtualMachineInstance: true" Pod name: virt-handler-cr9jx Pod phase: Running level=error timestamp=2018-07-30T11:16:36.634156Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmi4d6nvm22zl kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance 
failed." level=info timestamp=2018-07-30T11:16:36.634188Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmi4d6nvm22zl" level=info timestamp=2018-07-30T11:16:40.292985Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmi4d6nvm22zl kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-30T11:16:40.293152Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4d6nvm22zl, existing: false\n" level=info timestamp=2018-07-30T11:16:40.293177Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:16:40.293215Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi4d6nvm22zl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:16:40.294040Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi4d6nvm22zl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:16:40.294145Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4d6nvm22zl, existing: false\n" level=info timestamp=2018-07-30T11:16:40.294167Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:16:40.294220Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi4d6nvm22zl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:16:40.294295Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi4d6nvm22zl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:16:57.114451Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4d6nvm22zl, existing: false\n" level=info timestamp=2018-07-30T11:16:57.114533Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:16:57.114599Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi4d6nvm22zl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:16:57.114681Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi4d6nvm22zl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-nd26z Pod phase: Running level=error timestamp=2018-07-30T11:35:10.636394Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmiwcv68 kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance failed." 
level=info timestamp=2018-07-30T11:35:10.636434Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmiwcv68" level=info timestamp=2018-07-30T11:35:14.094912Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmiwcv68 kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-30T11:35:14.095035Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiwcv68, existing: false\n" level=info timestamp=2018-07-30T11:35:14.095086Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:35:14.095134Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwcv68 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:35:14.095870Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiwcv68 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:35:14.095963Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiwcv68, existing: false\n" level=info timestamp=2018-07-30T11:35:14.095993Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:35:14.096031Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwcv68 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:35:14.096102Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiwcv68 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:35:31.117189Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiwcv68, existing: false\n" level=info timestamp=2018-07-30T11:35:31.117426Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:35:31.117540Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwcv68 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:35:31.117641Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiwcv68 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
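
(Editor's note, not part of the captured log.) The virt-controller entries above repeatedly hit "Operation cannot be fulfilled ... the object has been modified; please apply your changes to the latest version and try again" and answer it with "reenqueuing VirtualMachineInstance", and virt-handler does the same on "connection is shut down". That is the usual optimistic-concurrency pattern: a conflicting write is not treated as fatal, the key is put back on the queue, and the next pass re-reads the latest object before retrying. The snippet below is a minimal, dependency-free sketch of that re-enqueue-with-backoff loop under assumed names (errConflict, reconcile); it is illustrative only and is not KubeVirt's controller code.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errConflict stands in for the API server's 409 "the object has been modified"
// error seen in the log above (hypothetical name for illustration).
var errConflict = errors.New("the object has been modified; please apply your changes to the latest version and try again")

// reconcile models one pass of a controller's sync handler: early passes lose
// the optimistic-concurrency race, a later pass succeeds after re-reading.
func reconcile(key string, attempt int) error {
	if attempt < 2 {
		return errConflict
	}
	fmt.Printf("synced %s on attempt %d\n", key, attempt+1)
	return nil
}

func main() {
	key := "kubevirt-test-default/testvmimdkvs"
	// On error, re-enqueue with a growing delay instead of failing; the next
	// pass observes the latest resourceVersion and retries the update.
	for attempt := 0; ; attempt++ {
		err := reconcile(key, attempt)
		if err == nil {
			return
		}
		fmt.Printf("reenqueuing %s: %v\n", key, err)
		time.Sleep(time.Duration(attempt+1) * 100 * time.Millisecond)
	}
}
```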
Pod name: virt-launcher-testvmimdkvs-rjwfj Pod phase: Running panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x488cf3] goroutine 5 [running]: io.copyBuffer(0x142d000, 0xc4200b4008, 0x0, 0x0, 0xc42193a000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:400 +0x143 io.Copy(0x142d000, 0xc4200b4008, 0x0, 0x0, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:362 +0x5a kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor.func1(0xc4200ea840, 0xc4200fa120) /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:264 +0xb4 created by kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:261 +0x15f • Failure [300.486 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should survive guest shutdown, multiple times [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:387 Timed out after 300.000s. Expected : false to be true /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:157 ------------------------------ STEP: Creating new VMI, not running STEP: Starting the VirtualMachineInstance STEP: VMI has the running condition VM testvmirlgr5 was scheduled to start • [SLOW TEST:17.316 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435 should start a VirtualMachineInstance once /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:436 ------------------------------ VM testvmictf96 was scheduled to stop • [SLOW TEST:90.622 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435 should stop a VirtualMachineInstance once /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:467 ------------------------------ • [SLOW TEST:5.714 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:5.430 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given an vm /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:5.754 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin 
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi preset /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:5.742 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi replica set /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 2018/07/30 07:42:20 read closing down: EOF • [SLOW TEST:49.400 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 with a cirros image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67 should return that we are running cirros /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:68 ------------------------------ 2018/07/30 07:43:14 read closing down: EOF • [SLOW TEST:54.279 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 with a fedora image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:77 should return that we are running fedora /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:78 ------------------------------ 2018/07/30 07:44:05 read closing down: EOF 2018/07/30 07:44:07 read closing down: EOF 2018/07/30 07:44:08 read closing down: EOF 2018/07/30 07:44:08 read closing down: EOF 2018/07/30 07:44:08 read closing down: EOF • [SLOW TEST:54.109 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 should be able to reconnect to console multiple times /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:87 ------------------------------ • [SLOW TEST:16.098 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 should wait until the virtual machine is in running state and return a stream interface /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:103 ------------------------------ • [SLOW TEST:30.223 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 should fail waiting for the virtual machine instance to be running /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:111 ------------------------------ • [SLOW TEST:30.224 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 should fail 
waiting for the expecter /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:134 ------------------------------ • [SLOW TEST:48.344 seconds] 2018/07/30 07:46:13 read closing down: EOF Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:48.147 seconds] 2018/07/30 07:47:01 read closing down: EOF Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with CDRom PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 2018/07/30 07:49:33 read closing down: EOF • [SLOW TEST:208.068 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 2018/07/30 07:53:17 read closing down: EOF • [SLOW TEST:209.112 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with CDRom PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 2018/07/30 07:54:48 read closing down: EOF • [SLOW TEST:49.885 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With an emptyDisk defined /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:113 should create a writeable emptyDisk with the right capacity /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:115 ------------------------------ • [SLOW TEST:52.432 seconds] 2018/07/30 07:55:41 read closing down: EOF Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With an emptyDisk defined and a specified serial number /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:163 should create a writeable emptyDisk with the specified serial number /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:165 ------------------------------ 2018/07/30 07:56:29 read closing down: EOF • [SLOW TEST:48.584 seconds] Storage 
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205 should be successfully started /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:207 ------------------------------ 2018/07/30 07:59:02 read closing down: EOF • [SLOW TEST:152.989 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205 should not persist data /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:218 ------------------------------ 2018/07/30 07:59:02 read closing down: EOF 2018/07/30 08:02:59 read closing down: EOF • [SLOW TEST:236.456 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With VirtualMachineInstance with two PVCs /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:266 should start vmi multiple times /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:278 ------------------------------ Pod name: disks-images-provider-98jhf Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-lmn24 Pod phase: Running copy all images to host mount directory Pod name: virt-api-7d79764579-8hx6p Pod phase: Running 2018/07/30 12:03:30 http: TLS handshake error from 10.244.1.1:48970: EOF 2018/07/30 12:03:40 http: TLS handshake error from 10.244.1.1:48976: EOF 2018/07/30 12:03:50 http: TLS handshake error from 10.244.1.1:48982: EOF level=info timestamp=2018-07-30T12:03:52.004854Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:03:54.752405Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:03:58.366474Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:04:00 http: TLS handshake error from 10.244.1.1:48988: EOF 2018/07/30 12:04:10 http: TLS handshake error from 10.244.1.1:48994: EOF 2018/07/30 12:04:20 http: TLS handshake error from 10.244.1.1:49000: EOF level=info timestamp=2018-07-30T12:04:22.035558Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:04:24.780341Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:04:28.368134Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:04:30 http: TLS handshake error from 10.244.1.1:49006: EOF level=info timestamp=2018-07-30T12:04:33.586515Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info 
timestamp=2018-07-30T12:04:33.588861Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 Pod name: virt-api-7d79764579-mbmt6 Pod phase: Running 2018/07/30 12:03:00 http: TLS handshake error from 10.244.0.1:35468: EOF 2018/07/30 12:03:10 http: TLS handshake error from 10.244.0.1:35492: EOF level=info timestamp=2018-07-30T12:03:14.482069Z pos=subresource.go:75 component=virt-api msg="Websocket connection upgraded" 2018/07/30 12:03:20 http: TLS handshake error from 10.244.0.1:35524: EOF 2018/07/30 12:03:30 http: TLS handshake error from 10.244.0.1:35548: EOF 2018/07/30 12:03:40 http: TLS handshake error from 10.244.0.1:35572: EOF 2018/07/30 12:03:50 http: TLS handshake error from 10.244.0.1:35596: EOF level=error timestamp=2018-07-30T12:03:57.769237Z pos=subresource.go:85 component=virt-api msg= 2018/07/30 12:03:57 http: response.WriteHeader on hijacked connection level=error timestamp=2018-07-30T12:03:57.769480Z pos=subresource.go:97 component=virt-api reason="read tcp 10.244.0.5:8443->10.244.0.1:33196: use of closed network connection" msg="error ecountered reading from websocket stream" level=info timestamp=2018-07-30T12:03:57.769636Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2/namespaces/kubevirt-test-default/virtualmachineinstances/testvmirqhlb/console proto=HTTP/1.1 statusCode=200 contentLength=0 2018/07/30 12:04:00 http: TLS handshake error from 10.244.0.1:35620: EOF 2018/07/30 12:04:10 http: TLS handshake error from 10.244.0.1:35644: EOF 2018/07/30 12:04:20 http: TLS handshake error from 10.244.0.1:35668: EOF 2018/07/30 12:04:30 http: TLS handshake error from 10.244.0.1:35692: EOF Pod name: virt-controller-7d57d96b65-45jcf Pod phase: Running level=info timestamp=2018-07-30T11:05:25.224784Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-6xrgz Pod phase: Running level=info timestamp=2018-07-30T11:56:30.421209Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmidmwbn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmidmwbn" level=info timestamp=2018-07-30T11:58:14.145999Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmidmwbn kind= uid=d6534c25-93ef-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:58:14.147229Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmidmwbn kind= uid=d6534c25-93ef-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:58:14.233398Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmidmwbn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmidmwbn" level=info timestamp=2018-07-30T11:59:03.262103Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihh5rq kind= uid=f3999ec7-93ef-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" 
level=info timestamp=2018-07-30T11:59:03.263190Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihh5rq kind= uid=f3999ec7-93ef-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:59:03.324291Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmihh5rq\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmihh5rq" level=info timestamp=2018-07-30T11:59:03.345235Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmihh5rq\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmihh5rq" level=info timestamp=2018-07-30T12:00:14.533992Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihh5rq kind= uid=1e15920e-93f0-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:00:14.534136Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihh5rq kind= uid=1e15920e-93f0-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:01:14.156653Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihh5rq kind= uid=419f782e-93f0-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:01:14.157847Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihh5rq kind= uid=419f782e-93f0-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:01:14.225200Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmihh5rq\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmihh5rq" level=info timestamp=2018-07-30T12:02:59.548586Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirqhlb kind= uid=80713758-93f0-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:02:59.548705Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirqhlb kind= uid=80713758-93f0-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-handler-cr9jx Pod phase: Running level=error timestamp=2018-07-30T11:44:48.828861Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmit5kbm kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance failed." 
level=info timestamp=2018-07-30T11:44:48.828893Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmit5kbm" level=info timestamp=2018-07-30T11:44:55.293019Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmit5kbm kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-30T11:44:55.293163Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmit5kbm, existing: false\n" level=info timestamp=2018-07-30T11:44:55.293183Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:44:55.293214Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmit5kbm kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:44:55.293405Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmit5kbm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:44:55.294188Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmit5kbm, existing: false\n" level=info timestamp=2018-07-30T11:44:55.294222Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:44:55.294251Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmit5kbm kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:44:55.294299Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmit5kbm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:44:59.069102Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmit5kbm, existing: false\n" level=info timestamp=2018-07-30T11:44:59.069190Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:44:59.069267Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmit5kbm kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:44:59.069382Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmit5kbm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-nd26z Pod phase: Running level=info timestamp=2018-07-30T12:03:14.826623Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T12:03:14.826640Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T12:03:14.826676Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmirqhlb kind= uid=80713758-93f0-11e8-b2fe-525500d15501 msg="No update processing required" level=info timestamp=2018-07-30T12:03:14.830729Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-30T12:03:14.840844Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmirqhlb kind= uid=80713758-93f0-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T12:03:14.840886Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmirqhlb, existing: true\n" level=info timestamp=2018-07-30T12:03:14.840902Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T12:03:14.840923Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T12:03:14.840939Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T12:03:14.840984Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmirqhlb kind= uid=80713758-93f0-11e8-b2fe-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T12:03:14.845413Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmirqhlb kind= uid=80713758-93f0-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:03:15.227134Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmihh5rq, existing: false\n" level=info timestamp=2018-07-30T12:03:15.227226Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:03:15.227298Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmihh5rq kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:03:15.227361Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmihh5rq kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-launcher-testvmirqhlb-hpgv7 Pod phase: Running 2018/07/30 08:04:33 read closing down: EOF • Failure [94.415 seconds] Health Monitoring /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:37 A VirtualMachineInstance with a watchdog device /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:56 should be shut down when the watchdog expires [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:57 Timed out after 40.007s. Expected : Running to equal : Failed /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:85 ------------------------------ STEP: Starting a VirtualMachineInstance level=info timestamp=2018-07-30T12:02:59.486051Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmirqhlb kind=VirtualMachineInstance uid=80713758-93f0-11e8-b2fe-525500d15501 msg="Created virtual machine pod virt-launcher-testvmirqhlb-hpgv7" level=info timestamp=2018-07-30T12:03:13.656327Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmirqhlb kind=VirtualMachineInstance uid=80713758-93f0-11e8-b2fe-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmirqhlb-hpgv7" level=info timestamp=2018-07-30T12:03:14.665564Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmirqhlb kind=VirtualMachineInstance uid=80713758-93f0-11e8-b2fe-525500d15501 msg="VirtualMachineInstance defined." level=info timestamp=2018-07-30T12:03:14.669812Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmirqhlb kind=VirtualMachineInstance uid=80713758-93f0-11e8-b2fe-525500d15501 msg="VirtualMachineInstance started." 
STEP: Expecting the VirtualMachineInstance console STEP: Killing the watchdog device STEP: Checking that the VirtualMachineInstance has Failed status • [SLOW TEST:35.553 seconds] LeaderElection /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:43 Start a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:53 when the controller pod is not running /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:54 should success /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:55 ------------------------------ volumedisk0 compute • [SLOW TEST:49.047 seconds] Configurations 2018/07/30 08:05:58 read closing down: EOF /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with 3 CPU cores /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:56 should report 3 cpu cores under guest OS /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:62 ------------------------------ • ------------------------------ • [SLOW TEST:17.848 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:164 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-2Mi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ S [SKIPPING] [0.215 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:164 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-1Gi [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 No node with hugepages hugepages-1Gi capacity /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:216 ------------------------------ •2018/07/30 08:08:11 read closing down: EOF ------------------------------ • [SLOW TEST:112.937 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:294 when CPU model defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:340 should report defined CPU model /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:341 ------------------------------ 2018/07/30 08:10:26 read closing down: EOF • [SLOW TEST:134.379 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:294 when CPU model equals to passthrough /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:368 should report exactly the same model as node CPU /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:369 ------------------------------ 2018/07/30 08:12:23 read closing down: EOF • [SLOW TEST:117.462 seconds] Configurations 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:294 when CPU model not defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:392 should report CPU model from libvirt capabilities /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:393 ------------------------------ • [SLOW TEST:51.150 seconds] Configurations 2018/07/30 08:13:14 read closing down: EOF /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 New VirtualMachineInstance with all supported drives /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:413 should have all the device nodes /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:436 ------------------------------ • ------------------------------ • [SLOW TEST:15.771 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 should start it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:80 ------------------------------ Pod name: disks-images-provider-98jhf Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-lmn24 Pod phase: Running copy all images to host mount directory Pod name: virt-api-7d79764579-8hx6p Pod phase: Running 2018/07/30 12:15:30 http: TLS handshake error from 10.244.1.1:49404: EOF 2018/07/30 12:15:40 http: TLS handshake error from 10.244.1.1:49410: EOF 2018/07/30 12:15:50 http: TLS handshake error from 10.244.1.1:49416: EOF level=info timestamp=2018-07-30T12:15:56.494349Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:15:57.414673Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:15:58.373479Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:16:00 http: TLS handshake error from 10.244.1.1:49422: EOF 2018/07/30 12:16:10 http: TLS handshake error from 10.244.1.1:49428: EOF 2018/07/30 12:16:20 http: TLS handshake error from 10.244.1.1:49434: EOF level=info timestamp=2018-07-30T12:16:26.540784Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:16:27.442865Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:16:28.164550Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T12:16:28.250331Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T12:16:28.427730Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:16:30 http: TLS handshake error from 
10.244.1.1:49440: EOF Pod name: virt-api-7d79764579-mbmt6 Pod phase: Running 2018/07/30 12:14:10 http: TLS handshake error from 10.244.0.1:37152: EOF 2018/07/30 12:14:20 http: TLS handshake error from 10.244.0.1:37176: EOF 2018/07/30 12:14:30 http: TLS handshake error from 10.244.0.1:37200: EOF 2018/07/30 12:14:40 http: TLS handshake error from 10.244.0.1:37224: EOF 2018/07/30 12:14:50 http: TLS handshake error from 10.244.0.1:37248: EOF 2018/07/30 12:15:00 http: TLS handshake error from 10.244.0.1:37272: EOF 2018/07/30 12:15:10 http: TLS handshake error from 10.244.0.1:37296: EOF 2018/07/30 12:15:20 http: TLS handshake error from 10.244.0.1:37320: EOF 2018/07/30 12:15:30 http: TLS handshake error from 10.244.0.1:37344: EOF 2018/07/30 12:15:40 http: TLS handshake error from 10.244.0.1:37368: EOF 2018/07/30 12:15:50 http: TLS handshake error from 10.244.0.1:37392: EOF 2018/07/30 12:16:00 http: TLS handshake error from 10.244.0.1:37416: EOF 2018/07/30 12:16:10 http: TLS handshake error from 10.244.0.1:37440: EOF 2018/07/30 12:16:20 http: TLS handshake error from 10.244.0.1:37464: EOF 2018/07/30 12:16:30 http: TLS handshake error from 10.244.0.1:37488: EOF Pod name: virt-controller-7d57d96b65-45jcf Pod phase: Running level=info timestamp=2018-07-30T12:10:57.821436Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihpxx2 kind= uid=9dbedc78-93f1-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:10:57.884109Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihpxx2 kind= uid=9dbedc78-93f1-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:10:58.078093Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmihpxx2\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmihpxx2" level=info timestamp=2018-07-30T12:11:15.755384Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihx98h kind= uid=a86e1720-93f1-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:11:15.794912Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihx98h kind= uid=a86e1720-93f1-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:12:23.894007Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmilbkrz kind= uid=d1122dc6-93f1-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:12:23.895489Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmilbkrz kind= uid=d1122dc6-93f1-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:13:14.633608Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib8dlw kind= uid=ef511891-93f1-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:13:14.634925Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib8dlw kind= uid=ef511891-93f1-11e8-b2fe-525500d15501 
msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:13:14.904863Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmivv5fd kind= uid=ef7a5251-93f1-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:13:14.904988Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmivv5fd kind= uid=ef7a5251-93f1-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:13:14.946868Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmivv5fd\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmivv5fd" level=info timestamp=2018-07-30T12:13:14.979285Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmivv5fd\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmivv5fd" level=info timestamp=2018-07-30T12:13:30.667747Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmizblqb kind= uid=f8dfbecb-93f1-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:13:30.667938Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmizblqb kind= uid=f8dfbecb-93f1-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-7d57d96b65-gqln6 Pod phase: Running level=info timestamp=2018-07-30T12:04:36.490365Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-cr9jx Pod phase: Running level=info timestamp=2018-07-30T12:13:10.295163Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:13:10.295244Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmixmnzj kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:13:10.295340Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmixmnzj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:13:14.036283Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmihx98h, existing: false\n" level=info timestamp=2018-07-30T12:13:14.036363Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:13:14.036431Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmihx98h kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:13:14.036498Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmihx98h kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T12:13:14.036955Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmihpxx2, existing: false\n" level=info timestamp=2018-07-30T12:13:14.037049Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:13:14.037121Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmihpxx2 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:13:14.037203Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmihpxx2 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:13:14.085431Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmixmnzj, existing: false\n" level=info timestamp=2018-07-30T12:13:14.085464Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:13:14.085504Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmixmnzj kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:13:14.085567Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmixmnzj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-nd26z Pod phase: Running level=info timestamp=2018-07-30T12:14:25.890749Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmilbkrz kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:14:25.890828Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmilbkrz kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:14:29.094821Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmivv5fd kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-30T12:14:29.094955Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmivv5fd, existing: false\n" level=info timestamp=2018-07-30T12:14:29.094974Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:14:29.095004Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmivv5fd kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:14:29.095114Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmivv5fd kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:14:29.095263Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmivv5fd, existing: false\n" level=info timestamp=2018-07-30T12:14:29.095297Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:14:29.095340Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmivv5fd kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:14:29.095406Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmivv5fd kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T12:14:41.917416Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmivv5fd, existing: false\n" level=info timestamp=2018-07-30T12:14:41.917496Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:14:41.917569Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmivv5fd kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:14:41.917645Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmivv5fd kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-launcher-testvmizblqb-4lqdj Pod phase: Running panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x488cf3] goroutine 5 [running]: io.copyBuffer(0x142d000, 0xc4200b4008, 0x0, 0x0, 0xc421926000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:400 +0x143 io.Copy(0x142d000, 0xc4200b4008, 0x0, 0x0, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:362 +0x5a kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor.func1(0xc42034e420, 0xc4200964e0) /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:264 +0xb4 created by kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:261 +0x15f • Failure [180.562 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 should attach virt-launcher to it [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:86 Timed out after 90.044s. 
Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1046 ------------------------------ level=info timestamp=2018-07-30T12:13:31.015362Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmizblqb kind=VirtualMachineInstance uid=f8dfbecb-93f1-11e8-b2fe-525500d15501 msg="Created virtual machine pod virt-launcher-testvmizblqb-4lqdj" ••••2018/07/30 08:17:23 read closing down: EOF ------------------------------ • [SLOW TEST:51.732 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:174 should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 Alpine as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 2018/07/30 08:17:50 read closing down: EOF • [SLOW TEST:26.928 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:174 should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 Cirros as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:15.117 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:205 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:206 should retry starting the VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:207 ------------------------------ • [SLOW TEST:16.856 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:205 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:206 should log warning and proceed once the secret is there /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:237 ------------------------------ • [SLOW TEST:36.317 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 when virt-launcher crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:285 should be stopped and have Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:286 ------------------------------ Pod name: disks-images-provider-98jhf Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-lmn24 Pod phase: Running copy all images to host mount directory Pod name: virt-api-7d79764579-8hx6p Pod phase: Running 2018/07/30 12:19:10 http: TLS handshake error from 10.244.1.1:49540: EOF 2018/07/30 12:19:20 http: TLS handshake error 
from 10.244.1.1:49548: EOF level=info timestamp=2018-07-30T12:19:26.704590Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:19:27.620810Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:19:28.376374Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:19:30 http: TLS handshake error from 10.244.1.1:49554: EOF level=info timestamp=2018-07-30T12:19:33.762478Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T12:19:33.764205Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:19:40 http: TLS handshake error from 10.244.1.1:49560: EOF 2018/07/30 12:19:50 http: TLS handshake error from 10.244.1.1:49566: EOF level=info timestamp=2018-07-30T12:19:56.736452Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:19:57.651199Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:19:58.371804Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:20:00 http: TLS handshake error from 10.244.1.1:49572: EOF 2018/07/30 12:20:10 http: TLS handshake error from 10.244.1.1:49578: EOF Pod name: virt-api-7d79764579-mbmt6 Pod phase: Running 2018/07/30 12:17:50 http: TLS handshake error from 10.244.0.1:37692: EOF 2018/07/30 12:18:00 http: TLS handshake error from 10.244.0.1:37716: EOF 2018/07/30 12:18:10 http: TLS handshake error from 10.244.0.1:37740: EOF 2018/07/30 12:18:20 http: TLS handshake error from 10.244.0.1:37764: EOF 2018/07/30 12:18:30 http: TLS handshake error from 10.244.0.1:37788: EOF 2018/07/30 12:18:40 http: TLS handshake error from 10.244.0.1:37812: EOF 2018/07/30 12:18:50 http: TLS handshake error from 10.244.0.1:37836: EOF 2018/07/30 12:19:00 http: TLS handshake error from 10.244.0.1:37860: EOF 2018/07/30 12:19:10 http: TLS handshake error from 10.244.0.1:37884: EOF 2018/07/30 12:19:20 http: TLS handshake error from 10.244.0.1:37908: EOF 2018/07/30 12:19:30 http: TLS handshake error from 10.244.0.1:37932: EOF 2018/07/30 12:19:40 http: TLS handshake error from 10.244.0.1:37956: EOF 2018/07/30 12:19:50 http: TLS handshake error from 10.244.0.1:37980: EOF 2018/07/30 12:20:00 http: TLS handshake error from 10.244.0.1:38004: EOF 2018/07/30 12:20:10 http: TLS handshake error from 10.244.0.1:38028: EOF Pod name: virt-controller-7d57d96b65-45jcf Pod phase: Running level=info timestamp=2018-07-30T12:16:31.983854Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmifngkp kind= uid=64f1117d-93f2-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info 
timestamp=2018-07-30T12:17:23.703623Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibpdgw kind= uid=83c5bb9e-93f2-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:17:23.704287Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibpdgw kind= uid=83c5bb9e-93f2-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:17:23.791954Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmibpdgw\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmibpdgw" level=info timestamp=2018-07-30T12:17:50.627558Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi58vhv kind= uid=93d27fba-93f2-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:17:50.627866Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi58vhv kind= uid=93d27fba-93f2-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:18:05.749488Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmicjxkj kind= uid=9cd581a3-93f2-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:18:05.749610Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmicjxkj kind= uid=9cd581a3-93f2-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:18:05.815689Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmicjxkj\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmicjxkj" level=info timestamp=2018-07-30T12:18:22.601854Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwxpsn kind= uid=a6e17ed7-93f2-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:18:22.603202Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwxpsn kind= uid=a6e17ed7-93f2-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:18:22.671897Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwxpsn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiwxpsn" level=info timestamp=2018-07-30T12:18:22.682892Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwxpsn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiwxpsn" level=info timestamp=2018-07-30T12:18:58.920121Z pos=preset.go:142 component=virt-controller 
service=http namespace=kubevirt-test-default name=testvmig2psk kind= uid=bc86f016-93f2-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:18:58.920253Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmig2psk kind= uid=bc86f016-93f2-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-7d57d96b65-gqln6 Pod phase: Running level=info timestamp=2018-07-30T12:04:36.490365Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-cr9jx Pod phase: Running level=info timestamp=2018-07-30T12:18:05.530485Z pos=vm.go:251 component=virt-handler reason="secrets \"nonexistent\" not found" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmi58vhv" level=info timestamp=2018-07-30T12:18:05.549677Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi58vhv, existing: true\n" level=info timestamp=2018-07-30T12:18:05.549787Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n" level=info timestamp=2018-07-30T12:18:05.549810Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:18:05.549899Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi58vhv kind= uid=93d27fba-93f2-11e8-b2fe-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=error timestamp=2018-07-30T12:18:05.570301Z pos=vm.go:431 component=virt-handler namespace=kubevirt-test-default name=testvmi58vhv kind= uid=93d27fba-93f2-11e8-b2fe-525500d15501 reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi58vhv\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi58vhv, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 93d27fba-93f2-11e8-b2fe-525500d15501, UID in object meta: " msg="Updating the VirtualMachineInstance status failed." level=info timestamp=2018-07-30T12:18:05.570386Z pos=vm.go:251 component=virt-handler reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi58vhv\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi58vhv, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 93d27fba-93f2-11e8-b2fe-525500d15501, UID in object meta: " msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmi58vhv" level=info timestamp=2018-07-30T12:18:05.570443Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi58vhv, existing: false\n" level=info timestamp=2018-07-30T12:18:05.570460Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:18:05.570523Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi58vhv kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:18:05.570599Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi58vhv kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T12:18:05.570641Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi58vhv, existing: false\n" level=info timestamp=2018-07-30T12:18:05.570656Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:18:05.570688Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi58vhv kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:18:05.572273Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi58vhv kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-nd26z Pod phase: Running level=info timestamp=2018-07-30T12:19:18.689003Z pos=virt-handler.go:87 component=virt-handler hostname=node02 level=info timestamp=2018-07-30T12:19:18.700252Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-07-30T12:19:18.701039Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-30T12:19:18.701235Z pos=cache.go:121 component=virt-handler msg="List domains from sock /var/run/kubevirt/sockets/kubevirt-test-default_testvmig2psk_sock" level=info timestamp=2018-07-30T12:19:18.749015Z pos=vm.go:725 component=virt-handler namespace=kubevirt-test-default name=testvmig2psk kind=Domain uid= msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-30T12:19:18.802687Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-30T12:19:18.838350Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" level=info timestamp=2018-07-30T12:19:18.851876Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-30T12:19:18.905399Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmig2psk, existing: true\n" level=info timestamp=2018-07-30T12:19:18.905467Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T12:19:18.905492Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T12:19:18.905509Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T12:19:18.905589Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmig2psk kind=VirtualMachineInstance uid=bc86f016-93f2-11e8-b2fe-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T12:19:18.930606Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmig2psk kind=VirtualMachineInstance uid=bc86f016-93f2-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." 
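Note (editorial, not part of the captured logs): the virt-handler entries above ("Processing vmi ..., existing: ...", "Domain: existing: ...", "vmi is in phase: ...") come from its synchronization loop, which compares the VirtualMachineInstance reported by the cluster with the local libvirt domain and then creates, updates, shuts down, or cleans up accordingly. The Go snippet below is only a minimal illustrative sketch of that decision pattern, using hypothetical simplified types; it is not the actual KubeVirt vm.go code.

package main

import "fmt"

// VMI and Domain are hypothetical, simplified stand-ins for the objects the
// real virt-handler reconciles (the VirtualMachineInstance and the libvirt domain).
type VMI struct {
	Phase string // e.g. "Scheduled", "Running", "Failed"
}

type Domain struct {
	Running bool
}

// reconcile mirrors the branching suggested by the log messages above;
// the returned strings stand in for the real actions.
func reconcile(name string, vmi *VMI, dom *Domain) string {
	fmt.Printf("Processing vmi %s, existing: %t\n", name, vmi != nil)
	fmt.Printf("Domain: existing: %t\n", dom != nil)
	switch {
	case vmi == nil:
		// corresponds to "Processing local ephemeral data cleanup for shutdown domain."
		return "clean up local ephemeral data"
	case vmi.Phase == "Scheduled" && dom == nil:
		return "define and start the domain (Processing vmi update)"
	case vmi.Phase == "Running" && dom != nil && dom.Running:
		return "no update processing required"
	default:
		return "shut down / sync domain"
	}
}

func main() {
	fmt.Println(reconcile("testvmig2psk", &VMI{Phase: "Running"}, &Domain{Running: true}))
	fmt.Println(reconcile("testvmi58vhv", nil, nil))
}

Each pass that returns without an error would be consistent with the "Synchronization loop succeeded." message seen throughout these logs.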
Pod name: virt-launcher-testvmig2psk-9c2r7 Pod phase: Running
Pod name: vmi-killerf8pdf Pod phase: Succeeded
Pod name: vmi-killerkgbsb Pod phase: Succeeded
• Failure [79.474 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    when virt-handler crashes
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:309
      should recover and continue management [It]
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:310
      Expected
          : Running
      to equal
          : Failed
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:336
------------------------------
level=info timestamp=2018-07-30T12:18:59.300656Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmig2psk kind=VirtualMachineInstance uid=bc86f016-93f2-11e8-b2fe-525500d15501 msg="Created virtual machine pod virt-launcher-testvmig2psk-9c2r7"
level=info timestamp=2018-07-30T12:19:13.654605Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmig2psk kind=VirtualMachineInstance uid=bc86f016-93f2-11e8-b2fe-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmig2psk-9c2r7"
level=info timestamp=2018-07-30T12:19:14.697610Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmig2psk kind=VirtualMachineInstance uid=bc86f016-93f2-11e8-b2fe-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-07-30T12:19:14.706591Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmig2psk kind=VirtualMachineInstance uid=bc86f016-93f2-11e8-b2fe-525500d15501 msg="VirtualMachineInstance started."
STEP: Crashing the virt-handler
STEP: Killing the VirtualMachineInstance
level=info timestamp=2018-07-30T12:19:17.800174Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmig2psk kind=VirtualMachineInstance uid=bc86f016-93f2-11e8-b2fe-525500d15501 msg="Created virtual machine pod virt-launcher-testvmig2psk-9c2r7"
level=info timestamp=2018-07-30T12:19:17.800236Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmig2psk kind=VirtualMachineInstance uid=bc86f016-93f2-11e8-b2fe-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmig2psk-9c2r7"
level=info timestamp=2018-07-30T12:19:17.800462Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmig2psk kind=VirtualMachineInstance uid=bc86f016-93f2-11e8-b2fe-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-07-30T12:19:17.800480Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmig2psk kind=VirtualMachineInstance uid=bc86f016-93f2-11e8-b2fe-525500d15501 msg="VirtualMachineInstance started."
level=info timestamp=2018-07-30T12:19:18.752052Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmig2psk kind=VirtualMachineInstance uid=bc86f016-93f2-11e8-b2fe-525500d15501 msg="VirtualMachineInstance defined."
STEP: Checking that VirtualMachineInstance has 'Failed' phase
• [SLOW TEST:8.257 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    when virt-handler is responsive
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:340
      should indicate that a node is ready for vmis
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:341
------------------------------
• [SLOW TEST:122.846 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    when virt-handler is not responsive
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:371
      the node controller should react
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:410
------------------------------
• [SLOW TEST:16.341 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    with node tainted
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:463
      the vmi with tolerations should be scheduled
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:485
------------------------------
•
------------------------------
• [SLOW TEST:63.753 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    with non default namespace
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:535
      should log libvirt start and stop lifecycle events of the domain
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-default
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:60.563 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    with non default namespace
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:535
      should log libvirt start and stop lifecycle events of the domain
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-alternative
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.071 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    VirtualMachineInstance Emulation Mode
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:592
      should enable emulation in virt-launcher [BeforeEach]
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:604
      Software emulation is not enabled on this cluster
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:600
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.060 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    VirtualMachineInstance Emulation Mode
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:592
      should be reflected in domain XML [BeforeEach]
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:641
      Software emulation is not enabled on this cluster
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:600
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.056 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    VirtualMachineInstance Emulation Mode
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:592
      should request a TUN device but not KVM [BeforeEach]
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:685
      Software emulation is not enabled on this cluster
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:600
------------------------------
••••
------------------------------
• [SLOW TEST:67.417 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Delete a VirtualMachineInstance's Pod
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:837
    should result in the VirtualMachineInstance moving to a finalized state
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:838
------------------------------
• [SLOW TEST:67.317 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Delete a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869
    with an active pod.
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:870
      should result in pod being terminated
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:871
------------------------------
2018/07/30 08:27:56 read closing down: EOF
Pod name: disks-images-provider-98jhf Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-lmn24 Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-7d79764579-8hx6p Pod phase: Running
2018/07/30 12:27:00 http: TLS handshake error from 10.244.1.1:49826: EOF
2018/07/30 12:27:10 http: TLS handshake error from 10.244.1.1:49832: EOF
2018/07/30 12:27:20 http: TLS handshake error from 10.244.1.1:49838: EOF
level=info timestamp=2018-07-30T12:27:27.232564Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-30T12:27:28.222564Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-30T12:27:28.392470Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/30 12:27:30 http: TLS handshake error from 10.244.1.1:49844: EOF
level=info timestamp=2018-07-30T12:27:33.502620Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-30T12:27:33.503629Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/30 12:27:40 http: TLS handshake error from 10.244.1.1:49850: EOF
2018/07/30 12:27:50 http: TLS handshake error from 10.244.1.1:49856: EOF
level=info
timestamp=2018-07-30T12:27:57.265657Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:27:58.254754Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:27:58.391514Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:28:00 http: TLS handshake error from 10.244.1.1:49862: EOF Pod name: virt-api-7d79764579-mbmt6 Pod phase: Running 2018/07/30 12:26:30 http: TLS handshake error from 10.244.0.1:38944: EOF 2018/07/30 12:26:40 http: TLS handshake error from 10.244.0.1:38968: EOF 2018/07/30 12:26:50 http: TLS handshake error from 10.244.0.1:38992: EOF 2018/07/30 12:27:00 http: TLS handshake error from 10.244.0.1:39016: EOF 2018/07/30 12:27:10 http: TLS handshake error from 10.244.0.1:39040: EOF 2018/07/30 12:27:20 http: TLS handshake error from 10.244.0.1:39064: EOF level=info timestamp=2018-07-30T12:27:23.812142Z pos=subresource.go:75 component=virt-api msg="Websocket connection upgraded" 2018/07/30 12:27:30 http: TLS handshake error from 10.244.0.1:39096: EOF 2018/07/30 12:27:40 http: TLS handshake error from 10.244.0.1:39120: EOF 2018/07/30 12:27:50 http: TLS handshake error from 10.244.0.1:39144: EOF level=error timestamp=2018-07-30T12:27:56.847111Z pos=subresource.go:85 component=virt-api msg= 2018/07/30 12:27:56 http: response.WriteHeader on hijacked connection level=error timestamp=2018-07-30T12:27:56.847442Z pos=subresource.go:97 component=virt-api reason="read tcp 10.244.0.5:8443->10.244.0.1:36768: use of closed network connection" msg="error ecountered reading from websocket stream" level=info timestamp=2018-07-30T12:27:56.847612Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2/namespaces/kubevirt-test-default/virtualmachineinstances/testvmiz54ck/console proto=HTTP/1.1 statusCode=200 contentLength=0 2018/07/30 12:28:00 http: TLS handshake error from 10.244.0.1:39168: EOF Pod name: virt-controller-7d57d96b65-45jcf Pod phase: Running level=info timestamp=2018-07-30T12:23:50.871130Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmizw672\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-alternative/testvmizw672" level=info timestamp=2018-07-30T12:24:51.554377Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwwwdt kind= uid=8eb69aad-93f3-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:24:51.556050Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwwwdt kind= uid=8eb69aad-93f3-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:24:52.125134Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwwwdt\": StorageError: invalid object, Code: 4, Key: 
/registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmiwwwdt, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 8eb69aad-93f3-11e8-b2fe-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiwwwdt" level=info timestamp=2018-07-30T12:24:52.302970Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6tdmf kind= uid=8f2920cd-93f3-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:24:52.303092Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6tdmf kind= uid=8f2920cd-93f3-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:24:52.338519Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi6tdmf\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi6tdmf" level=info timestamp=2018-07-30T12:24:52.721169Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigbttb kind= uid=8f68f14e-93f3-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:24:52.721285Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigbttb kind= uid=8f68f14e-93f3-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:24:52.784609Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmigbttb\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmigbttb" level=info timestamp=2018-07-30T12:26:00.312275Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvminllwv kind= uid=b7b21554-93f3-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:26:00.315016Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvminllwv kind= uid=b7b21554-93f3-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:27:07.469501Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiz54ck kind= uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:27:07.469706Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiz54ck kind= uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:27:07.555948Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiz54ck\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiz54ck" Pod name: virt-controller-7d57d96b65-gqln6 Pod phase: Running level=info timestamp=2018-07-30T12:04:36.490365Z pos=application.go:177 
component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-4gnh8 Pod phase: Running level=info timestamp=2018-07-30T12:27:24.160024Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmiz54ck kind= uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="No update processing required" level=info timestamp=2018-07-30T12:27:24.176215Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiz54ck kind= uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:27:24.176316Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiz54ck, existing: true\n" level=info timestamp=2018-07-30T12:27:24.176335Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T12:27:24.176357Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T12:27:24.176373Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T12:27:24.176418Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmiz54ck kind= uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T12:27:24.179413Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiz54ck kind= uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:27:56.742896Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiz54ck, existing: true\n" level=info timestamp=2018-07-30T12:27:56.742968Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T12:27:56.743009Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T12:27:56.743027Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T12:27:56.743103Z pos=vm.go:370 component=virt-handler namespace=kubevirt-test-default name=testvmiz54ck kind= uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp." level=info timestamp=2018-07-30T12:27:56.743136Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmiz54ck kind= uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="Processing shutdown." level=info timestamp=2018-07-30T12:27:56.743695Z pos=vm.go:556 component=virt-handler namespace=kubevirt-test-default name=testvmiz54ck kind= uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="Grace period expired, killing deleted VirtualMachineInstance testvmiz54ck" Pod name: virt-handler-8t4vh Pod phase: Running level=info timestamp=2018-07-30T12:24:50.353639Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-alternative name=testvmizw672 kind= uid=6a80b596-93f3-11e8-b2fe-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:24:50.354574Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-alternative name=testvmizw672 kind= uid=6a80b596-93f3-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T12:24:50.354760Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmizw672, existing: true\n" level=info timestamp=2018-07-30T12:24:50.354801Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Failed\n" level=info timestamp=2018-07-30T12:24:50.354836Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:24:50.354898Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-alternative name=testvmizw672 kind= uid=6a80b596-93f3-11e8-b2fe-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:24:50.354984Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-alternative name=testvmizw672 kind= uid=6a80b596-93f3-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:24:50.364120Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmizw672, existing: false\n" level=info timestamp=2018-07-30T12:24:50.364172Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:24:50.364257Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-alternative name=testvmizw672 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:24:50.364336Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-alternative name=testvmizw672 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:24:57.652376Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmizw672, existing: false\n" level=info timestamp=2018-07-30T12:24:57.652469Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:24:57.652546Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-alternative name=testvmizw672 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:24:57.652636Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-alternative name=testvmizw672 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-launcher-testvmiz54ck-svq9v Pod phase: Running • Failure [54.146 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869 with ACPI and 0 grace period seconds /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:895 should result in vmi status failed [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:896 Timed out after 5.000s. 
Expected
    : Running
to equal
    : Failed
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:917
------------------------------
STEP: Creating the VirtualMachineInstance
level=info timestamp=2018-07-30T12:27:07.795642Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmiz54ck kind=VirtualMachineInstance uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="Created virtual machine pod virt-launcher-testvmiz54ck-svq9v"
level=info timestamp=2018-07-30T12:27:23.015429Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmiz54ck kind=VirtualMachineInstance uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmiz54ck-svq9v"
level=info timestamp=2018-07-30T12:27:23.988786Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmiz54ck kind=VirtualMachineInstance uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-07-30T12:27:23.994275Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmiz54ck kind=VirtualMachineInstance uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="VirtualMachineInstance started."
STEP: Deleting the VirtualMachineInstance
STEP: Verifying VirtualMachineInstance's status is Failed
2018/07/30 08:28:50 read closing down: EOF
Pod name: disks-images-provider-98jhf Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-lmn24 Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-7d79764579-8hx6p Pod phase: Running
level=info timestamp=2018-07-30T12:28:27.301764Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-30T12:28:28.283112Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-30T12:28:28.357536Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/30 12:28:30 http: TLS handshake error from 10.244.1.1:49882: EOF
2018/07/30 12:28:40 http: TLS handshake error from 10.244.1.1:49888: EOF
2018/07/30 12:28:50 http: TLS handshake error from 10.244.1.1:49894: EOF
level=error timestamp=2018-07-30T12:28:52.247478Z pos=subresource.go:97 component=virt-api reason="websocket: close 1006 (abnormal closure): unexpected EOF" msg="error ecountered reading from websocket stream"
level=error timestamp=2018-07-30T12:28:52.247589Z pos=subresource.go:106 component=virt-api reason="websocket: close 1006 (abnormal closure): unexpected EOF" msg="Error in websocket proxy"
2018/07/30 12:28:52 http: response.WriteHeader on hijacked connection
level=info timestamp=2018-07-30T12:28:52.247948Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2/namespaces/kubevirt-test-default/virtualmachineinstances/testvmib7dt5/console proto=HTTP/1.1 statusCode=500 contentLength=0
level=error timestamp=2018-07-30T12:28:53.276298Z pos=subresource.go:91 component=virt-api reason="tls: use of closed connection" msg="error ecountered reading from remote podExec stream"
level=info timestamp=2018-07-30T12:28:57.330628Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET
url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:28:58.433668Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T12:28:58.447701Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:29:00 http: TLS handshake error from 10.244.1.1:49900: EOF Pod name: virt-api-7d79764579-mbmt6 Pod phase: Running level=info timestamp=2018-07-30T12:27:23.812142Z pos=subresource.go:75 component=virt-api msg="Websocket connection upgraded" 2018/07/30 12:27:30 http: TLS handshake error from 10.244.0.1:39096: EOF 2018/07/30 12:27:40 http: TLS handshake error from 10.244.0.1:39120: EOF 2018/07/30 12:27:50 http: TLS handshake error from 10.244.0.1:39144: EOF level=error timestamp=2018-07-30T12:27:56.847111Z pos=subresource.go:85 component=virt-api msg= 2018/07/30 12:27:56 http: response.WriteHeader on hijacked connection level=error timestamp=2018-07-30T12:27:56.847442Z pos=subresource.go:97 component=virt-api reason="read tcp 10.244.0.5:8443->10.244.0.1:36768: use of closed network connection" msg="error ecountered reading from websocket stream" level=info timestamp=2018-07-30T12:27:56.847612Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2/namespaces/kubevirt-test-default/virtualmachineinstances/testvmiz54ck/console proto=HTTP/1.1 statusCode=200 contentLength=0 2018/07/30 12:28:00 http: TLS handshake error from 10.244.0.1:39168: EOF 2018/07/30 12:28:10 http: TLS handshake error from 10.244.0.1:39192: EOF 2018/07/30 12:28:20 http: TLS handshake error from 10.244.0.1:39222: EOF 2018/07/30 12:28:30 http: TLS handshake error from 10.244.0.1:39246: EOF 2018/07/30 12:28:40 http: TLS handshake error from 10.244.0.1:39270: EOF 2018/07/30 12:28:50 http: TLS handshake error from 10.244.0.1:39294: EOF 2018/07/30 12:29:00 http: TLS handshake error from 10.244.0.1:39318: EOF Pod name: virt-controller-7d57d96b65-45jcf Pod phase: Running level=info timestamp=2018-07-30T12:24:52.125134Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwwwdt\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmiwwwdt, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 8eb69aad-93f3-11e8-b2fe-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiwwwdt" level=info timestamp=2018-07-30T12:24:52.302970Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6tdmf kind= uid=8f2920cd-93f3-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:24:52.303092Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6tdmf kind= uid=8f2920cd-93f3-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:24:52.338519Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi6tdmf\": the object has been modified; please apply your changes to the latest 
version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi6tdmf" level=info timestamp=2018-07-30T12:24:52.721169Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigbttb kind= uid=8f68f14e-93f3-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:24:52.721285Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigbttb kind= uid=8f68f14e-93f3-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:24:52.784609Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmigbttb\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmigbttb" level=info timestamp=2018-07-30T12:26:00.312275Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvminllwv kind= uid=b7b21554-93f3-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:26:00.315016Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvminllwv kind= uid=b7b21554-93f3-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:27:07.469501Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiz54ck kind= uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:27:07.469706Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiz54ck kind= uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:27:07.555948Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiz54ck\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiz54ck" level=info timestamp=2018-07-30T12:28:01.780113Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7dt5 kind= uid=00186617-93f4-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:28:01.780254Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7dt5 kind= uid=00186617-93f4-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:28:01.868556Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmib7dt5\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmib7dt5" Pod name: virt-controller-7d57d96b65-gqln6 Pod phase: Running level=info timestamp=2018-07-30T12:04:36.490365Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-4gnh8 Pod phase: Running level=info timestamp=2018-07-30T12:28:51.278352Z pos=vm.go:329 component=virt-handler 
msg="Domain: existing: true\n" level=info timestamp=2018-07-30T12:28:51.278368Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T12:28:51.278429Z pos=vm.go:344 component=virt-handler namespace=kubevirt-test-default name=testvmib7dt5 kind= uid=00186617-93f4-11e8-b2fe-525500d15501 msg="Shutting down due to graceful shutdown signal." level=info timestamp=2018-07-30T12:28:51.278454Z pos=vm.go:370 component=virt-handler namespace=kubevirt-test-default name=testvmib7dt5 kind= uid=00186617-93f4-11e8-b2fe-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp." level=info timestamp=2018-07-30T12:28:51.278474Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmib7dt5 kind= uid=00186617-93f4-11e8-b2fe-525500d15501 msg="Processing shutdown." level=info timestamp=2018-07-30T12:28:51.297168Z pos=vm.go:547 component=virt-handler namespace=kubevirt-test-default name=testvmib7dt5 kind= uid=00186617-93f4-11e8-b2fe-525500d15501 msg="Signaled graceful shutdown for testvmib7dt5" level=info timestamp=2018-07-30T12:28:51.297307Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmib7dt5 kind= uid=00186617-93f4-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:29:01.297450Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmib7dt5, existing: true\n" level=info timestamp=2018-07-30T12:29:01.297530Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T12:29:01.297556Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T12:29:01.297581Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T12:29:01.297644Z pos=vm.go:344 component=virt-handler namespace=kubevirt-test-default name=testvmib7dt5 kind= uid=00186617-93f4-11e8-b2fe-525500d15501 msg="Shutting down due to graceful shutdown signal." level=info timestamp=2018-07-30T12:29:01.297680Z pos=vm.go:370 component=virt-handler namespace=kubevirt-test-default name=testvmib7dt5 kind= uid=00186617-93f4-11e8-b2fe-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp." level=info timestamp=2018-07-30T12:29:01.297711Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmib7dt5 kind= uid=00186617-93f4-11e8-b2fe-525500d15501 msg="Processing shutdown." level=info timestamp=2018-07-30T12:29:01.298639Z pos=vm.go:556 component=virt-handler namespace=kubevirt-test-default name=testvmib7dt5 kind= uid=00186617-93f4-11e8-b2fe-525500d15501 msg="Grace period expired, killing deleted VirtualMachineInstance testvmib7dt5" Pod name: virt-handler-8t4vh Pod phase: Running level=info timestamp=2018-07-30T12:24:50.353639Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-alternative name=testvmizw672 kind= uid=6a80b596-93f3-11e8-b2fe-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:24:50.354574Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-alternative name=testvmizw672 kind= uid=6a80b596-93f3-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T12:24:50.354760Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmizw672, existing: true\n" level=info timestamp=2018-07-30T12:24:50.354801Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Failed\n" level=info timestamp=2018-07-30T12:24:50.354836Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:24:50.354898Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-alternative name=testvmizw672 kind= uid=6a80b596-93f3-11e8-b2fe-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:24:50.354984Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-alternative name=testvmizw672 kind= uid=6a80b596-93f3-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:24:50.364120Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmizw672, existing: false\n" level=info timestamp=2018-07-30T12:24:50.364172Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:24:50.364257Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-alternative name=testvmizw672 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:24:50.364336Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-alternative name=testvmizw672 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:24:57.652376Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmizw672, existing: false\n" level=info timestamp=2018-07-30T12:24:57.652469Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:24:57.652546Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-alternative name=testvmizw672 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:24:57.652636Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-alternative name=testvmizw672 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-launcher-testvmib7dt5-gzht2 Pod phase: Running • Failure [64.256 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869 with ACPI and some grace period seconds /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:920 should result in vmi status succeeded [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:921 Timed out after 15.000s. 
Expected
    : Running
to equal
    : Succeeded
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:942
------------------------------
STEP: Creating the VirtualMachineInstance
level=info timestamp=2018-07-30T12:28:02.151635Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmib7dt5 kind=VirtualMachineInstance uid=00186617-93f4-11e8-b2fe-525500d15501 msg="Created virtual machine pod virt-launcher-testvmib7dt5-gzht2"
level=info timestamp=2018-07-30T12:28:16.717961Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmib7dt5 kind=VirtualMachineInstance uid=00186617-93f4-11e8-b2fe-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmib7dt5-gzht2"
level=info timestamp=2018-07-30T12:28:17.735379Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmib7dt5 kind=VirtualMachineInstance uid=00186617-93f4-11e8-b2fe-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-07-30T12:28:17.741413Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmib7dt5 kind=VirtualMachineInstance uid=00186617-93f4-11e8-b2fe-525500d15501 msg="VirtualMachineInstance started."
STEP: Deleting the VirtualMachineInstance
STEP: Verifying VirtualMachineInstance's status is Succeeded
• [SLOW TEST:51.879 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Delete a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869
    with grace period greater than 0
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:945
      should run graceful shutdown
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:946
------------------------------
Pod name: disks-images-provider-98jhf Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-lmn24 Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-7d79764579-8hx6p Pod phase: Running
2018/07/30 12:30:00 http: TLS handshake error from 10.244.1.1:49936: EOF
2018/07/30 12:30:10 http: TLS handshake error from 10.244.1.1:49942: EOF
2018/07/30 12:30:20 http: TLS handshake error from 10.244.1.1:49948: EOF
level=info timestamp=2018-07-30T12:30:27.436701Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-30T12:30:28.389272Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-30T12:30:28.591904Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/30 12:30:30 http: TLS handshake error from 10.244.1.1:49954: EOF
2018/07/30 12:30:40 http: TLS handshake error from 10.244.1.1:49960: EOF
2018/07/30 12:30:50 http: TLS handshake error from 10.244.1.1:49966: EOF
level=info timestamp=2018-07-30T12:30:57.460212Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-30T12:30:58.364506Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-30T12:30:58.626724Z pos=filter.go:46 component=virt-api
remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:31:00 http: TLS handshake error from 10.244.1.1:49972: EOF 2018/07/30 12:31:10 http: TLS handshake error from 10.244.1.1:49978: EOF 2018/07/30 12:31:20 http: TLS handshake error from 10.244.1.1:49984: EOF Pod name: virt-api-7d79764579-mbmt6 Pod phase: Running 2018/07/30 12:29:00 http: TLS handshake error from 10.244.0.1:39318: EOF 2018/07/30 12:29:10 http: TLS handshake error from 10.244.0.1:39342: EOF 2018/07/30 12:29:20 http: TLS handshake error from 10.244.0.1:39366: EOF 2018/07/30 12:29:30 http: TLS handshake error from 10.244.0.1:39390: EOF 2018/07/30 12:29:40 http: TLS handshake error from 10.244.0.1:39414: EOF 2018/07/30 12:29:50 http: TLS handshake error from 10.244.0.1:39438: EOF 2018/07/30 12:30:00 http: TLS handshake error from 10.244.0.1:39462: EOF 2018/07/30 12:30:10 http: TLS handshake error from 10.244.0.1:39486: EOF 2018/07/30 12:30:20 http: TLS handshake error from 10.244.0.1:39510: EOF 2018/07/30 12:30:30 http: TLS handshake error from 10.244.0.1:39534: EOF 2018/07/30 12:30:40 http: TLS handshake error from 10.244.0.1:39558: EOF 2018/07/30 12:30:50 http: TLS handshake error from 10.244.0.1:39582: EOF 2018/07/30 12:31:00 http: TLS handshake error from 10.244.0.1:39606: EOF 2018/07/30 12:31:10 http: TLS handshake error from 10.244.0.1:39630: EOF 2018/07/30 12:31:20 http: TLS handshake error from 10.244.0.1:39654: EOF Pod name: virt-controller-7d57d96b65-45jcf Pod phase: Running level=info timestamp=2018-07-30T12:24:52.721169Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigbttb kind= uid=8f68f14e-93f3-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:24:52.721285Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigbttb kind= uid=8f68f14e-93f3-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:24:52.784609Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmigbttb\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmigbttb" level=info timestamp=2018-07-30T12:26:00.312275Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvminllwv kind= uid=b7b21554-93f3-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:26:00.315016Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvminllwv kind= uid=b7b21554-93f3-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:27:07.469501Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiz54ck kind= uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:27:07.469706Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiz54ck kind= uid=dfb8f942-93f3-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:27:07.555948Z pos=vmi.go:157 component=virt-controller service=http 
reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiz54ck\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiz54ck" level=info timestamp=2018-07-30T12:28:01.780113Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7dt5 kind= uid=00186617-93f4-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:28:01.780254Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib7dt5 kind= uid=00186617-93f4-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:28:01.868556Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmib7dt5\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmib7dt5" level=info timestamp=2018-07-30T12:29:06.043812Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigp6lz kind= uid=2666c514-93f4-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:29:06.043921Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigp6lz kind= uid=2666c514-93f4-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:29:57.921132Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigssld kind= uid=4551d942-93f4-11e8-b2fe-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:29:57.922274Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigssld kind= uid=4551d942-93f4-11e8-b2fe-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-7d57d96b65-gqln6 Pod phase: Running level=info timestamp=2018-07-30T12:04:36.490365Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-4gnh8 Pod phase: Running level=info timestamp=2018-07-30T12:30:14.277171Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmigssld kind=Domain uid=4551d942-93f4-11e8-b2fe-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-30T12:30:14.303858Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-30T12:30:14.305189Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmigssld kind= uid=4551d942-93f4-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T12:30:14.305231Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmigssld, existing: true\n" level=info timestamp=2018-07-30T12:30:14.305247Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n" level=info timestamp=2018-07-30T12:30:14.305268Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T12:30:14.305284Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T12:30:14.305324Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmigssld kind= uid=4551d942-93f4-11e8-b2fe-525500d15501 msg="No update processing required" level=info timestamp=2018-07-30T12:30:14.319727Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmigssld kind= uid=4551d942-93f4-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:30:14.320481Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmigssld, existing: true\n" level=info timestamp=2018-07-30T12:30:14.320502Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T12:30:14.320523Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T12:30:14.320539Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T12:30:14.320621Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmigssld kind= uid=4551d942-93f4-11e8-b2fe-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T12:30:14.328868Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmigssld kind= uid=4551d942-93f4-11e8-b2fe-525500d15501 msg="Synchronization loop succeeded." Pod name: virt-handler-8t4vh Pod phase: Running level=error timestamp=2018-07-30T12:30:17.613559Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmigp6lz kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance failed." level=info timestamp=2018-07-30T12:30:17.613587Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmigp6lz" level=info timestamp=2018-07-30T12:30:20.339089Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmigp6lz kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-30T12:30:20.339238Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmigp6lz, existing: false\n" level=info timestamp=2018-07-30T12:30:20.339258Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:30:20.339291Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmigp6lz kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:30:20.340095Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmigp6lz kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T12:30:20.340179Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmigp6lz, existing: false\n"
level=info timestamp=2018-07-30T12:30:20.340195Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-07-30T12:30:20.340226Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmigp6lz kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-30T12:30:20.340288Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmigp6lz kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-30T12:30:38.093816Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmigp6lz, existing: false\n"
level=info timestamp=2018-07-30T12:30:38.093907Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-07-30T12:30:38.094015Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmigp6lz kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-30T12:30:38.094118Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmigp6lz kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvmigssld-5xzkg
Pod phase: Running
Pod name: vmi-killer7xvt2
Pod phase: Succeeded

• Failure [86.658 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Killed VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:997
    should be in Failed phase [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:998

    Expected
        : Running
    to equal
        : Failed

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:1021
------------------------------
STEP: Starting a VirtualMachineInstance
level=info timestamp=2018-07-30T12:29:58.360514Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmigssld kind=VirtualMachineInstance uid=4551d942-93f4-11e8-b2fe-525500d15501 msg="Created virtual machine pod virt-launcher-testvmigssld-5xzkg"
level=info timestamp=2018-07-30T12:30:13.143715Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmigssld kind=VirtualMachineInstance uid=4551d942-93f4-11e8-b2fe-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmigssld-5xzkg"
level=info timestamp=2018-07-30T12:30:14.131321Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmigssld kind=VirtualMachineInstance uid=4551d942-93f4-11e8-b2fe-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-07-30T12:30:14.137975Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmigssld kind=VirtualMachineInstance uid=4551d942-93f4-11e8-b2fe-525500d15501 msg="VirtualMachineInstance started."
STEP: Killing the VirtualMachineInstance
level=info timestamp=2018-07-30T12:30:24.178687Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmigssld kind=VirtualMachineInstance uid=4551d942-93f4-11e8-b2fe-525500d15501 msg="Created virtual machine pod virt-launcher-testvmigssld-5xzkg"
level=info timestamp=2018-07-30T12:30:24.178812Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmigssld kind=VirtualMachineInstance uid=4551d942-93f4-11e8-b2fe-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmigssld-5xzkg"
level=info timestamp=2018-07-30T12:30:24.179204Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmigssld kind=VirtualMachineInstance uid=4551d942-93f4-11e8-b2fe-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-07-30T12:30:24.179262Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmigssld kind=VirtualMachineInstance uid=4551d942-93f4-11e8-b2fe-525500d15501 msg="VirtualMachineInstance started."
STEP: Checking that the VirtualMachineInstance has 'Failed' phase
• [SLOW TEST:80.632 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Killed VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:997
    should be left alone by virt-handler
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:1025
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
Templates
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template
    /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      should succeed to generate a VM JSON file using oc-process command
      /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:150

      Skip test that requires oc binary
      /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1403
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
Templates
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template
    /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      with given VM JSON from the Template
      /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152
        should succeed to create a VM using oc-create command
        /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:156

        Skip test that requires oc binary
        /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1403
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
Templates
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template
    /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      with given VM JSON from the Template
      /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152
        with given VM from the VM JSON
        /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158
          should succeed to launch a VMI using oc-patch command
          /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:161

          Skip test that requires oc binary
          /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1403
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
Templates
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template
    /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      with given VM JSON from the Template
      /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152
        with given VM from the VM JSON
        /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158
          with given VMI from the VM
          /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:163
            should succeed to terminate the VMI using oc-patch command
            /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:166

            Skip test that requires oc binary
            /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1403
------------------------------
• [SLOW TEST:16.881 seconds]
VNC
/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:54
    with VNC connection
    /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:62
      should allow accessing the VNC device
      /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:64
------------------------------
••
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.006 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  should succeed to start a vmi [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133

  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1365
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  should succeed to stop a running vmi [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139

  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1365
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.015 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with winrm connection [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150
    should have correct UUID
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192

    Skip Windows tests that requires PVC disk-windows
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1365
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with winrm connection [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150
    should have pod IP
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208

    Skip Windows tests that requires PVC disk-windows
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1365
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with kubectl command [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
    should succeed to start a vmi
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242

    Skip Windows tests that requires PVC disk-windows
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1365
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with kubectl command [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
    should succeed to stop a vmi
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250

    Skip Windows tests that requires PVC disk-windows
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1365
------------------------------
•
2018/07/30 08:33:51 read closing down: EOF
------------------------------
• [SLOW TEST:49.216 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80
    with cloudInitNoCloud userDataBase64 source
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81
      should have cloud-init data
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:82
------------------------------
• [SLOW TEST:160.538 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80
    with cloudInitNoCloud userDataBase64 source
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81
      with injected ssh-key
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:92
        should have ssh-key under authorized keys
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:93
------------------------------
2018/07/30 08:36:32 read closing down: EOF
panic: test timed out after 1h30m0s

goroutine 8161 [running]:
testing.(*M).startAlarm.func1()
	/gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1240 +0xfc
created by time.goFunc
	/gimme/.gimme/versions/go1.10.linux.amd64/src/time/sleep.go:172 +0x44

goroutine 1 [chan receive, 90 minutes]:
testing.(*T).Run(0xc42072ac30, 0x139e76f, 0x9, 0x1430ca0, 0x4801e6)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:825 +0x301
testing.runTests.func1(0xc42072ab40)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1063 +0x64
testing.tRunner(0xc42072ab40, 0xc42052fdf8)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:777 +0xd0
testing.runTests(0xc4207f7b40, 0x1d32a50, 0x1, 0x1, 0x412009)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1061 +0x2c4
testing.(*M).Run(0xc420770480, 0x0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:978 +0x171
main.main()
	_testmain.go:44 +0x151

goroutine 5 [chan receive]:
kubevirt.io/kubevirt/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x1d5e280)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/golang/glog/glog.go:879 +0x8b
created by kubevirt.io/kubevirt/vendor/github.com/golang/glog.init.0
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/golang/glog/glog.go:410 +0x203

goroutine 7 [syscall, 90 minutes]:
os/signal.signal_recv(0x0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/sigqueue.go:139 +0xa6
os/signal.loop()
	/gimme/.gimme/versions/go1.10.linux.amd64/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.0
	/gimme/.gimme/versions/go1.10.linux.amd64/src/os/signal/signal_unix.go:28 +0x41

goroutine 11 [select]:
kubevirt.io/kubevirt/vendor/github.com/google/goexpect.(*GExpect).ExpectSwitchCase(0xc420812960, 0xc420754d80, 0x1, 0x1, 0x1bf08eb000, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:728 +0xc7d kubevirt.io/kubevirt/vendor/github.com/google/goexpect.(*GExpect).Expect(0xc420812960, 0xc420738500, 0x1bf08eb000, 0x0, 0x0, 0x1432fc0, 0xc42000b500, 0x5c, 0xc420754df8, 0xc420754e30) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:1123 +0xdb kubevirt.io/kubevirt/vendor/github.com/google/goexpect.(*GExpect).ExpectBatch(0xc420812960, 0xc420418770, 0x1, 0x1, 0x1bf08eb000, 0x0, 0xc420296a01, 0x14d35c0, 0xc420812960, 0xc420453680) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:565 +0x4f7 kubevirt.io/kubevirt/tests_test.glob..func20.2(0xc420782c80, 0xc420418770, 0x1, 0x1, 0x1bf08eb000) /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:71 +0x222 kubevirt.io/kubevirt/tests_test.glob..func20.4.2.1() /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:144 +0x414 kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc420724060, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0x9c kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc420724060, 0x3, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x13e kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc420726020, 0x14b6d40, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x7f kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc42094e0f0, 0x0, 0x14b6d40, 0xc420055500) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:203 +0x648 kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc42094e0f0, 0x14b6d40, 0xc420055500) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xff kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc42074db80, 0xc42094e0f0, 0x0) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10d kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc42074db80, 0x1) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x329 kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc42074db80, 0xb) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x11b kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc4200ccb40, 0x7f31737c16f8, 0xc42072ac30, 0x13a0d52, 0xb, 0xc4207f7b80, 0x2, 0x2, 0x14d3600, 0xc420055500, ...) 
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x27c kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x14b7da0, 0xc42072ac30, 0x13a0d52, 0xb, 0xc4207f7b60, 0x2, 0x2, 0x2) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:221 +0x258 kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x14b7da0, 0xc42072ac30, 0x13a0d52, 0xb, 0xc42050b850, 0x1, 0x1, 0x1) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:209 +0xab kubevirt.io/kubevirt/tests_test.TestTests(0xc42072ac30) /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:43 +0xaa testing.tRunner(0xc42072ac30, 0x1430ca0) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:777 +0xd0 created by testing.(*T).Run /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:824 +0x2e0 goroutine 12 [chan receive, 90 minutes]: kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).registerForInterrupts(0xc42074db80, 0xc4200be180) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:223 +0xd1 created by kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:60 +0x88 goroutine 13 [select, 90 minutes, locked to thread]: runtime.gopark(0x1432e78, 0x0, 0x139b291, 0x6, 0x18, 0x1) /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/proc.go:291 +0x11a runtime.selectgo(0xc420479f50, 0xc4200be2a0) /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/select.go:392 +0xe50 runtime.ensureSigM.func1() /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/signal_unix.go:549 +0x1f4 runtime.goexit() /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/asm_amd64.s:2361 +0x1 goroutine 51 [IO wait]: internal/poll.runtime_pollWait(0x7f317379cf00, 0x72, 0xc420929850) /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/netpoll.go:173 +0x57 internal/poll.(*pollDesc).wait(0xc420409098, 0x72, 0xffffffffffffff00, 0x14b8f60, 0x1c497d0) /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_poll_runtime.go:85 +0x9b internal/poll.(*pollDesc).waitRead(0xc420409098, 0xc420800000, 0x8000, 0x8000) /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_poll_runtime.go:90 +0x3d internal/poll.(*FD).Read(0xc420409080, 0xc420800000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_unix.go:157 +0x17d net.(*netFD).Read(0xc420409080, 0xc420800000, 0x8000, 0x8000, 0x0, 0x8, 0x7ffb) /gimme/.gimme/versions/go1.10.linux.amd64/src/net/fd_unix.go:202 +0x4f net.(*conn).Read(0xc42000e928, 0xc420800000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/net/net.go:176 +0x6a crypto/tls.(*block).readFromUntil(0xc42051a750, 0x7f31737c1918, 0xc42000e928, 0x5, 0xc42000e928, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:493 +0x96 crypto/tls.(*Conn).readRecord(0xc4200d7880, 0x1432f17, 0xc4200d79a0, 0x20) /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:595 +0xe0 crypto/tls.(*Conn).Read(0xc4200d7880, 0xc42065c000, 0x1000, 0x1000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:1156 +0x100 bufio.(*Reader).Read(0xc420667320, 0xc4207ce2d8, 0x9, 0x9, 0xc4204522f8, 0xc420913da0, 0xc420929d10) /gimme/.gimme/versions/go1.10.linux.amd64/src/bufio/bufio.go:216 
+0x238 io.ReadAtLeast(0x14b5b40, 0xc420667320, 0xc4207ce2d8, 0x9, 0x9, 0x9, 0xc420929ce0, 0xc420929ce0, 0x406614) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:309 +0x86 io.ReadFull(0x14b5b40, 0xc420667320, 0xc4207ce2d8, 0x9, 0x9, 0xc4204522a0, 0xc420929d10, 0xc400003301) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:327 +0x58 kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.readFrameHeader(0xc4207ce2d8, 0x9, 0x9, 0x14b5b40, 0xc420667320, 0x0, 0xc400000000, 0x7ef9ad, 0xc420929fb0) /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/frame.go:237 +0x7b kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc4207ce2a0, 0xc4208080c0, 0x0, 0x0, 0x0) /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/frame.go:492 +0xa4 kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc420929fb0, 0x1431bf8, 0xc4200737b0) /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:1428 +0x8e kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc4200d4d00) /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:1354 +0x76 created by kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*Transport).newClientConn /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:579 +0x651 goroutine 4424 [chan send, 44 minutes]: kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc4209de570) /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114 created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8 goroutine 8201 [IO wait]: internal/poll.runtime_pollWait(0x7f317379cc90, 0x72, 0xc420084880) /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/netpoll.go:173 +0x57 internal/poll.(*pollDesc).wait(0xc4208f1d98, 0x72, 0xffffffffffffff00, 0x14b8f60, 0x1c497d0) /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_poll_runtime.go:85 +0x9b internal/poll.(*pollDesc).waitRead(0xc4208f1d98, 0xc420c09c00, 0x400, 0x400) /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_poll_runtime.go:90 +0x3d internal/poll.(*FD).Read(0xc4208f1d80, 0xc420c09c00, 0x400, 0x400, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_unix.go:157 +0x17d net.(*netFD).Read(0xc4208f1d80, 0xc420c09c00, 0x400, 0x400, 0x3, 0xc420084ad0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/net/fd_unix.go:202 +0x4f net.(*conn).Read(0xc42000f2a8, 0xc420c09c00, 0x400, 0x400, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/net/net.go:176 +0x6a crypto/tls.(*block).readFromUntil(0xc420808630, 0x7f31737c1918, 0xc42000f2a8, 0x5, 0xc42000f2a8, 0xc420058500) /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:493 +0x96 crypto/tls.(*Conn).readRecord(0xc420822000, 0x1432f17, 0xc420822120, 0x3) /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:595 +0xe0 crypto/tls.(*Conn).Read(0xc420822000, 0xc420ad4000, 0x2800, 0x2800, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:1156 +0x100 bufio.(*Reader).fill(0xc4204535c0) /gimme/.gimme/versions/go1.10.linux.amd64/src/bufio/bufio.go:100 +0x11e bufio.(*Reader).Peek(0xc4204535c0, 0x2, 0xc420b73cc8, 0x473a70, 0x47164f, 0xc420b73d88, 0x3) /gimme/.gimme/versions/go1.10.linux.amd64/src/bufio/bufio.go:132 +0x3a 
kubevirt.io/kubevirt/vendor/github.com/gorilla/websocket.(*Conn).read(0xc4203a7a40, 0x2, 0x1430b80, 0xc420b73d20, 0xc4208d40c0, 0xc420b73d10, 0x4887f2) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/gorilla/websocket/conn_read.go:12 +0x40 kubevirt.io/kubevirt/vendor/github.com/gorilla/websocket.(*Conn).advanceFrame(0xc4203a7a40, 0x8000, 0xc420cfe000, 0x0) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/gorilla/websocket/conn.go:780 +0x5c kubevirt.io/kubevirt/vendor/github.com/gorilla/websocket.(*Conn).NextReader(0xc4203a7a40, 0xc420b73e78, 0x443a47, 0x8000, 0x11a0e20, 0xc420b73f01) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/gorilla/websocket/conn.go:940 +0xa3 kubevirt.io/kubevirt/pkg/kubecli.(*BinaryReadWriter).Read(0xc420b04270, 0xc420cfe000, 0x8000, 0x8000, 0x8000, 0x8000, 0x40f15d) /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:102 +0x32 io.copyBuffer(0x14b5f40, 0xc42000f2a0, 0x14b5fe0, 0xc420b04270, 0xc420cfe000, 0x8000, 0x8000, 0x410558, 0x1221960, 0x1276c40) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:400 +0x164 io.Copy(0x14b5f40, 0xc42000f2a0, 0x14b5fe0, 0xc420b04270, 0xc4209d82e0, 0xc420b73fc0, 0x7e5620) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:362 +0x5a kubevirt.io/kubevirt/pkg/kubecli.(*wsStreamer).Stream.func2(0x14b5f20, 0xc42000f270, 0x14b5f40, 0xc42000f2a0, 0xc420b04270, 0xc420860a20) /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:238 +0x4b created by kubevirt.io/kubevirt/pkg/kubecli.(*wsStreamer).Stream /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:237 +0x137 goroutine 8180 [select]: kubevirt.io/kubevirt/vendor/github.com/google/goexpect.(*GExpect).waitForSession.func1(0xc420b9e010, 0xc4200be180, 0xc420812960, 0x14ba820, 0xc42000f278) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:1001 +0x109 created by kubevirt.io/kubevirt/vendor/github.com/google/goexpect.(*GExpect).waitForSession /root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:998 +0xc3 goroutine 4952 [chan send, 37 minutes]: kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc42038d710) /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114 created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8 goroutine 187 [chan send, 88 minutes]: kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc4207ed830) /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114 created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8 goroutine 8200 [select]: io.(*pipe).Read(0xc4207f3720, 0xc420c64000, 0x8000, 0x8000, 0x11a0e20, 0xc420a87701, 0xc420c64000) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/pipe.go:50 +0x115 io.(*PipeReader).Read(0xc42000f270, 0xc420c64000, 0x8000, 0x8000, 0x8000, 0x8000, 0x40f15d) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/pipe.go:127 +0x4c io.copyBuffer(0x14b6000, 0xc420b04270, 0x14b5f20, 0xc42000f270, 0xc420c64000, 0x8000, 0x8000, 0x410558, 0x1221960, 0x1276c40) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:400 +0x164 io.Copy(0x14b6000, 0xc420b04270, 0x14b5f20, 0xc42000f270, 0xc4208b4800, 0x7f31737c1a70, 0xc4208b4800) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:362 +0x5a 
kubevirt.io/kubevirt/pkg/kubecli.(*wsStreamer).Stream.func1(0xc420b04270, 0x14b5f20, 0xc42000f270, 0x14b5f40, 0xc42000f2a0, 0xc420860a20) /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:233 +0x4b created by kubevirt.io/kubevirt/pkg/kubecli.(*wsStreamer).Stream /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:232 +0xda goroutine 8181 [select]: io.(*pipe).Read(0xc4207f37c0, 0xc421130000, 0x2000, 0x2000, 0x11a0e20, 0xc420bd6a01, 0xc421130000) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/pipe.go:50 +0x115 io.(*PipeReader).Read(0xc42000f280, 0xc421130000, 0x2000, 0x2000, 0x2000, 0x2000, 0x105a57a) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/pipe.go:127 +0x4c kubevirt.io/kubevirt/vendor/github.com/google/goexpect.(*GExpect).waitForSession.func2(0x14b5f20, 0xc42000f280) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:1020 +0xdb created by kubevirt.io/kubevirt/vendor/github.com/google/goexpect.(*GExpect).waitForSession /root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:1039 +0x154 goroutine 3852 [chan send, 49 minutes]: kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc4207c3830) /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114 created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8 goroutine 3615 [chan receive, 52 minutes]: kubevirt.io/kubevirt/pkg/kubecli.(*asyncWSRoundTripper).WebsocketCallback(0xc420613a70, 0xc42052d040, 0xc420126f30, 0x0, 0x0, 0x18, 0xc420751ec8) /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:163 +0x32b kubevirt.io/kubevirt/pkg/kubecli.(*asyncWSRoundTripper).WebsocketCallback-fm(0xc42052d040, 0xc420126f30, 0x0, 0x0, 0xc42052d040, 0xc420126f30) /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:313 +0x52 kubevirt.io/kubevirt/pkg/kubecli.(*WebsocketRoundTripper).RoundTrip(0xc42080a190, 0xc42056d600, 0x0, 0x0, 0x0) /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:142 +0xab kubevirt.io/kubevirt/pkg/kubecli.(*vmis).asyncSubresourceHelper.func1(0x14b6020, 0xc42080a190, 0xc42056d600, 0xc420678720) /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:328 +0x56 created by kubevirt.io/kubevirt/pkg/kubecli.(*vmis).asyncSubresourceHelper /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:326 +0x33a goroutine 8049 [chan send, 4 minutes]: kubevirt.io/kubevirt/tests_test.glob..func23.1.2.1.1(0x14f1300, 0xc4203268c0, 0xc42000e7d8, 0xc4205631a0, 0xc4209f4ec8, 0xc4209f4f08) /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:81 +0x138 created by kubevirt.io/kubevirt/tests_test.glob..func23.1.2.1 /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:73 +0x386 goroutine 8158 [chan receive]: kubevirt.io/kubevirt/pkg/kubecli.(*asyncWSRoundTripper).WebsocketCallback(0xc4204187b0, 0xc4203a7a40, 0xc420d1a630, 0x0, 0x0, 0x18, 0xc421135ec8) /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:163 +0x32b kubevirt.io/kubevirt/pkg/kubecli.(*asyncWSRoundTripper).WebsocketCallback-fm(0xc4203a7a40, 0xc420d1a630, 0x0, 0x0, 0xc4203a7a40, 0xc420d1a630) /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:313 +0x52 kubevirt.io/kubevirt/pkg/kubecli.(*WebsocketRoundTripper).RoundTrip(0xc420418df0, 0xc420b49500, 0x0, 0x0, 0x0) /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:142 +0xab kubevirt.io/kubevirt/pkg/kubecli.(*vmis).asyncSubresourceHelper.func1(0x14b6020, 0xc420418df0, 0xc420b49500, 0xc420452660) 
	/root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:328 +0x56
created by kubevirt.io/kubevirt/pkg/kubecli.(*vmis).asyncSubresourceHelper
	/root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:326 +0x33a

goroutine 8160 [chan receive]:
kubevirt.io/kubevirt/tests.NewConsoleExpecter.func2(0xc400000010, 0xc420286100)
	/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1172 +0x3c
kubevirt.io/kubevirt/vendor/github.com/google/goexpect.(*GExpect).waitForSession(0xc420812960, 0xc420453680, 0xc420419c30, 0x14ba820, 0xc42000f278, 0x14b5f20, 0xc42000f280, 0x0, 0x0)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:1044 +0x16e
created by kubevirt.io/kubevirt/vendor/github.com/google/goexpect.SpawnGeneric
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:805 +0x299

goroutine 5106 [chan send, 36 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc4207c8f60)
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

goroutine 8159 [chan receive]:
kubevirt.io/kubevirt/pkg/kubecli.(*wsStreamer).Stream(0xc420419c00, 0x14b5f20, 0xc42000f270, 0x14b5f40, 0xc42000f2a0, 0x0, 0x0)
	/root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:243 +0x17d
kubevirt.io/kubevirt/tests.NewConsoleExpecter.func1(0xc420563800, 0x14b6060, 0xc420419c00, 0xc42000f270, 0xc42000f2a0)
	/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1162 +0x61
created by kubevirt.io/kubevirt/tests.NewConsoleExpecter
	/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1161 +0x401
make: *** [functest] Error 2
+ make cluster-down
./cluster/down.sh