+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev
+ [[ k8s-1.11.0-dev =~ openshift-.* ]]
+ [[ k8s-1.11.0-dev =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/30 10:55:35 Waiting for host: 192.168.66.101:22
2018/07/30 10:55:38 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/30 10:55:46 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/30 10:55:51 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled. [apiclient] All control plane components are healthy after 28.007518 seconds [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [markmaster] Will mark node node01 as master by adding a label and a taint [markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master="" [bootstraptoken] Using token: abcdef.1234567890123456 [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: kube-dns [addons] Applied essential addon: kube-proxy Your Kubernetes master has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:4269df83318f4f619c696a314e8151c616cb742395f254ab2bd97c1717fe5d29 + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole.rbac.authorization.k8s.io "flannel" created clusterrolebinding.rbac.authorization.k8s.io "flannel" created serviceaccount "flannel" created configmap "kube-flannel-cfg" created daemonset.extensions "kube-flannel-ds" created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node "node01" untainted 2018/07/30 10:56:36 Waiting for host: 192.168.66.102:22 2018/07/30 10:56:39 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/07/30 10:56:47 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/07/30 10:56:52 Connected to tcp://192.168.66.102:22 + kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true [preflight] Running pre-flight checks. [discovery] Trying to connect to API Server "192.168.66.101:6443" [WARNING FileExisting-crictl]: crictl not found in system path Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl [discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443" [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443" [discovery] Successfully established connection with API Server "192.168.66.101:6443" This node has joined the cluster: * Certificate signing request was sent to master and a response was received. * The Kubelet was informed of the new secure connection details. 
Run 'kubectl get nodes' on the master to see this node join the cluster. Sending file modes: C0755 39588992 kubectl Sending file modes: C0600 5454 admin.conf Cluster "kubernetes" set. Cluster "kubernetes" set. + set +e + kubectl get nodes --no-headers + cluster/kubectl.sh get nodes --no-headers node01 Ready master 46s v1.10.3 node02 Ready 16s v1.10.3 + kubectl_rc=0 + '[' 0 -ne 0 ']' ++ kubectl get nodes --no-headers ++ cluster/kubectl.sh get nodes --no-headers ++ grep NotReady + '[' -n '' ']' + set -e + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 48s v1.10.3 node02 Ready 18s v1.10.3 + make cluster-sync ./cluster/build.sh Building ... sha256:dcf2b21fa2ed11dcf9dbba21b1cca0ee3fad521a0e9aee61c06d0b0b66a4b200 go version go1.10 linux/amd64 go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh sha256:dcf2b21fa2ed11dcf9dbba21b1cca0ee3fad521a0e9aee61c06d0b0b66a4b200 go version go1.10 linux/amd64 go version go1.10 linux/amd64 find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory Compiling tests... compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 40.39 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> bfe77d5699ed Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> b00c84523b53 Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> b76b8bd8cd39 Step 5/8 : USER 1001 ---> Using cache ---> b6d9ad9ed232 Step 6/8 : COPY virt-controller /usr/bin/virt-controller ---> 7fc60d36e10b Removing intermediate container 1aa90446782b Step 7/8 : ENTRYPOINT /usr/bin/virt-controller ---> Running in 7d394cb18853 ---> 5e0c9cbdb1e7 Removing intermediate container 7d394cb18853 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.11.0-dev1" '' "virt-controller" '' ---> Running in d52e129157a6 ---> 9eb5dea07076 Removing intermediate container d52e129157a6 Successfully built 9eb5dea07076 Sending build context to Docker daemon 43.32 MB Step 1/9 : FROM kubevirt/libvirt:4.2.0 ---> 5f0bfe81a3e0 Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 945996802736 Step 3/9 : RUN dnf -y install socat genisoimage && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Running in c8ca3bd5269b Fedora 28 - x86_64 - Updates 4.1 MB/s | 20 MB 00:05 Virtualization packages from Rawhide built for 175 kB/s | 57 kB 00:00 Fedora 28 - x86_64 1.3 MB/s | 60 MB 00:48 Last metadata expiration check: 0:00:00 ago on Mon Jul 30 11:02:46 2018. Dependencies resolved. 
================================================================================ Package Arch Version Repository Size ================================================================================ Installing: genisoimage x86_64 1.1.11-38.fc28 fedora 315 k socat x86_64 1.7.3.2-6.fc28 fedora 297 k Installing dependencies: libusal x86_64 1.1.11-38.fc28 fedora 144 k Transaction Summary ================================================================================ Install 3 Packages Total download size: 755 k Installed size: 2.7 M Downloading Packages: (1/3): libusal-1.1.11-38.fc28.x86_64.rpm 691 kB/s | 144 kB 00:00 (2/3): genisoimage-1.1.11-38.fc28.x86_64.rpm 1.1 MB/s | 315 kB 00:00 (3/3): socat-1.7.3.2-6.fc28.x86_64.rpm 976 kB/s | 297 kB 00:00 -------------------------------------------------------------------------------- Total 1.1 MB/s | 755 kB 00:00 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Installing : libusal-1.1.11-38.fc28.x86_64 1/3 Running scriptlet: libusal-1.1.11-38.fc28.x86_64 1/3 Installing : genisoimage-1.1.11-38.fc28.x86_64 2/3 Running scriptlet: genisoimage-1.1.11-38.fc28.x86_64 2/3 Installing : socat-1.7.3.2-6.fc28.x86_64 3/3 Running scriptlet: socat-1.7.3.2-6.fc28.x86_64 3/3 Verifying : socat-1.7.3.2-6.fc28.x86_64 1/3 Verifying : genisoimage-1.1.11-38.fc28.x86_64 2/3 Verifying : libusal-1.1.11-38.fc28.x86_64 3/3 Installed: genisoimage.x86_64 1.1.11-38.fc28 socat.x86_64 1.7.3.2-6.fc28 libusal.x86_64 1.1.11-38.fc28 Complete! 23 files removed ---> 1dcd22d08d0e Removing intermediate container c8ca3bd5269b Step 4/9 : COPY virt-launcher /usr/bin/virt-launcher ---> d65f299e8b01 Removing intermediate container 4bc560bd0fe3 Step 5/9 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64 ---> Running in 0f42f149178c  ---> bdc83edd1a65 Removing intermediate container 0f42f149178c Step 6/9 : RUN mkdir -p /usr/share/kubevirt/virt-launcher ---> Running in 9435987a7dc9  ---> 5beec434512d Removing intermediate container 9435987a7dc9 Step 7/9 : COPY sock-connector /usr/share/kubevirt/virt-launcher/ ---> 7f783bb35ed1 Removing intermediate container b8cb31bc22c1 Step 8/9 : ENTRYPOINT /usr/bin/virt-launcher ---> Running in ad7bdebaeb92 ---> 0be4caef1c9b Removing intermediate container ad7bdebaeb92 Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.11.0-dev1" '' "virt-launcher" '' ---> Running in e1b5c26237b2 ---> 9d54d03e84b7 Removing intermediate container e1b5c26237b2 Successfully built 9d54d03e84b7 Sending build context to Docker daemon 41.69 MB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> bfe77d5699ed Step 3/5 : COPY virt-handler /usr/bin/virt-handler ---> 07ed09969ccb Removing intermediate container 936ef4c3adce Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in 492395130af9 ---> 6a3c040e81a0 Removing intermediate container 492395130af9 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.11.0-dev1" '' "virt-handler" '' ---> Running in 8228e66ea427 ---> ae7d7ca3cfc4 Removing intermediate container 8228e66ea427 Successfully built ae7d7ca3cfc4 Sending build context to Docker daemon 38.81 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> bfe77d5699ed Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> ed1ebf600ee1 Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> 0769dad023e5 Step 5/8 : USER 1001 ---> 
Using cache ---> 0cb65afb0c2b Step 6/8 : COPY virt-api /usr/bin/virt-api ---> 79152ebfffb5 Removing intermediate container a1e8ef102ad9 Step 7/8 : ENTRYPOINT /usr/bin/virt-api ---> Running in 3145da48c7a9 ---> 1a1e9954d674 Removing intermediate container 3145da48c7a9 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.11.0-dev1" '' "virt-api" '' ---> Running in 667082637e4b ---> 2f95d5017a87 Removing intermediate container 667082637e4b Successfully built 2f95d5017a87 Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:28 ---> cc510acfcd70 Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> bfe77d5699ed Step 3/7 : ENV container docker ---> Using cache ---> 62847a2a1fa8 Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> 02134835a6aa Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> ec0843818da7 Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> 754029bb4bd2 Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.11.0-dev1" '' ---> Using cache ---> 6ddf54cf0c4f Successfully built 6ddf54cf0c4f Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> bfe77d5699ed Step 3/5 : ENV container docker ---> Using cache ---> 62847a2a1fa8 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> 207487abe7b2 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.11.0-dev1" '' "vm-killer" '' ---> Using cache ---> 341b240363c4 Successfully built 341b240363c4 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> 68f33cf86aab Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> 5734d749eb5c Step 3/7 : ENV container docker ---> Using cache ---> f8775a77966f Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> 1a40cf222a61 Step 5/7 : ADD entry-point.sh / ---> Using cache ---> 77b545d92fe7 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> dfe20d463305 Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.11.0-dev1" '' "registry-disk-v1alpha" '' ---> Using cache ---> 6fc8c58a6351 Successfully built 6fc8c58a6351 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33150/kubevirt/registry-disk-v1alpha:devel ---> 6fc8c58a6351 Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> 8dfc212b5bba Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> ee012a932c13 Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.11.0-dev1" '' ---> Using cache ---> d805e1e5fb07 Successfully built d805e1e5fb07 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33150/kubevirt/registry-disk-v1alpha:devel ---> 6fc8c58a6351 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2ce3aa5ed287 Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> 5123c3256304 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.11.0-dev1" '' ---> Using cache ---> 176931053a22 Successfully built 176931053a22 Sending build context 
to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33150/kubevirt/registry-disk-v1alpha:devel ---> 6fc8c58a6351 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2ce3aa5ed287 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> 4b736400c447 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.11.0-dev1" '' ---> Using cache ---> fa8e68df1445 Successfully built fa8e68df1445 Sending build context to Docker daemon 35.59 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> bfe77d5699ed Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> 985fe391c056 Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 3b2cae8ac543 Step 5/8 : USER 1001 ---> Using cache ---> 0c06e5b4a900 Step 6/8 : COPY subresource-access-test /subresource-access-test ---> 61d492d8c3d6 Removing intermediate container 9c6825a975be Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in 554a1dd4949d ---> 41d6421dbf11 Removing intermediate container 554a1dd4949d Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.11.0-dev1" '' "subresource-access-test" '' ---> Running in c27d65cb3a0c ---> e0fc465da724 Removing intermediate container c27d65cb3a0c Successfully built e0fc465da724 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:28 ---> cc510acfcd70 Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> bfe77d5699ed Step 3/9 : ENV container docker ---> Using cache ---> 62847a2a1fa8 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> d3456b1644b1 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> 0ba81fddbba1 Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 5d33abe3f819 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 783826523be1 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 711bc8d15952 Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.11.0-dev1" '' "winrmcli" '' ---> Using cache ---> e3070cedeaf2 Successfully built e3070cedeaf2 Sending build context to Docker daemon 36.8 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> e3238544ad97 Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar ---> 446d7f755efc Removing intermediate container ea3c18720df6 Step 4/5 : ENTRYPOINT /example-hook-sidecar ---> Running in 1ad7a9ae9e51 ---> 371c4e5535b8 Removing intermediate container 1ad7a9ae9e51 Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-k8s-1.11.0-dev1" '' ---> Running in e23c54881818 ---> b00be18caab9 Removing intermediate container e23c54881818 Successfully built b00be18caab9 hack/build-docker.sh push The push refers to a repository [localhost:33150/kubevirt/virt-controller] 145891fc7933: Preparing aa89340cf7a8: Preparing 891e1e4ef82a: Preparing aa89340cf7a8: Pushed 145891fc7933: Pushed 891e1e4ef82a: Pushed devel: digest: sha256:d9ee7bae26a32427a86777f80b90c922b23103868aca9b4a128b547a1411d08e size: 949 The push refers to a repository [localhost:33150/kubevirt/virt-launcher] 12c2e8b9d38b: Preparing 4c25ce12781d: Preparing a66dcf4eaaba: Preparing 5c9b9fcd71fe: Preparing af293cb2890d: Preparing da38cf808aa5: Preparing 
b83399358a92: Preparing 186d8b3e4fd8: Preparing fa6154170bf5: Preparing b83399358a92: Waiting 5eefb9960a36: Preparing 186d8b3e4fd8: Waiting 891e1e4ef82a: Preparing fa6154170bf5: Waiting 891e1e4ef82a: Waiting 5eefb9960a36: Waiting da38cf808aa5: Waiting 4c25ce12781d: Pushed 12c2e8b9d38b: Pushed da38cf808aa5: Pushed b83399358a92: Pushed 186d8b3e4fd8: Pushed fa6154170bf5: Pushed 891e1e4ef82a: Mounted from kubevirt/virt-controller a66dcf4eaaba: Pushed af293cb2890d: Pushed 5c9b9fcd71fe: Pushed 5eefb9960a36: Pushed devel: digest: sha256:7430bcab665f8dec83385cef8b9b89b891933c60c04b43efbc12b48f45f69c28 size: 2620 The push refers to a repository [localhost:33150/kubevirt/virt-handler] 61defb246192: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-launcher 61defb246192: Pushed devel: digest: sha256:bcbea7bd80391567a80ce38c392c8cfeb1851d8e90ca7394346d72796b6b2efe size: 741 The push refers to a repository [localhost:33150/kubevirt/virt-api] 3415c1254498: Preparing 82fc744c99b4: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-handler 82fc744c99b4: Pushed 3415c1254498: Pushed devel: digest: sha256:58a18ceed507b3bddc01a5eb09854768f4bc21b6cfe38398690dc426b57b5e09 size: 948 The push refers to a repository [localhost:33150/kubevirt/disks-images-provider] 71ad31feb2c5: Preparing 21d4b721776e: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-api 71ad31feb2c5: Pushed 21d4b721776e: Pushed devel: digest: sha256:c5081b469e8aa63edb687b511d3d6b11d6bbd7cd65c608bb72236931dd71ad49 size: 948 The push refers to a repository [localhost:33150/kubevirt/vm-killer] c4cfadeeaf5f: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/disks-images-provider c4cfadeeaf5f: Pushed devel: digest: sha256:dbf35cef10ba35142b90459ba5b308d9adf5d29d2a736aaadf0f27cdad2394f5 size: 740 The push refers to a repository [localhost:33150/kubevirt/registry-disk-v1alpha] 661cce8d8e52: Preparing 41e0baba3077: Preparing 25edbec0eaea: Preparing 661cce8d8e52: Pushed 41e0baba3077: Pushed 25edbec0eaea: Pushed devel: digest: sha256:cceb147084731bb8e5367dc5c744bd810cc05c0e74b2a90a873aab9343ab4ae4 size: 948 The push refers to a repository [localhost:33150/kubevirt/cirros-registry-disk-demo] 0f126af80aad: Preparing 661cce8d8e52: Preparing 41e0baba3077: Preparing 25edbec0eaea: Preparing 41e0baba3077: Mounted from kubevirt/registry-disk-v1alpha 25edbec0eaea: Mounted from kubevirt/registry-disk-v1alpha 661cce8d8e52: Mounted from kubevirt/registry-disk-v1alpha 0f126af80aad: Pushed devel: digest: sha256:987c3312860d4df7effbdfa03d749fef483d81616205946787ba8dadec2df64d size: 1160 The push refers to a repository [localhost:33150/kubevirt/fedora-cloud-registry-disk-demo] 48f39ecdfd3f: Preparing 661cce8d8e52: Preparing 41e0baba3077: Preparing 25edbec0eaea: Preparing 661cce8d8e52: Mounted from kubevirt/cirros-registry-disk-demo 25edbec0eaea: Mounted from kubevirt/cirros-registry-disk-demo 41e0baba3077: Mounted from kubevirt/cirros-registry-disk-demo 48f39ecdfd3f: Pushed devel: digest: sha256:994817c92605ae09a2f9ed964d7f3f365edde5cf229e47524ddd7b98cd116510 size: 1161 The push refers to a repository [localhost:33150/kubevirt/alpine-registry-disk-demo] 3f93336c6336: Preparing 661cce8d8e52: Preparing 41e0baba3077: Preparing 25edbec0eaea: Preparing 661cce8d8e52: Mounted from kubevirt/fedora-cloud-registry-disk-demo 41e0baba3077: Mounted from kubevirt/fedora-cloud-registry-disk-demo 25edbec0eaea: Mounted from kubevirt/fedora-cloud-registry-disk-demo 3f93336c6336: Pushed devel: 
digest: sha256:4752131857024014b0ea6f7a8affa1cdd20783aa89111080802807748abcd97a size: 1160 The push refers to a repository [localhost:33150/kubevirt/subresource-access-test] 1056c728e55f: Preparing 25cb73590a9d: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/vm-killer 25cb73590a9d: Pushed 1056c728e55f: Pushed devel: digest: sha256:a378796a4b621ec98bdb22b581a930f40c7fc4f44149dd20018db5125696eab2 size: 948 The push refers to a repository [localhost:33150/kubevirt/winrmcli] f8083e002d0b: Preparing 53c709abc882: Preparing 9ca98a0f492b: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/subresource-access-test f8083e002d0b: Pushed 9ca98a0f492b: Pushed 53c709abc882: Pushed devel: digest: sha256:7246329262fc861f14650480022501b02114c4ca8770bffc24235169183e50c8 size: 1165 The push refers to a repository [localhost:33150/kubevirt/example-hook-sidecar] 2fac798039d3: Preparing 39bae602f753: Preparing 2fac798039d3: Pushed 39bae602f753: Pushed devel: digest: sha256:7385b5f3e67ca053fefb55482538269da41ace99cb93afd290eecd5dd50cecb2 size: 740 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.11.0-dev ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.11.0-dev1 ++ job_prefix=kubevirt-functional-tests-k8s-1.11.0-dev1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-157-gc9627f2 ++ KUBEVIRT_VERSION=v0.7.0-157-gc9627f2 + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source 
hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:33150/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... + cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete 
secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pods -l kubevirt.io No resources found + _kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete deployment -l kubevirt.io No 
resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n 
kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.11.0-dev ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.11.0-dev1 ++ job_prefix=kubevirt-functional-tests-k8s-1.11.0-dev1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-157-gc9627f2 ++ KUBEVIRT_VERSION=v0.7.0-157-gc9627f2 + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli 
cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:33150/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... + [[ -z k8s-1.11.0-dev ]] + [[ k8s-1.11.0-dev =~ .*-dev ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out/manifests/dev -R + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out/manifests/dev -R serviceaccount "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver-auth-delegator" created rolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created role.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller-cluster-admin" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-privileged-cluster-admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:edit" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:view" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:default" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt.io:default" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancereplicasets.kubevirt.io" created service "virt-api" created deployment.extensions "virt-api" created service "virt-controller" created deployment.extensions "virt-controller" created daemonset.extensions "virt-handler" created customresourcedefinition.apiextensions.k8s.io "virtualmachines.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstances.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancepresets.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "host-path-disk-alpine" created persistentvolumeclaim "disk-custom" 
created
persistentvolume "host-path-disk-custom" created
daemonset.extensions "disks-images-provider" created
serviceaccount "kubevirt-testing" created
clusterrolebinding.rbac.authorization.k8s.io "kubevirt-testing-cluster-admin" created
+ [[ k8s-1.10.3 =~ os-* ]]
+ echo Done
Done
+ namespaces=(kube-system default)
+ [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]]
+ timeout=300
+ sample=30
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n 'virt-api-7586947775-lfhnr 0/1 ContainerCreating 0 7s virt-handler-7qnlr 0/1 ContainerCreating 0 7s virt-handler-db5hm 0/1 ContainerCreating 0 7s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ cluster/kubectl.sh get pods -n kube-system --no-headers
+ grep -v Running
disks-images-provider-qctjs 0/1 ContainerCreating 0 2s
disks-images-provider-t6r7h 0/1 ContainerCreating 0 2s
virt-api-7586947775-lfhnr 0/1 ContainerCreating 0 8s
virt-handler-db5hm 0/1 ContainerCreating 0 8s
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
disks-images-provider-qctjs        1/1       Running   0          1m
disks-images-provider-t6r7h        1/1       Running   0          1m
etcd-node01                        1/1       Running   0          16m
kube-apiserver-node01              1/1       Running   0          16m
kube-controller-manager-node01     1/1       Running   0          16m
kube-dns-86f4d74b45-8qx65          3/3       Running   0          16m
kube-flannel-ds-4nj96              1/1       Running   0          16m
kube-flannel-ds-7hx4q              1/1       Running   0          16m
kube-proxy-9wjw7                   1/1       Running   0          16m
kube-proxy-fr6m4                   1/1       Running   0          16m
kube-scheduler-node01              1/1       Running   0          15m
virt-api-7586947775-lfhnr          1/1       Running   0          1m
virt-controller-7d57d96b65-8gsdc   1/1       Running   0          1m
virt-controller-7d57d96b65-lkdkn   1/1       Running   0          1m
virt-handler-7qnlr                 1/1       Running   0          1m
virt-handler-db5hm                 1/1       Running   0          1m
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n default --no-headers
++ cluster/kubectl.sh get pods -n default --no-headers
++ grep -v Running
No resources found.
+ '[' -n '' ']'
+ current_time=0
++ grep false
++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n default
+ cluster/kubectl.sh get pods -n default
No resources found.
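The readiness wait traced above amounts to a polling loop along the following lines. This is a minimal sketch reconstructed from the xtrace: the namespaces, timeout=300 and sample=30 come from the trace, while the loop structure itself is an assumption, not the verbatim contents of the deploy script.

    # Sketch: poll each namespace until no pod is outside Running and every container reports ready.
    namespaces=(kube-system default)
    timeout=300
    sample=30
    for ns in "${namespaces[@]}"; do
        current_time=0
        # Phase check: any pod not in the Running phase keeps us waiting.
        while [ -n "$(kubectl get pods -n "$ns" --no-headers | grep -v Running)" ]; do
            echo 'Waiting for kubevirt pods to enter the Running state ...'
            kubectl get pods -n "$ns" --no-headers | grep -v Running
            sleep "$sample"
            current_time=$((current_time + sample))
            [ "$current_time" -gt "$timeout" ] && exit 1
        done
        # Container readiness check: a single "false" means some container is not ready yet.
        while kubectl get pods -n "$ns" '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers | grep -q false; do
            sleep "$sample"
        done
        kubectl get pods -n "$ns"
    done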
+ kubectl version + cluster/kubectl.sh version Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} + ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/junit.xml' + [[ k8s-1.11.0-dev =~ windows.* ]] + FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-dev/junit.xml' + make functest hack/dockerized "hack/build-func-tests.sh" sha256:dcf2b21fa2ed11dcf9dbba21b1cca0ee3fad521a0e9aee61c06d0b0b66a4b200 go version go1.10 linux/amd64 go version go1.10 linux/amd64 Compiling tests... compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1532949334 Will run 151 of 151 specs 2018/07/30 07:16:24 read closing down: EOF Service cluster-ip-vmi successfully exposed for virtualmachineinstance testvmitnlwg • [SLOW TEST:53.304 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:68 Should expose a Cluster IP service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:71 ------------------------------ Service cluster-ip-target-vmi successfully exposed for virtualmachineinstance testvmitnlwg •Service node-port-vmi successfully exposed for virtualmachineinstance testvmitnlwg ------------------------------ • [SLOW TEST:9.549 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose NodePort service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:124 Should expose a NodePort service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:129 ------------------------------ 2018/07/30 07:17:27 read closing down: EOF Service cluster-ip-udp-vmi successfully exposed for virtualmachineinstance testvmijc5zp • [SLOW TEST:51.901 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VMI /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:166 Expose ClusterIP UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:173 Should expose a ClusterIP service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:177 ------------------------------ Service node-port-udp-vmi successfully exposed for virtualmachineinstance testvmijc5zp • [SLOW TEST:8.336 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VMI /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:166 Expose NodePort UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:205 Should expose a NodePort service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:210 ------------------------------ 2018/07/30 07:18:21 read closing down: EOF 2018/07/30 07:18:32 read closing down: EOF Service cluster-ip-vmirs successfully exposed for vmirs replicasetxf7v5 • [SLOW 
TEST:56.633 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VMI replica set /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:253 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:286 Should create a ClusterIP service on VMRS and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:290 ------------------------------ Service cluster-ip-vm successfully exposed for virtualmachine testvmi4dkbh VM testvmi4dkbh was scheduled to start 2018/07/30 07:19:20 read closing down: EOF • [SLOW TEST:49.023 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on an VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:318 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:362 Connect to ClusterIP services that was set when VM was offline /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:363 ------------------------------ •• ------------------------------ • [SLOW TEST:18.363 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should update VirtualMachine once VMIs are up /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:195 ------------------------------ •• ------------------------------ • [SLOW TEST:82.826 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should recreate VirtualMachineInstance if it gets deleted /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:245 ------------------------------ • [SLOW TEST:86.839 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should recreate VirtualMachineInstance if the VirtualMachineInstance's pod gets deleted /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:265 ------------------------------ • [SLOW TEST:92.816 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should stop VirtualMachineInstance if running set to false /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:325 ------------------------------ Pod name: disks-images-provider-qctjs Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-t6r7h Pod phase: Running copy all images to host mount directory Pod name: virt-api-7586947775-lfhnr Pod phase: Running 2018/07/30 11:29:20 http: TLS handshake error from 10.244.1.1:51962: EOF level=info timestamp=2018-07-30T11:29:29.162422Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T11:29:29.258096Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 11:29:30 http: TLS handshake error from 10.244.1.1:51968: EOF level=info timestamp=2018-07-30T11:29:31.519635Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:29:37.425198Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 
proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T11:29:37.428843Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:29:40 http: TLS handshake error from 10.244.1.1:51974: EOF 2018/07/30 11:29:50 http: TLS handshake error from 10.244.1.1:51980: EOF level=info timestamp=2018-07-30T11:29:59.162488Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T11:29:59.435914Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 11:30:00 http: TLS handshake error from 10.244.1.1:51986: EOF level=info timestamp=2018-07-30T11:30:01.647742Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 11:30:10 http: TLS handshake error from 10.244.1.1:51992: EOF 2018/07/30 11:30:20 http: TLS handshake error from 10.244.1.1:51998: EOF Pod name: virt-controller-7d57d96b65-8gsdc Pod phase: Running level=info timestamp=2018-07-30T11:12:34.032471Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-lkdkn Pod phase: Running level=info timestamp=2018-07-30T11:25:20.609886Z pos=vm.go:377 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=151ea674-93eb-11e8-a6c7-525500d15501 msg="Setting stabile UUID '76115ecd-c2f8-5c60-a007-4aa78d689c82' (was '')" level=info timestamp=2018-07-30T11:25:20.676023Z pos=vm.go:459 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=3e46a185-93eb-11e8-a6c7-525500d15501 msg="Looking for VirtualMachineInstance Ref" level=info timestamp=2018-07-30T11:25:20.678646Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=3e46a185-93eb-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:25:20.678891Z pos=vm.go:470 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=3e46a185-93eb-11e8-a6c7-525500d15501 msg="VirtualMachineInstance created bacause testvmi6wwbf was added." 
level=info timestamp=2018-07-30T11:25:20.678991Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=3e46a185-93eb-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:25:20.679016Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=151ea674-93eb-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:25:20.680540Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=151ea674-93eb-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:25:20.720634Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=151ea674-93eb-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:25:20.720827Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=151ea674-93eb-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:25:20.737859Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=151ea674-93eb-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:25:20.738053Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=151ea674-93eb-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:25:20.846048Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=151ea674-93eb-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:25:20.846288Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=151ea674-93eb-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:25:20.884950Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=151ea674-93eb-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:25:20.885073Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=151ea674-93eb-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: true" Pod name: virt-handler-7qnlr Pod phase: Running level=info timestamp=2018-07-30T11:25:20.942740Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=1523d4b8-93eb-11e8-a6c7-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:25:20.943250Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=1523d4b8-93eb-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." 
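The virt-controller-7d57d96b65-lkdkn entries above keep logging "Started processing VM" followed by "Creating or the VirtualMachineInstance: true" for testvmi6wwbf. A minimal sketch of that decision, assuming a Running flag on the VM spec and a lookup of any existing VMI; the types and helper names below are illustrative stand-ins, not KubeVirt's actual controller code:

// Sketch of the create-or-skip decision behind the
// "Creating or the VirtualMachineInstance: <bool>" log line. Illustrative only.
package main

import "fmt"

type VirtualMachine struct {
	Name    string
	Running bool // spec.running, as toggled by the tests
}

type VirtualMachineInstance struct {
	Name string
}

// ensureVMI reports whether a VMI still needs to be created for this VM.
func ensureVMI(vm VirtualMachine, existing *VirtualMachineInstance) bool {
	if !vm.Running {
		return false // spec says stopped: no VMI should exist
	}
	return existing == nil // running requested and no VMI yet: create one
}

func main() {
	vm := VirtualMachine{Name: "testvmi6wwbf", Running: true}
	fmt.Printf("Creating or the VirtualMachineInstance: %v\n", ensureVMI(vm, nil))
}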
level=info timestamp=2018-07-30T11:25:20.943874Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi6wwbf, existing: true\n" level=info timestamp=2018-07-30T11:25:20.944028Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Failed\n" level=info timestamp=2018-07-30T11:25:20.944106Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:25:20.944266Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=1523d4b8-93eb-11e8-a6c7-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:25:20.944554Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi6wwbf kind= uid=1523d4b8-93eb-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:25:20.999269Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi6wwbf, existing: false\n" level=info timestamp=2018-07-30T11:25:20.999427Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:25:20.999653Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi6wwbf kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:25:20.999826Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi6wwbf kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:25:21.355164Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi6wwbf, existing: false\n" level=info timestamp=2018-07-30T11:25:21.355362Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:25:21.355655Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi6wwbf kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:25:21.355882Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi6wwbf kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-db5hm Pod phase: Running level=info timestamp=2018-07-30T11:19:51.985923Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:19:51.986038Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi5cm6mbrb5w kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:19:51.986323Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi5cm6mbrb5w kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:19:51.986431Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmijc5zp, existing: false\n" level=info timestamp=2018-07-30T11:19:51.986474Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:19:51.986547Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmijc5zp kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." 
level=info timestamp=2018-07-30T11:19:51.986681Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmijc5zp kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:20:09.038893Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi5cm6mbrb5w, existing: false\n" level=info timestamp=2018-07-30T11:20:09.042394Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:20:09.042902Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi5cm6mbrb5w kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:20:09.044680Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi5cm6mbrb5w kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:20:09.045222Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmijc5zp, existing: false\n" level=info timestamp=2018-07-30T11:20:09.045362Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:20:09.045528Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmijc5zp kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:20:09.045941Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmijc5zp kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-launcher-testvmi6wwbf-ktg7s Pod phase: Running panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x488cf3] goroutine 7 [running]: io.copyBuffer(0x142d000, 0xc42000e018, 0x0, 0x0, 0xc421952000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:400 +0x143 io.Copy(0x142d000, 0xc42000e018, 0x0, 0x0, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:362 +0x5a kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor.func1(0xc42046c160, 0xc4200cc600) /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:264 +0xb4 created by kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:261 +0x15f • Failure [370.663 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should start and stop VirtualMachineInstance multiple times [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:333 Timed out after 300.000s. 
Expected : false to be true /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:157 ------------------------------ STEP: Doing run: 0 STEP: Starting the VirtualMachineInstance STEP: VMI has the running condition STEP: Stopping the VirtualMachineInstance STEP: VMI has not the running condition STEP: Doing run: 1 STEP: Starting the VirtualMachineInstance STEP: VMI has the running condition • [SLOW TEST:135.078 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should not update the VirtualMachineInstance spec if Running /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:346 ------------------------------ Pod name: disks-images-provider-qctjs Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-t6r7h Pod phase: Running copy all images to host mount directory Pod name: virt-api-7586947775-lfhnr Pod phase: Running level=info timestamp=2018-07-30T11:36:29.017433Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T11:36:29.261979Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:36:30 http: TLS handshake error from 10.244.1.1:52222: EOF level=info timestamp=2018-07-30T11:36:31.052345Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:36:33.422126Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:36:36.840775Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T11:36:36.844717Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:36:40 http: TLS handshake error from 10.244.1.1:52228: EOF 2018/07/30 11:36:50 http: TLS handshake error from 10.244.1.1:52234: EOF level=info timestamp=2018-07-30T11:36:59.319289Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:37:00 http: TLS handshake error from 10.244.1.1:52240: EOF level=info timestamp=2018-07-30T11:37:01.182331Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:37:03.569018Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 11:37:10 http: TLS handshake error from 10.244.1.1:52246: EOF 2018/07/30 11:37:20 http: TLS handshake error from 10.244.1.1:52252: EOF Pod name: virt-controller-7d57d96b65-8gsdc Pod phase: Running level=info timestamp=2018-07-30T11:12:34.032471Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-lkdkn Pod phase: Running level=info 
timestamp=2018-07-30T11:32:37.706557Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42bf0629-93ec-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:32:37.706634Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42bf0629-93ec-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:32:37.722568Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42bf0629-93ec-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:32:37.722660Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42bf0629-93ec-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:32:37.780853Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42bf0629-93ec-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:32:37.780940Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42bf0629-93ec-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:32:37.789737Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixtqkg\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixtqkg" level=info timestamp=2018-07-30T11:32:37.809968Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42bf0629-93ec-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:32:37.810062Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42bf0629-93ec-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:32:53.691559Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42bf0629-93ec-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:32:53.694270Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42bf0629-93ec-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:32:55.021010Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42bf0629-93ec-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:32:55.027218Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42bf0629-93ec-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: true" level=info timestamp=2018-07-30T11:32:55.058471Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42bf0629-93ec-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:32:55.058660Z pos=vm.go:186 component=virt-controller service=http 
namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42bf0629-93ec-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: true" Pod name: virt-handler-7qnlr Pod phase: Running level=error timestamp=2018-07-30T11:33:18.552936Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmihrzfk kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance failed." level=info timestamp=2018-07-30T11:33:18.553222Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmihrzfk" level=info timestamp=2018-07-30T11:33:20.916721Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmihrzfk kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-30T11:33:20.917002Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmihrzfk, existing: false\n" level=info timestamp=2018-07-30T11:33:20.917080Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:33:20.917205Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmihrzfk kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:33:20.917939Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmihrzfk kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:33:20.918828Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmihrzfk, existing: false\n" level=info timestamp=2018-07-30T11:33:20.918947Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:33:20.919120Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmihrzfk kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:33:20.919307Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmihrzfk kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:33:28.793915Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmihrzfk, existing: false\n" level=info timestamp=2018-07-30T11:33:28.794202Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:33:28.794486Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmihrzfk kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:33:28.794758Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmihrzfk kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
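The virt-handler blocks above follow a fixed cadence: "Processing vmi <name>, existing: <bool>", "Domain: existing: <bool>", then either "Processing vmi update", "No update processing required", or "Processing local ephemeral data cleanup for shutdown domain", and finally "Synchronization loop succeeded." A rough sketch of that level-driven comparison between the cluster object and the libvirt domain, with illustrative types that only approximate the real handler:

// Illustrative reconciliation step: compare the desired VMI against the
// observed libvirt domain and pick an action, as the log lines above suggest.
// This is a sketch, not virt-handler's real sync logic.
package main

import "fmt"

type observed struct {
	vmiExists    bool   // the VirtualMachineInstance object is still in the cluster
	vmiPhase     string // e.g. Scheduled, Running, Failed
	domainExists bool   // libvirt still has a domain for it
}

func sync(name string, o observed) string {
	fmt.Printf("Processing vmi %s, existing: %v\n", name, o.vmiExists)
	fmt.Printf("Domain: existing: %v\n", o.domainExists)
	switch {
	case !o.vmiExists && !o.domainExists, o.vmiExists && o.vmiPhase == "Failed":
		return "cleanup local ephemeral data" // shut-down domain: only scratch files remain
	case o.vmiExists && o.domainExists:
		return "process vmi update"
	default:
		return "no update processing required"
	}
}

func main() {
	fmt.Println(sync("testvmihrzfk", observed{vmiExists: false, domainExists: false}))
}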
Pod name: virt-handler-db5hm Pod phase: Running level=info timestamp=2018-07-30T11:32:54.955882Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmixtqkg kind=Domain uid=42c95ab3-93ec-11e8-a6c7-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-30T11:32:54.994322Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-30T11:32:54.997415Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42c95ab3-93ec-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:32:54.997513Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmixtqkg, existing: true\n" level=info timestamp=2018-07-30T11:32:54.997539Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n" level=info timestamp=2018-07-30T11:32:54.997569Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:32:54.997596Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:32:54.997657Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42c95ab3-93ec-11e8-a6c7-525500d15501 msg="No update processing required" level=info timestamp=2018-07-30T11:32:55.033998Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42c95ab3-93ec-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:32:55.035420Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmixtqkg, existing: true\n" level=info timestamp=2018-07-30T11:32:55.035501Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T11:32:55.037771Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:32:55.037852Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:32:55.038058Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42c95ab3-93ec-11e8-a6c7-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T11:32:55.047476Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmixtqkg kind= uid=42c95ab3-93ec-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." Pod name: virt-launcher-testvmixtqkg-8lmmq Pod phase: Running 2018/07/30 07:37:23 read closing down: EOF • Failure [285.390 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should survive guest shutdown, multiple times [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:387 Timed out after 240.000s. 
No new VirtualMachineInstance instance showed up Expected : false to be true /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:429 ------------------------------ STEP: Creating new VMI, not running STEP: Starting the VirtualMachineInstance STEP: VMI has the running condition STEP: Getting the running VirtualMachineInstance STEP: Obtaining the serial console STEP: Guest shutdown STEP: waiting for the controller to replace the shut-down vmi with a new instance VM testvmixlrkh was scheduled to start • [SLOW TEST:19.636 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435 should start a VirtualMachineInstance once /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:436 ------------------------------ VM testvmirph47 was scheduled to stop • [SLOW TEST:89.775 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435 should stop a VirtualMachineInstance once /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:467 ------------------------------ Pod name: disks-images-provider-qctjs Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-t6r7h Pod phase: Running copy all images to host mount directory Pod name: virt-api-7586947775-lfhnr Pod phase: Running 2018/07/30 11:39:40 http: TLS handshake error from 10.244.1.1:52338: EOF 2018/07/30 11:39:50 http: TLS handshake error from 10.244.1.1:52344: EOF level=error timestamp=2018-07-30T11:39:58.402767Z pos=subresource.go:85 component=virt-api msg= 2018/07/30 11:39:58 http: response.WriteHeader on hijacked connection level=error timestamp=2018-07-30T11:39:58.404133Z pos=subresource.go:97 component=virt-api reason="read tcp 10.244.1.2:8443->10.244.0.0:44458: use of closed network connection" msg="error ecountered reading from websocket stream" level=info timestamp=2018-07-30T11:39:58.405126Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2/namespaces/kubevirt-test-default/virtualmachineinstances/testvmiv4qf6/console proto=HTTP/1.1 statusCode=200 contentLength=0 level=info timestamp=2018-07-30T11:39:59.309134Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:40:01 http: TLS handshake error from 10.244.1.1:52350: EOF level=info timestamp=2018-07-30T11:40:01.921302Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:40:04.249062Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 11:40:10 http: TLS handshake error from 10.244.1.1:52356: EOF 2018/07/30 11:40:21 http: TLS handshake error from 10.244.1.1:52362: EOF level=info timestamp=2018-07-30T11:40:29.254009Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:40:30 http: TLS handshake error from 10.244.1.1:52368: EOF level=info 
timestamp=2018-07-30T11:40:32.114601Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-controller-7d57d96b65-8gsdc Pod phase: Running level=info timestamp=2018-07-30T11:12:34.032471Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-lkdkn Pod phase: Running level=info timestamp=2018-07-30T11:38:50.185880Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirph47 kind= uid=f88d1049-93ec-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:38:50.186238Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirph47 kind= uid=f88d1049-93ec-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: false" level=info timestamp=2018-07-30T11:38:50.256054Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirph47 kind= uid=f88d1049-93ec-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:38:50.256327Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirph47 kind= uid=f88d1049-93ec-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: false" level=info timestamp=2018-07-30T11:38:50.259007Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirph47 kind= uid=f88d1049-93ec-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:38:50.260070Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirph47 kind= uid=f88d1049-93ec-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: false" level=info timestamp=2018-07-30T11:39:11.400028Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirph47 kind= uid=f88d1049-93ec-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:39:11.400415Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirph47 kind= uid=f88d1049-93ec-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: false" level=info timestamp=2018-07-30T11:39:11.400501Z pos=vm.go:262 component=virt-controller service=http msg="vmi is nil" level=info timestamp=2018-07-30T11:39:11.424677Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirph47 kind= uid=f88d1049-93ec-11e8-a6c7-525500d15501 msg="Started processing VM" level=info timestamp=2018-07-30T11:39:11.424905Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirph47 kind= uid=f88d1049-93ec-11e8-a6c7-525500d15501 msg="Creating or the VirtualMachineInstance: false" level=info timestamp=2018-07-30T11:39:11.425016Z pos=vm.go:262 component=virt-controller service=http msg="vmi is nil" level=info timestamp=2018-07-30T11:39:12.280807Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv4qf6 kind= uid=2df6200e-93ed-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:39:12.282388Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv4qf6 kind= 
uid=2df6200e-93ed-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:39:12.490334Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiv4qf6\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiv4qf6" Pod name: virt-handler-7qnlr Pod phase: Running level=info timestamp=2018-07-30T11:39:31.260357Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmiv4qf6 kind=Domain uid=2df6200e-93ed-11e8-a6c7-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-30T11:39:31.311364Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-30T11:39:31.327129Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiv4qf6 kind= uid=2df6200e-93ed-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:39:31.327359Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiv4qf6, existing: true\n" level=info timestamp=2018-07-30T11:39:31.327416Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n" level=info timestamp=2018-07-30T11:39:31.327481Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:39:31.331859Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:39:31.332034Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmiv4qf6 kind= uid=2df6200e-93ed-11e8-a6c7-525500d15501 msg="No update processing required" level=info timestamp=2018-07-30T11:39:31.406417Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiv4qf6 kind= uid=2df6200e-93ed-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:39:31.406748Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiv4qf6, existing: true\n" level=info timestamp=2018-07-30T11:39:31.406806Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T11:39:31.406872Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:39:31.406916Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:39:31.407094Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmiv4qf6 kind= uid=2df6200e-93ed-11e8-a6c7-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T11:39:31.438115Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiv4qf6 kind= uid=2df6200e-93ed-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." Pod name: virt-handler-db5hm Pod phase: Running level=error timestamp=2018-07-30T11:38:13.490872Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmixtqkg kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance failed." 
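The goroutine dump recorded earlier for virt-launcher-testvmi6wwbf-ktg7s (and again further down for testvmifc5gh) dies inside io.copyBuffer, called from ForkAndMonitor. The printed arguments show the source reader interface as nil, so the first src.Read dereferences a nil pointer. One way this arises is a subprocess pipe variable that was never assigned; the log does not show whether that is the actual cause, and the snippet below is only a minimal reproduction of the failure mode, not virt-launcher's code:

// Minimal reproduction: io.Copy handed a nil io.Reader panics with
// "invalid memory address or nil pointer dereference" inside the copy loop.
package main

import (
	"fmt"
	"io"
)

// discard is a trivial writer that does not implement io.ReaderFrom,
// so io.Copy has to drive the read loop itself, as in the stack trace above.
type discard struct{}

func (discard) Write(p []byte) (int, error) { return len(p), nil }

func main() {
	defer func() {
		// Prints: recovered: runtime error: invalid memory address or nil pointer dereference
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()

	var pipe io.Reader // nil: stands in for a reader that was never wired up
	io.Copy(discard{}, pipe)
}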
level=info timestamp=2018-07-30T11:38:13.490985Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmixtqkg" level=info timestamp=2018-07-30T11:38:21.965406Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmixtqkg kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-30T11:38:21.980380Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmixtqkg, existing: false\n" level=info timestamp=2018-07-30T11:38:21.980593Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:38:21.980844Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmixtqkg kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:38:21.981679Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmixtqkg kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:38:21.982784Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmixtqkg, existing: false\n" level=info timestamp=2018-07-30T11:38:21.982944Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:38:21.983212Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmixtqkg kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:38:21.983427Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmixtqkg kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:38:33.971842Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmixtqkg, existing: false\n" level=info timestamp=2018-07-30T11:38:33.972076Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:38:33.972620Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmixtqkg kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:38:33.972838Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmixtqkg kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-launcher-testvmiv4qf6-qr8s7 Pod phase: Running 2018/07/30 07:40:33 read closing down: EOF • Failure [81.362 seconds] Health Monitoring /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:37 A VirtualMachineInstance with a watchdog device /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:56 should be shut down when the watchdog expires [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:57 Timed out after 40.010s. 
Expected : Running to equal : Failed /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:85 ------------------------------ STEP: Starting a VirtualMachineInstance level=info timestamp=2018-07-30T11:39:13.040352Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmiv4qf6 kind=VirtualMachineInstance uid=2df6200e-93ed-11e8-a6c7-525500d15501 msg="Created virtual machine pod virt-launcher-testvmiv4qf6-qr8s7" level=info timestamp=2018-07-30T11:39:29.045989Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmiv4qf6 kind=VirtualMachineInstance uid=2df6200e-93ed-11e8-a6c7-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmiv4qf6-qr8s7" level=info timestamp=2018-07-30T11:39:31.117965Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmiv4qf6 kind=VirtualMachineInstance uid=2df6200e-93ed-11e8-a6c7-525500d15501 msg="VirtualMachineInstance defined." level=info timestamp=2018-07-30T11:39:31.174070Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmiv4qf6 kind=VirtualMachineInstance uid=2df6200e-93ed-11e8-a6c7-525500d15501 msg="VirtualMachineInstance started." STEP: Expecting the VirtualMachineInstance console STEP: Killing the watchdog device STEP: Checking that the VirtualMachineInstance has Failed status • ------------------------------ • [SLOW TEST:18.705 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should successfully start with hook sidecar annotation /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:60 ------------------------------ • [SLOW TEST:20.009 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should call Collect and OnDefineDomain on the hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:67 ------------------------------ • [SLOW TEST:20.749 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should update domain XML with SM BIOS properties /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:83 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.077 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to start a vmi [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1365 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.042 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to stop a running vmi [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1365 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.027 seconds] Windows 
VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have correct UUID /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1365 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.013 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have pod IP /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1365 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.011 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to start a vmi /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1365 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.013 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to stop a vmi /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1365 ------------------------------ volumedisk0 compute • [SLOW TEST:39.832 seconds] 2018/07/30 07:42:13 read closing down: EOF Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with 3 CPU cores /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:56 should report 3 cpu cores under guest OS /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:62 ------------------------------ • ------------------------------ • [SLOW TEST:17.725 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:164 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-2Mi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ S [SKIPPING] [0.323 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:164 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-1Gi [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 No node with hugepages hugepages-1Gi capacity /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:216 
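The hugepages-1Gi entry is skipped because no node in this cluster advertises that capacity. A check along these lines, reading node.Status.Capacity with client-go, is one way such a test can make itself self-skipping; this is a sketch assuming a standard clientset and a current client-go signature (List taking a context), not the helper the tests actually use in vmi_configuration_test.go:

// Sketch: decide whether any node reports non-zero hugepages-1Gi capacity.
// Illustrative only; error handling trimmed to the essentials.
package hugepagescheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// clusterHasHugepages1Gi returns true if at least one node has hugepages-1Gi capacity.
func clusterHasHugepages1Gi(client kubernetes.Interface) (bool, error) {
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	hugepages := corev1.ResourceName("hugepages-1Gi")
	for _, node := range nodes.Items {
		if qty, ok := node.Status.Capacity[hugepages]; ok && !qty.IsZero() {
			return true, nil
		}
	}
	// Caller would then Skip("No node with hugepages hugepages-1Gi capacity").
	return false, nil
}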
------------------------------ •2018/07/30 07:44:13 read closing down: EOF ------------------------------ • [SLOW TEST:99.487 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:294 when CPU model defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:340 should report defined CPU model /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:341 ------------------------------ • [SLOW TEST:109.439 seconds] Configurations 2018/07/30 07:46:03 read closing down: EOF /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:294 when CPU model equals to passthrough /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:368 should report exactly the same model as node CPU /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:369 ------------------------------ 2018/07/30 07:47:54 read closing down: EOF • [SLOW TEST:110.883 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:294 when CPU model not defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:392 should report CPU model from libvirt capabilities /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:393 ------------------------------ • [SLOW TEST:41.611 seconds] 2018/07/30 07:48:35 read closing down: EOF Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 New VirtualMachineInstance with all supported drives /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:413 should have all the device nodes /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:436 ------------------------------ 2018/07/30 07:49:19 read closing down: EOF 2018/07/30 07:49:29 read closing down: EOF 2018/07/30 07:49:40 read closing down: EOF 2018/07/30 07:49:50 read closing down: EOF 2018/07/30 07:49:51 read closing down: EOF 2018/07/30 07:49:52 read closing down: EOF 2018/07/30 07:49:53 read closing down: EOF • [SLOW TEST:78.095 seconds] Networking 2018/07/30 07:49:53 read closing down: EOF /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 should be able to reach /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 the Inbound VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 2018/07/30 07:49:55 read closing down: EOF •2018/07/30 07:49:56 read closing down: EOF 2018/07/30 07:49:56 read closing down: EOF 2018/07/30 07:49:58 read closing down: EOF •2018/07/30 07:49:59 read closing down: EOF 2018/07/30 07:49:59 read closing down: EOF 2018/07/30 07:50:00 read closing down: EOF 2018/07/30 07:50:01 read closing down: EOF •2018/07/30 07:50:01 read closing down: EOF ------------------------------ • [SLOW TEST:5.201 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on the same node from Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •••••• Pod name: disks-images-provider-qctjs Pod phase: Running 
copy all images to host mount directory Pod name: disks-images-provider-t6r7h Pod phase: Running copy all images to host mount directory Pod name: virt-api-7586947775-lfhnr Pod phase: Running level=info timestamp=2018-07-30T11:52:29.309838Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:52:30 http: TLS handshake error from 10.244.1.1:52852: EOF level=info timestamp=2018-07-30T11:52:36.345176Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:52:36.610015Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T11:52:36.613704Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T11:52:39.266818Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 11:52:41 http: TLS handshake error from 10.244.1.1:52858: EOF 2018/07/30 11:52:50 http: TLS handshake error from 10.244.1.1:52864: EOF level=info timestamp=2018-07-30T11:52:59.238983Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:53:00 http: TLS handshake error from 10.244.1.1:52870: EOF level=info timestamp=2018-07-30T11:53:06.452367Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:53:09.379931Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 11:53:10 http: TLS handshake error from 10.244.1.1:52876: EOF 2018/07/30 11:53:20 http: TLS handshake error from 10.244.1.1:52882: EOF level=info timestamp=2018-07-30T11:53:29.387733Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 Pod name: virt-controller-7d57d96b65-8gsdc Pod phase: Running level=info timestamp=2018-07-30T11:12:34.032471Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-lkdkn Pod phase: Running level=info timestamp=2018-07-30T11:47:54.323707Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmijw48m kind= uid=65220a53-93ee-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:47:54.325999Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmijw48m kind= uid=65220a53-93ee-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:48:35.616535Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiw2f9w kind= uid=7dbdad62-93ee-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:48:35.622774Z 
pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiw2f9w kind= uid=7dbdad62-93ee-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:48:35.631573Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmih5fc6 kind= uid=7dc26cdb-93ee-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:48:35.632982Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmih5fc6 kind= uid=7dc26cdb-93ee-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:48:35.665597Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi5r6ck kind= uid=7dc5620e-93ee-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:48:35.665715Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi5r6ck kind= uid=7dc5620e-93ee-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:48:35.781311Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwrdpv kind= uid=7dc86ee6-93ee-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:48:35.782279Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwrdpv kind= uid=7dc86ee6-93ee-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:48:36.019653Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiw2f9w\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiw2f9w" level=info timestamp=2018-07-30T11:48:36.437610Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwrdpv\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiwrdpv" level=info timestamp=2018-07-30T11:50:29.333820Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmifc5gh kind= uid=c17fa7c3-93ee-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:50:29.337831Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmifc5gh kind= uid=c17fa7c3-93ee-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:50:29.532361Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmifc5gh\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmifc5gh" Pod name: virt-handler-7qnlr Pod phase: Running level=error timestamp=2018-07-30T11:49:26.957648Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmijw48m kind=VirtualMachineInstance uid= reason="connection is shut 
down" msg="Synchronizing the VirtualMachineInstance failed." level=info timestamp=2018-07-30T11:49:26.957812Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmijw48m" level=info timestamp=2018-07-30T11:49:35.916684Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmijw48m kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-30T11:49:35.917117Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmijw48m, existing: false\n" level=info timestamp=2018-07-30T11:49:35.917192Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:49:35.917325Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmijw48m kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:49:35.918253Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmijw48m kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:49:35.919147Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmijw48m, existing: false\n" level=info timestamp=2018-07-30T11:49:35.919243Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:49:35.919364Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmijw48m kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:49:35.919698Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmijw48m kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:49:47.438616Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmijw48m, existing: false\n" level=info timestamp=2018-07-30T11:49:47.438818Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:49:47.439034Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmijw48m kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:49:47.439229Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmijw48m kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-db5hm Pod phase: Running level=info timestamp=2018-07-30T11:48:53.388807Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:48:53.388851Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:48:53.389027Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmiw2f9w kind= uid=7dbdad62-93ee-11e8-a6c7-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T11:48:53.422591Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiw2f9w kind= uid=7dbdad62-93ee-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:48:54.861967Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmih5fc6 kind= uid=7dc26cdb-93ee-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." 
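The "connection is shut down ... Synchronizing the VirtualMachineInstance failed." / "re-enqueuing VirtualMachineInstance <namespace>/<name>" pairs scattered through these handler logs are the standard controller response to a transient failure: put the key back on a rate-limited workqueue and retry with backoff. A small sketch of that pattern with client-go's workqueue package; the sync callback and message format are illustrative, not the handler's actual code:

// Sketch of re-enqueue-on-error, the behaviour behind the
// "re-enqueuing VirtualMachineInstance <namespace>/<name>" log lines.
package requeue

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// processNextItem pops one key, runs sync, and re-queues the key with
// rate limiting if sync reports a (possibly transient) error.
func processNextItem(queue workqueue.RateLimitingInterface, sync func(key string) error) bool {
	key, quit := queue.Get()
	if quit {
		return false
	}
	defer queue.Done(key)

	if err := sync(key.(string)); err != nil {
		// Transient failure (e.g. "connection is shut down"): retry later with backoff.
		fmt.Printf("re-enqueuing VirtualMachineInstance %s: %v\n", key, err)
		queue.AddRateLimited(key)
		return true
	}
	// Success: clear the rate-limiter history for this key.
	queue.Forget(key)
	return true
}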
level=info timestamp=2018-07-30T11:48:54.862063Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmih5fc6, existing: true\n" level=info timestamp=2018-07-30T11:48:54.862084Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T11:48:54.961606Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:48:54.961631Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:48:54.961729Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmih5fc6 kind= uid=7dc26cdb-93ee-11e8-a6c7-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T11:48:54.995013Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmih5fc6 kind= uid=7dc26cdb-93ee-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:49:05.677340Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmip8x7w, existing: false\n" level=info timestamp=2018-07-30T11:49:05.691542Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:49:05.691840Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmip8x7w kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:49:05.692017Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmip8x7w kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: netcat2fq9h Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.0.25 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcat5d2cb Pod phase: Succeeded ++ head -n 1 +++ nc my-subdomain.myvmi.kubevirt-test-default 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcat6w9hd Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.0.25 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcat8l4fw Pod phase: Failed ++ head -n 1 +++ nc wrongservice.kubevirt-test-default 1500 -i 1 -w 1 Ncat: Could not resolve hostname "wrongservice.kubevirt-test-default": Name or service not known. QUITTING. + x= + echo '' + '[' '' = 'Hello World!' ']' + echo failed + exit 1 failed Pod name: netcat8nkcb Pod phase: Succeeded ++ head -n 1 +++ nc myservice.kubevirt-test-default 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcat9wg8j Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.0.25 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcatn8k6h Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.0.25 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' 
']' + echo succeeded + exit 0 Pod name: virt-launcher-testvmi5r6ck-xzgw6 Pod phase: Running Pod name: virt-launcher-testvmifc5gh-dpnnd Pod phase: Running panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x488cf3] goroutine 7 [running]: io.copyBuffer(0x142d000, 0xc4200b4008, 0x0, 0x0, 0xc421928000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:400 +0x143 io.Copy(0x142d000, 0xc4200b4008, 0x0, 0x0, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:362 +0x5a kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor.func1(0xc4200e46e0, 0xc42008c240) /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:264 +0xb4 created by kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:261 +0x15f Pod name: virt-launcher-testvmih5fc6-2tdp6 Pod phase: Running Pod name: virt-launcher-testvmiw2f9w-bw5pl Pod phase: Running Pod name: virt-launcher-testvmiwrdpv-8mm9q Pod phase: Running ------------------------------ • Failure [182.354 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom interface model /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:368 should expose the right device type to the guest [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:369 Timed out after 90.012s. Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1046 ------------------------------ STEP: checking the device vendor in /sys/class level=info timestamp=2018-07-30T11:50:30.168522Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmifc5gh kind=VirtualMachineInstance uid=c17fa7c3-93ee-11e8-a6c7-525500d15501 msg="Created virtual machine pod virt-launcher-testvmifc5gh-dpnnd" 2018/07/30 07:53:32 read closing down: EOF •2018/07/30 07:53:33 read closing down: EOF 2018/07/30 07:54:05 read closing down: EOF 2018/07/30 07:54:06 read closing down: EOF ------------------------------ • [SLOW TEST:33.607 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:402 should configure custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:403 ------------------------------ Pod name: disks-images-provider-qctjs Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-t6r7h Pod phase: Running copy all images to host mount directory Pod name: virt-api-7586947775-lfhnr Pod phase: Running level=info timestamp=2018-07-30T11:56:10.165260Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 11:56:10 http: TLS handshake error from 10.244.1.1:52992: EOF 2018/07/30 11:56:20 http: TLS handshake error from 10.244.1.1:52998: EOF level=info timestamp=2018-07-30T11:56:29.137261Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T11:56:29.160677Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ 
proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T11:56:29.343104Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:56:30 http: TLS handshake error from 10.244.1.1:53004: EOF level=info timestamp=2018-07-30T11:56:36.781915Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T11:56:36.785861Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T11:56:37.598696Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:56:40.279798Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 11:56:40 http: TLS handshake error from 10.244.1.1:53010: EOF 2018/07/30 11:56:50 http: TLS handshake error from 10.244.1.1:53016: EOF level=info timestamp=2018-07-30T11:56:59.372887Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:57:00 http: TLS handshake error from 10.244.1.1:53022: EOF Pod name: virt-controller-7d57d96b65-8gsdc Pod phase: Running level=info timestamp=2018-07-30T11:12:34.032471Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-lkdkn Pod phase: Running level=info timestamp=2018-07-30T11:48:35.665597Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi5r6ck kind= uid=7dc5620e-93ee-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:48:35.665715Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi5r6ck kind= uid=7dc5620e-93ee-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:48:35.781311Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwrdpv kind= uid=7dc86ee6-93ee-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:48:35.782279Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwrdpv kind= uid=7dc86ee6-93ee-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:48:36.019653Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiw2f9w\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiw2f9w" level=info timestamp=2018-07-30T11:48:36.437610Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwrdpv\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance 
kubevirt-test-default/testvmiwrdpv" level=info timestamp=2018-07-30T11:50:29.333820Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmifc5gh kind= uid=c17fa7c3-93ee-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:50:29.337831Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmifc5gh kind= uid=c17fa7c3-93ee-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:50:29.532361Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmifc5gh\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmifc5gh" level=info timestamp=2018-07-30T11:53:32.863459Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmifgkbv kind= uid=2eebdd20-93ef-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:53:32.864744Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmifgkbv kind= uid=2eebdd20-93ef-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:53:32.963364Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmifgkbv\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmifgkbv" level=info timestamp=2018-07-30T11:54:06.486143Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqpnf8 kind= uid=42f4194c-93ef-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:54:06.486794Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqpnf8 kind= uid=42f4194c-93ef-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:54:06.682437Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiqpnf8\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiqpnf8" Pod name: virt-handler-7qnlr Pod phase: Running level=info timestamp=2018-07-30T11:53:50.683754Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmifgkbv kind=Domain uid=2eebdd20-93ef-11e8-a6c7-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-30T11:53:50.761005Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-30T11:53:50.762650Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmifgkbv kind= uid=2eebdd20-93ef-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T11:53:50.762807Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmifgkbv, existing: true\n" level=info timestamp=2018-07-30T11:53:50.762837Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n" level=info timestamp=2018-07-30T11:53:50.762865Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:53:50.762891Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:53:50.762951Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmifgkbv kind= uid=2eebdd20-93ef-11e8-a6c7-525500d15501 msg="No update processing required" level=info timestamp=2018-07-30T11:53:50.815753Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmifgkbv kind= uid=2eebdd20-93ef-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:53:50.815967Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmifgkbv, existing: true\n" level=info timestamp=2018-07-30T11:53:50.816024Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T11:53:50.816091Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:53:50.816135Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:53:50.816375Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmifgkbv kind= uid=2eebdd20-93ef-11e8-a6c7-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T11:53:50.832293Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmifgkbv kind= uid=2eebdd20-93ef-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." Pod name: virt-handler-db5hm Pod phase: Running level=info timestamp=2018-07-30T11:48:53.388807Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:48:53.388851Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:48:53.389027Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmiw2f9w kind= uid=7dbdad62-93ee-11e8-a6c7-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T11:48:53.422591Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiw2f9w kind= uid=7dbdad62-93ee-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:48:54.861967Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmih5fc6 kind= uid=7dc26cdb-93ee-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T11:48:54.862063Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmih5fc6, existing: true\n" level=info timestamp=2018-07-30T11:48:54.862084Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T11:48:54.961606Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:48:54.961631Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:48:54.961729Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmih5fc6 kind= uid=7dc26cdb-93ee-11e8-a6c7-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T11:48:54.995013Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmih5fc6 kind= uid=7dc26cdb-93ee-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:49:05.677340Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmip8x7w, existing: false\n" level=info timestamp=2018-07-30T11:49:05.691542Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:49:05.691840Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmip8x7w kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:49:05.692017Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmip8x7w kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: netcat2fq9h Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.0.25 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcat5d2cb Pod phase: Succeeded ++ head -n 1 +++ nc my-subdomain.myvmi.kubevirt-test-default 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcat6w9hd Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.0.25 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcat8l4fw Pod phase: Failed ++ head -n 1 +++ nc wrongservice.kubevirt-test-default 1500 -i 1 -w 1 Ncat: Could not resolve hostname "wrongservice.kubevirt-test-default": Name or service not known. QUITTING. + x= + echo '' + '[' '' = 'Hello World!' ']' + echo failed + exit 1 failed Pod name: netcat8nkcb Pod phase: Succeeded ++ head -n 1 +++ nc myservice.kubevirt-test-default 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcat9wg8j Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.0.25 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcatn8k6h Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.0.25 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' 
']' + echo succeeded + exit 0 Pod name: virt-launcher-testvmi5r6ck-xzgw6 Pod phase: Running Pod name: virt-launcher-testvmifc5gh-dpnnd Pod phase: Running panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x488cf3] goroutine 7 [running]: io.copyBuffer(0x142d000, 0xc4200b4008, 0x0, 0x0, 0xc421928000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:400 +0x143 io.Copy(0x142d000, 0xc4200b4008, 0x0, 0x0, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:362 +0x5a kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor.func1(0xc4200e46e0, 0xc42008c240) /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:264 +0xb4 created by kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:261 +0x15f Pod name: virt-launcher-testvmifgkbv-khskh Pod phase: Running Pod name: virt-launcher-testvmih5fc6-2tdp6 Pod phase: Running Pod name: virt-launcher-testvmiqpnf8-n7m22 Pod phase: Running panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x488cf3] goroutine 23 [running]: io.copyBuffer(0x142d000, 0xc4200b4010, 0x0, 0x0, 0xc42194e000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:400 +0x143 io.Copy(0x142d000, 0xc4200b4010, 0x0, 0x0, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:362 +0x5a kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor.func2(0xc420300420, 0xc42008c240) /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:272 +0xb4 created by kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:269 +0x191 Pod name: virt-launcher-testvmiw2f9w-bw5pl Pod phase: Running Pod name: virt-launcher-testvmiwrdpv-8mm9q Pod phase: Running • Failure [182.708 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom MAC address in non-conventional format /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:414 should configure custom MAC address [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:415 Timed out after 90.011s. 
Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1046 ------------------------------ STEP: checking eth0 MAC address level=info timestamp=2018-07-30T11:54:07.152756Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmiqpnf8 kind=VirtualMachineInstance uid=42f4194c-93ef-11e8-a6c7-525500d15501 msg="Created virtual machine pod virt-launcher-testvmiqpnf8-n7m22" Pod name: disks-images-provider-qctjs Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-t6r7h Pod phase: Running copy all images to host mount directory Pod name: virt-api-7586947775-lfhnr Pod phase: Running level=info timestamp=2018-07-30T11:59:08.228739Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:59:10.837122Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 11:59:10 http: TLS handshake error from 10.244.1.1:53100: EOF 2018/07/30 11:59:20 http: TLS handshake error from 10.244.1.1:53106: EOF level=info timestamp=2018-07-30T11:59:29.386563Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 11:59:30 http: TLS handshake error from 10.244.1.1:53112: EOF level=info timestamp=2018-07-30T11:59:36.811753Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T11:59:36.816416Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T11:59:38.365953Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T11:59:40.949063Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 11:59:40 http: TLS handshake error from 10.244.1.1:53118: EOF 2018/07/30 11:59:50 http: TLS handshake error from 10.244.1.1:53124: EOF level=info timestamp=2018-07-30T11:59:59.303531Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:00:00 http: TLS handshake error from 10.244.1.1:53130: EOF level=info timestamp=2018-07-30T12:00:08.502672Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-controller-7d57d96b65-8gsdc Pod phase: Running level=info timestamp=2018-07-30T11:12:34.032471Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-lkdkn Pod phase: Running level=info timestamp=2018-07-30T11:48:36.019653Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiw2f9w\": the object has been modified; please 
apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiw2f9w" level=info timestamp=2018-07-30T11:48:36.437610Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwrdpv\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiwrdpv" level=info timestamp=2018-07-30T11:50:29.333820Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmifc5gh kind= uid=c17fa7c3-93ee-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:50:29.337831Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmifc5gh kind= uid=c17fa7c3-93ee-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:50:29.532361Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmifc5gh\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmifc5gh" level=info timestamp=2018-07-30T11:53:32.863459Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmifgkbv kind= uid=2eebdd20-93ef-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:53:32.864744Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmifgkbv kind= uid=2eebdd20-93ef-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:53:32.963364Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmifgkbv\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmifgkbv" level=info timestamp=2018-07-30T11:54:06.486143Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqpnf8 kind= uid=42f4194c-93ef-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:54:06.486794Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqpnf8 kind= uid=42f4194c-93ef-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:54:06.682437Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiqpnf8\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiqpnf8" level=info timestamp=2018-07-30T11:57:09.193626Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikkj25 kind= uid=afdbd192-93ef-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T11:57:09.193901Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikkj25 kind= uid=afdbd192-93ef-11e8-a6c7-525500d15501 
msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T11:57:09.363193Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmikkj25\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmikkj25" level=info timestamp=2018-07-30T11:57:09.378362Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmikkj25\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmikkj25" Pod name: virt-handler-7qnlr Pod phase: Running level=info timestamp=2018-07-30T11:53:50.683754Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmifgkbv kind=Domain uid=2eebdd20-93ef-11e8-a6c7-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-30T11:53:50.761005Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-30T11:53:50.762650Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmifgkbv kind= uid=2eebdd20-93ef-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:53:50.762807Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmifgkbv, existing: true\n" level=info timestamp=2018-07-30T11:53:50.762837Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n" level=info timestamp=2018-07-30T11:53:50.762865Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:53:50.762891Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:53:50.762951Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmifgkbv kind= uid=2eebdd20-93ef-11e8-a6c7-525500d15501 msg="No update processing required" level=info timestamp=2018-07-30T11:53:50.815753Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmifgkbv kind= uid=2eebdd20-93ef-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:53:50.815967Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmifgkbv, existing: true\n" level=info timestamp=2018-07-30T11:53:50.816024Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T11:53:50.816091Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:53:50.816135Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:53:50.816375Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmifgkbv kind= uid=2eebdd20-93ef-11e8-a6c7-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T11:53:50.832293Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmifgkbv kind= uid=2eebdd20-93ef-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." 
Pod name: virt-handler-db5hm Pod phase: Running level=info timestamp=2018-07-30T11:48:53.388807Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:48:53.388851Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:48:53.389027Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmiw2f9w kind= uid=7dbdad62-93ee-11e8-a6c7-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T11:48:53.422591Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiw2f9w kind= uid=7dbdad62-93ee-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:48:54.861967Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmih5fc6 kind= uid=7dc26cdb-93ee-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:48:54.862063Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmih5fc6, existing: true\n" level=info timestamp=2018-07-30T11:48:54.862084Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T11:48:54.961606Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T11:48:54.961631Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T11:48:54.961729Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmih5fc6 kind= uid=7dc26cdb-93ee-11e8-a6c7-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T11:48:54.995013Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmih5fc6 kind= uid=7dc26cdb-93ee-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T11:49:05.677340Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmip8x7w, existing: false\n" level=info timestamp=2018-07-30T11:49:05.691542Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T11:49:05.691840Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmip8x7w kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T11:49:05.692017Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmip8x7w kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: netcat2fq9h Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.0.25 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcat5d2cb Pod phase: Succeeded ++ head -n 1 +++ nc my-subdomain.myvmi.kubevirt-test-default 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcat6w9hd Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.0.25 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcat8l4fw Pod phase: Failed ++ head -n 1 +++ nc wrongservice.kubevirt-test-default 1500 -i 1 -w 1 Ncat: Could not resolve hostname "wrongservice.kubevirt-test-default": Name or service not known. QUITTING. + x= + echo '' + '[' '' = 'Hello World!' 
']' + echo failed + exit 1 failed Pod name: netcat8nkcb Pod phase: Succeeded ++ head -n 1 +++ nc myservice.kubevirt-test-default 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcat9wg8j Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.0.25 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcatn8k6h Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.0.25 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: virt-launcher-testvmi5r6ck-xzgw6 Pod phase: Running Pod name: virt-launcher-testvmifc5gh-dpnnd Pod phase: Running panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x488cf3] goroutine 7 [running]: io.copyBuffer(0x142d000, 0xc4200b4008, 0x0, 0x0, 0xc421928000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:400 +0x143 io.Copy(0x142d000, 0xc4200b4008, 0x0, 0x0, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:362 +0x5a kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor.func1(0xc4200e46e0, 0xc42008c240) /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:264 +0xb4 created by kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:261 +0x15f Pod name: virt-launcher-testvmifgkbv-khskh Pod phase: Running Pod name: virt-launcher-testvmih5fc6-2tdp6 Pod phase: Running Pod name: virt-launcher-testvmikkj25-j7bqk Pod phase: Running panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x488cf3] goroutine 21 [running]: io.copyBuffer(0x142d000, 0xc4200b4008, 0x0, 0x0, 0xc421944000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:400 +0x143 io.Copy(0x142d000, 0xc4200b4008, 0x0, 0x0, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:362 +0x5a kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor.func1(0xc4201ce160, 0xc42008c240) /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:264 +0xb4 created by kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:261 +0x15f Pod name: virt-launcher-testvmiqpnf8-n7m22 Pod phase: Running panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x488cf3] goroutine 23 [running]: io.copyBuffer(0x142d000, 0xc4200b4010, 0x0, 0x0, 0xc42194e000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:400 +0x143 io.Copy(0x142d000, 0xc4200b4010, 0x0, 0x0, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:362 +0x5a kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor.func2(0xc420300420, 0xc42008c240) /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:272 +0xb4 created by kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:269 +0x191 
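
Every VMI that times out in this run has a virt-launcher pod whose log ends in the same panic: a nil pointer dereference inside io.copyBuffer, reached from ForkAndMonitor's stdout/stderr copy goroutines (libvirt_helper.go:264 and :272), with what looks like a nil io.Reader (the 0x0, 0x0 argument pair) as the copy source. A minimal standalone sketch of how that shape of crash can arise is below; it assumes a pipe's error return was dropped, which is an illustration only, not the confirmed KubeVirt bug.

package main

import (
	"io"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sleep", "1")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// StdoutPipe must be requested *before* Start; afterwards it returns
	// (nil, error). Dropping the error leaves stdout == nil.
	stdout, _ := cmd.StdoutPipe()

	go func() {
		// stdout is a nil io.Reader, so the first Read inside io.copyBuffer
		// dereferences a nil pointer; the goroutine panics and takes the
		// whole process down, just like the traces above.
		io.Copy(os.Stdout, stdout)
	}()

	cmd.Wait()
}

Because the copy goroutine panics, the whole virt-launcher process exits, the domain never starts, and the VMI never reaches the Running phase the tests are waiting for.
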
Pod name: virt-launcher-testvmiw2f9w-bw5pl Pod phase: Running Pod name: virt-launcher-testvmiwrdpv-8mm9q Pod phase: Running • Failure [182.907 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom MAC address and slirp interface /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:427 should configure custom MAC address [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:428 Timed out after 90.016s. Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1046 ------------------------------ STEP: checking eth0 MAC address level=info timestamp=2018-07-30T11:57:09.905347Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmikkj25 kind=VirtualMachineInstance uid=afdbd192-93ef-11e8-a6c7-525500d15501 msg="Created virtual machine pod virt-launcher-testvmikkj25-j7bqk" 2018/07/30 08:00:58 read closing down: EOF • [SLOW TEST:46.794 seconds] 2018/07/30 08:00:59 read closing down: EOF Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with disabled automatic attachment of interfaces /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:440 should not configure any external interfaces /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:441 ------------------------------ • [SLOW TEST:21.241 seconds] VNC /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:54 with VNC connection /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:62 should allow accessing the VNC device /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:64 ------------------------------ ••• ------------------------------ • [SLOW TEST:180.337 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting and stopping the same VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91 should success multiple times /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92 ------------------------------ • [SLOW TEST:19.552 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112 should not modify the spec on status update /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113 ------------------------------ • [SLOW TEST:22.285 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting multiple VMIs /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130 should success /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131 ------------------------------ • [SLOW TEST:15.629 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi 
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:14.311 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given an vm /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:14.385 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi preset /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:14.192 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi replica set /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • Pod name: disks-images-provider-qctjs Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-t6r7h Pod phase: Running copy all images to host mount directory Pod name: virt-api-7586947775-lfhnr Pod phase: Running 2018/07/30 12:07:50 http: TLS handshake error from 10.244.1.1:53422: EOF level=info timestamp=2018-07-30T12:07:59.399762Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:08:00 http: TLS handshake error from 10.244.1.1:53428: EOF level=info timestamp=2018-07-30T12:08:10.636292Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:08:10 http: TLS handshake error from 10.244.1.1:53434: EOF level=info timestamp=2018-07-30T12:08:13.070368Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:08:20 http: TLS handshake error from 10.244.1.1:53440: EOF level=info timestamp=2018-07-30T12:08:29.380401Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:08:30 http: TLS handshake error from 10.244.1.1:53446: EOF level=info timestamp=2018-07-30T12:08:40.767065Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:08:40 http: TLS handshake error from 10.244.1.1:53452: EOF level=info timestamp=2018-07-30T12:08:43.165326Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 
proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:08:50 http: TLS handshake error from 10.244.1.1:53458: EOF level=info timestamp=2018-07-30T12:08:59.314264Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:09:00 http: TLS handshake error from 10.244.1.1:53464: EOF Pod name: virt-controller-7d57d96b65-8gsdc Pod phase: Running level=info timestamp=2018-07-30T11:12:34.032471Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-lkdkn Pod phase: Running level=info timestamp=2018-07-30T12:04:40.915292Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvminbswn kind= uid=bd1ab28e-93f0-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:04:40.915641Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvminbswn kind= uid=bd1ab28e-93f0-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:04:40.950598Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirp9v9 kind= uid=bd207380-93f0-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:04:40.950845Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirp9v9 kind= uid=bd207380-93f0-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:04:40.975687Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmik94dk kind= uid=bd23aadc-93f0-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:04:40.975850Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmik94dk kind= uid=bd23aadc-93f0-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:04:41.095585Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwlnct\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiwlnct" level=info timestamp=2018-07-30T12:04:41.176436Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmirqnb4\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmirqnb4" level=info timestamp=2018-07-30T12:04:42.476570Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmik94dk\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmik94dk" level=info timestamp=2018-07-30T12:06:01.654969Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqgdjb kind= uid=ed39097b-93f0-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:06:01.658290Z 
pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqgdjb kind= uid=ed39097b-93f0-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:06:01.920559Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiqgdjb\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmiqgdjb, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ed39097b-93f0-11e8-a6c7-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiqgdjb" level=info timestamp=2018-07-30T12:06:02.016454Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqf6v2 kind= uid=ed746e91-93f0-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:06:02.016572Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqf6v2 kind= uid=ed746e91-93f0-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:06:02.105058Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiqf6v2\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiqf6v2" Pod name: virt-handler-7qnlr Pod phase: Running level=info timestamp=2018-07-30T12:05:50.919892Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:05:50.920012Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmirp9v9 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:05:50.920162Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmirp9v9 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:05:50.920241Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiwlnct, existing: false\n" level=info timestamp=2018-07-30T12:05:50.920283Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:05:50.920352Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwlnct kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:05:50.920565Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiwlnct kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:05:54.503975Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmik94dk, existing: false\n" level=info timestamp=2018-07-30T12:05:54.504146Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:05:54.504330Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmik94dk kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." 
level=info timestamp=2018-07-30T12:05:54.504639Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmik94dk kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:05:54.569266Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiwlnct, existing: false\n" level=info timestamp=2018-07-30T12:05:54.569429Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:05:54.569680Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwlnct kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:05:54.569973Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiwlnct kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-db5hm Pod phase: Running level=info timestamp=2018-07-30T12:05:51.968073Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:05:51.968241Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminbswn kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:05:51.968409Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvminbswn kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:05:51.968521Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmirqnb4, existing: false\n" level=info timestamp=2018-07-30T12:05:51.968636Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:05:51.968730Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmirqnb4 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:05:51.968919Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmirqnb4 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:05:53.781646Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmirqnb4, existing: false\n" level=info timestamp=2018-07-30T12:05:53.781840Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:05:53.782024Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmirqnb4 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:05:53.784473Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmirqnb4 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:05:53.821885Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvminbswn, existing: false\n" level=info timestamp=2018-07-30T12:05:53.822074Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:05:53.822650Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminbswn kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." 
level=info timestamp=2018-07-30T12:05:53.822855Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvminbswn kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-launcher-testvmiqf6v2-jvrll Pod phase: Running panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x488cf3] goroutine 8 [running]: io.copyBuffer(0x142d000, 0xc4200b4008, 0x0, 0x0, 0xc421938000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:400 +0x143 io.Copy(0x142d000, 0xc4200b4008, 0x0, 0x0, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:362 +0x5a kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor.func1(0xc4200e4840, 0xc4200c4540) /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:264 +0xb4 created by kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:261 +0x15f ------------------------------ • Failure [180.752 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 should start it [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:80 Timed out after 90.011s. Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1046 ------------------------------ level=info timestamp=2018-07-30T12:06:02.706014Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmiqf6v2 kind=VirtualMachineInstance uid=ed746e91-93f0-11e8-a6c7-525500d15501 msg="Created virtual machine pod virt-launcher-testvmiqf6v2-jvrll" Pod name: disks-images-provider-qctjs Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-t6r7h Pod phase: Running copy all images to host mount directory Pod name: virt-api-7586947775-lfhnr Pod phase: Running 2018/07/30 12:08:20 http: TLS handshake error from 10.244.1.1:53440: EOF level=info timestamp=2018-07-30T12:08:29.380401Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:08:30 http: TLS handshake error from 10.244.1.1:53446: EOF level=info timestamp=2018-07-30T12:08:40.767065Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:08:40 http: TLS handshake error from 10.244.1.1:53452: EOF level=info timestamp=2018-07-30T12:08:43.165326Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:08:50 http: TLS handshake error from 10.244.1.1:53458: EOF level=info timestamp=2018-07-30T12:08:59.314264Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:09:00 http: TLS handshake error from 10.244.1.1:53464: EOF level=info timestamp=2018-07-30T12:09:10.908610Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:09:10 http: 
TLS handshake error from 10.244.1.1:53470: EOF level=info timestamp=2018-07-30T12:09:13.308881Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:09:20 http: TLS handshake error from 10.244.1.1:53476: EOF level=info timestamp=2018-07-30T12:09:29.334358Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:09:30 http: TLS handshake error from 10.244.1.1:53482: EOF Pod name: virt-controller-7d57d96b65-8gsdc Pod phase: Running level=info timestamp=2018-07-30T11:12:34.032471Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-lkdkn Pod phase: Running level=info timestamp=2018-07-30T12:04:40.950845Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirp9v9 kind= uid=bd207380-93f0-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:04:40.975687Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmik94dk kind= uid=bd23aadc-93f0-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:04:40.975850Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmik94dk kind= uid=bd23aadc-93f0-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:04:41.095585Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwlnct\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiwlnct" level=info timestamp=2018-07-30T12:04:41.176436Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmirqnb4\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmirqnb4" level=info timestamp=2018-07-30T12:04:42.476570Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmik94dk\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmik94dk" level=info timestamp=2018-07-30T12:06:01.654969Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqgdjb kind= uid=ed39097b-93f0-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:06:01.658290Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqgdjb kind= uid=ed39097b-93f0-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:06:01.920559Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiqgdjb\": StorageError: invalid object, Code: 4, Key: 
/registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmiqgdjb, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ed39097b-93f0-11e8-a6c7-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiqgdjb" level=info timestamp=2018-07-30T12:06:02.016454Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqf6v2 kind= uid=ed746e91-93f0-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:06:02.016572Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqf6v2 kind= uid=ed746e91-93f0-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:06:02.105058Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiqf6v2\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiqf6v2" level=info timestamp=2018-07-30T12:09:02.632429Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiqf6v2\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmiqf6v2, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ed746e91-93f0-11e8-a6c7-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiqf6v2" level=info timestamp=2018-07-30T12:09:02.793664Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihbszl kind= uid=5931e5b6-93f1-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:09:02.794579Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihbszl kind= uid=5931e5b6-93f1-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-handler-7qnlr Pod phase: Running level=info timestamp=2018-07-30T12:09:19.947821Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmihbszl kind=Domain uid=5931e5b6-93f1-11e8-a6c7-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-30T12:09:20.008328Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-30T12:09:20.014208Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmihbszl kind= uid=5931e5b6-93f1-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." 
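The stack trace dumped for virt-launcher-testvmiqf6v2-jvrll above shows the pod dying with a nil-pointer panic inside io.copyBuffer, reached from the goroutine started in ForkAndMonitor (libvirt_helper.go:264): io.Copy was handed a nil Reader, so the first Read call dereferences a nil interface. The following is a minimal sketch of the pattern and a defensive guard, assuming the Reader is a child-process output pipe that was never wired up; copyStream and the example command are hypothetical, not KubeVirt code.

package main

import (
	"errors"
	"io"
	"os"
	"os/exec"
)

// copyStream mirrors the kind of work the ForkAndMonitor goroutine does:
// stream a child process pipe into our own output. Guarding against a nil
// pipe avoids the SIGSEGV seen above, where io.Copy calls Read on a nil Reader.
func copyStream(dst io.Writer, src io.ReadCloser) error {
	if src == nil {
		return errors.New("child output pipe is nil; was StderrPipe called before Start?")
	}
	_, err := io.Copy(dst, src)
	return err
}

func main() {
	cmd := exec.Command("sh", "-c", "echo hello 1>&2")
	stderr, err := cmd.StderrPipe() // must be requested before Start(), or the pipe stays nil
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	if err := copyStream(os.Stderr, stderr); err != nil { // safe even if stderr were nil
		panic(err)
	}
	_ = cmd.Wait()
}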
level=info timestamp=2018-07-30T12:09:20.014480Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmihbszl, existing: true\n" level=info timestamp=2018-07-30T12:09:20.014631Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n" level=info timestamp=2018-07-30T12:09:20.014712Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T12:09:20.014774Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T12:09:20.014937Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmihbszl kind= uid=5931e5b6-93f1-11e8-a6c7-525500d15501 msg="No update processing required" level=info timestamp=2018-07-30T12:09:20.075413Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmihbszl kind= uid=5931e5b6-93f1-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:09:20.075766Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmihbszl, existing: true\n" level=info timestamp=2018-07-30T12:09:20.075869Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T12:09:20.075939Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T12:09:20.076004Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T12:09:20.076201Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmihbszl kind= uid=5931e5b6-93f1-11e8-a6c7-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T12:09:20.105465Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmihbszl kind= uid=5931e5b6-93f1-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." Pod name: virt-handler-db5hm Pod phase: Running level=info timestamp=2018-07-30T12:05:51.968073Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:05:51.968241Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminbswn kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:05:51.968409Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvminbswn kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:05:51.968521Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmirqnb4, existing: false\n" level=info timestamp=2018-07-30T12:05:51.968636Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:05:51.968730Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmirqnb4 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:05:51.968919Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmirqnb4 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T12:05:53.781646Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmirqnb4, existing: false\n" level=info timestamp=2018-07-30T12:05:53.781840Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:05:53.782024Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmirqnb4 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:05:53.784473Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmirqnb4 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:05:53.821885Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvminbswn, existing: false\n" level=info timestamp=2018-07-30T12:05:53.822074Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:05:53.822650Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminbswn kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:05:53.822855Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvminbswn kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-launcher-testvmihbszl-rmjlj Pod phase: Running • Failure [28.302 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 should attach virt-launcher to it [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:86 Timed out after 11.000s. Expected : to contain substring : Found PID for /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:96 ------------------------------ level=info timestamp=2018-07-30T12:09:03.594066Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmihbszl kind=VirtualMachineInstance uid=5931e5b6-93f1-11e8-a6c7-525500d15501 msg="Created virtual machine pod virt-launcher-testvmihbszl-rmjlj" level=info timestamp=2018-07-30T12:09:18.031859Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmihbszl kind=VirtualMachineInstance uid=5931e5b6-93f1-11e8-a6c7-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmihbszl-rmjlj" level=info timestamp=2018-07-30T12:09:19.761996Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmihbszl kind=VirtualMachineInstance uid=5931e5b6-93f1-11e8-a6c7-525500d15501 msg="VirtualMachineInstance defined." level=info timestamp=2018-07-30T12:09:19.803729Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmihbszl kind=VirtualMachineInstance uid=5931e5b6-93f1-11e8-a6c7-525500d15501 msg="VirtualMachineInstance started." 
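The two failures above are polling timeouts from the test helpers ("Timed out waiting for VMI to enter Running phase" at tests/utils.go:1046, and the missing "Found PID for" substring at vmi_lifecycle_test.go:96): the suite repeatedly checks the VMI state and gives up after a deadline. A rough, self-contained sketch of that wait loop, with a hypothetical getPhase callback standing in for the real KubeVirt client call:

package main

import (
	"fmt"
	"time"
)

// waitForPhase polls getPhase until it reports the wanted phase or the
// timeout expires; this is the shape of the wait that produced the
// "Timed out waiting for VMI to enter Running phase" failures above.
func waitForPhase(getPhase func() (string, error), want string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		phase, err := getPhase()
		if err == nil && phase == want {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for VMI to enter %s phase", want)
}

func main() {
	// Simulated VMI that never leaves Scheduling, so the wait times out,
	// just as the failing specs did after 90s.
	getPhase := func() (string, error) { return "Scheduling", nil }
	if err := waitForPhase(getPhase, "Running", 3*time.Second, 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}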
STEP: Getting virt-launcher logs ••••2018/07/30 08:10:12 read closing down: EOF ------------------------------ • [SLOW TEST:39.839 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:174 should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 Alpine as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 2018/07/30 08:10:41 read closing down: EOF • [SLOW TEST:29.344 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:174 should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 Cirros as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:60.414 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:205 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:206 should retry starting the VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:207 ------------------------------ • [SLOW TEST:18.183 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:205 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:206 should log warning and proceed once the secret is there /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:237 ------------------------------ • [SLOW TEST:50.582 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 when virt-launcher crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:285 should be stopped and have Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:286 ------------------------------ Pod name: disks-images-provider-qctjs Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-t6r7h Pod phase: Running copy all images to host mount directory Pod name: virt-api-7586947775-lfhnr Pod phase: Running 2018/07/30 12:13:00 http: TLS handshake error from 10.244.1.1:53612: EOF 2018/07/30 12:13:10 http: TLS handshake error from 10.244.1.1:53618: EOF level=info timestamp=2018-07-30T12:13:11.974254Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:13:14.235341Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET 
url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:13:20 http: TLS handshake error from 10.244.1.1:53626: EOF level=info timestamp=2018-07-30T12:13:29.284072Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:13:30 http: TLS handshake error from 10.244.1.1:53632: EOF 2018/07/30 12:13:40 http: TLS handshake error from 10.244.1.1:53638: EOF level=info timestamp=2018-07-30T12:13:42.100739Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:13:44.355335Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:13:50 http: TLS handshake error from 10.244.1.1:53644: EOF level=info timestamp=2018-07-30T12:13:59.287984Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:14:00 http: TLS handshake error from 10.244.1.1:53650: EOF 2018/07/30 12:14:10 http: TLS handshake error from 10.244.1.1:53656: EOF level=info timestamp=2018-07-30T12:14:12.233109Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-controller-7d57d96b65-8gsdc Pod phase: Running level=info timestamp=2018-07-30T11:12:34.032471Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-lkdkn Pod phase: Running level=info timestamp=2018-07-30T12:10:12.090203Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmitb7rv kind= uid=827ffd55-93f1-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:10:12.090653Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmitb7rv kind= uid=827ffd55-93f1-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:10:12.184141Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmitb7rv\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmitb7rv" level=info timestamp=2018-07-30T12:10:41.433357Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmis47lh kind= uid=94000c0c-93f1-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:10:41.433618Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmis47lh kind= uid=94000c0c-93f1-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:11:41.703512Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmis47lh\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmis47lh, 
ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 94000c0c-93f1-11e8-a6c7-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmis47lh" level=info timestamp=2018-07-30T12:11:41.890902Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwdv8m kind= uid=b805f156-93f1-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:11:41.891228Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwdv8m kind= uid=b805f156-93f1-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:11:42.070015Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwdv8m\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiwdv8m" level=info timestamp=2018-07-30T12:12:00.020556Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikbps7 kind= uid=c2d68de9-93f1-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:12:00.024533Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikbps7 kind= uid=c2d68de9-93f1-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:12:00.163775Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmikbps7\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmikbps7" level=info timestamp=2018-07-30T12:12:00.234950Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmikbps7\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmikbps7" level=info timestamp=2018-07-30T12:12:50.719522Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmidlgfg kind= uid=e10e7c8e-93f1-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:12:50.719946Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmidlgfg kind= uid=e10e7c8e-93f1-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-handler-7qnlr Pod phase: Running level=info timestamp=2018-07-30T12:13:13.485359Z pos=virt-handler.go:87 component=virt-handler hostname=node02 level=info timestamp=2018-07-30T12:13:13.489962Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." 
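The repeated "Operation cannot be fulfilled ... the object has been modified; please apply your changes to the latest version and try again" lines from virt-controller are ordinary optimistic-concurrency conflicts: an update carried a stale resourceVersion, the API server rejected it, and the controller re-enqueued the VMI. Client code usually handles this by re-reading the object and retrying, e.g. with client-go's retry.RetryOnConflict. The sketch below assumes client-go is available on the module path; the get/update callbacks and the object type are hypothetical stand-ins, not the KubeVirt client API.

package main

import (
	"fmt"

	"k8s.io/client-go/util/retry"
)

// object is a toy resource with labels, standing in for a VirtualMachineInstance.
type object struct {
	ResourceVersion string
	Labels          map[string]string
}

// updateWithRetry always starts from the freshest copy and retries on
// conflict, which is the usual answer to the reenqueue messages above.
func updateWithRetry(get func() (*object, error), update func(*object) error) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		obj, err := get() // re-read to pick up the latest resourceVersion
		if err != nil {
			return err
		}
		if obj.Labels == nil {
			obj.Labels = map[string]string{}
		}
		obj.Labels["touched"] = "true"
		return update(obj) // a Conflict error here triggers another attempt
	})
}

func main() {
	fmt.Println("sketch only; wire get/update to a real client to exercise it")
}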
level=info timestamp=2018-07-30T12:13:13.490692Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-07-30T12:13:13.490807Z pos=cache.go:121 component=virt-handler msg="List domains from sock /var/run/kubevirt/sockets/kubevirt-test-default_testvmidlgfg_sock" level=info timestamp=2018-07-30T12:13:13.526569Z pos=vm.go:725 component=virt-handler namespace=kubevirt-test-default name=testvmidlgfg kind=Domain uid= msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-30T12:13:13.591748Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-07-30T12:13:13.611000Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-07-30T12:13:13.613248Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" level=info timestamp=2018-07-30T12:13:13.691947Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmidlgfg, existing: true\n" level=info timestamp=2018-07-30T12:13:13.692018Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T12:13:13.692048Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T12:13:13.692068Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T12:13:13.692142Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmidlgfg kind=VirtualMachineInstance uid=e10e7c8e-93f1-11e8-a6c7-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T12:13:13.824969Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmidlgfg kind=VirtualMachineInstance uid=e10e7c8e-93f1-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." Pod name: virt-handler-db5hm Pod phase: Running level=info timestamp=2018-07-30T12:05:51.968073Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:05:51.968241Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminbswn kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:05:51.968409Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvminbswn kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:05:51.968521Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmirqnb4, existing: false\n" level=info timestamp=2018-07-30T12:05:51.968636Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:05:51.968730Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmirqnb4 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:05:51.968919Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmirqnb4 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T12:05:53.781646Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmirqnb4, existing: false\n" level=info timestamp=2018-07-30T12:05:53.781840Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:05:53.782024Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmirqnb4 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:05:53.784473Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmirqnb4 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:05:53.821885Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvminbswn, existing: false\n" level=info timestamp=2018-07-30T12:05:53.822074Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:05:53.822650Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminbswn kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:05:53.822855Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvminbswn kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-launcher-testvmidlgfg-b6ncm Pod phase: Running Pod name: vmi-killer2ljrt Pod phase: Succeeded Pod name: vmi-killerxtsdc Pod phase: Succeeded • Failure [82.571 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 when virt-handler crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:309 should recover and continue management [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:310 Expected : Running to equal : Failed /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:336 ------------------------------ level=info timestamp=2018-07-30T12:12:51.315402Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmidlgfg kind=VirtualMachineInstance uid=e10e7c8e-93f1-11e8-a6c7-525500d15501 msg="Created virtual machine pod virt-launcher-testvmidlgfg-b6ncm" level=info timestamp=2018-07-30T12:13:06.984449Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmidlgfg kind=VirtualMachineInstance uid=e10e7c8e-93f1-11e8-a6c7-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmidlgfg-b6ncm" level=info timestamp=2018-07-30T12:13:09.102389Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmidlgfg kind=VirtualMachineInstance uid=e10e7c8e-93f1-11e8-a6c7-525500d15501 msg="VirtualMachineInstance defined." level=info timestamp=2018-07-30T12:13:09.134666Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmidlgfg kind=VirtualMachineInstance uid=e10e7c8e-93f1-11e8-a6c7-525500d15501 msg="VirtualMachineInstance started." 
STEP: Crashing the virt-handler STEP: Killing the VirtualMachineInstance level=info timestamp=2018-07-30T12:13:12.959744Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmidlgfg kind=VirtualMachineInstance uid=e10e7c8e-93f1-11e8-a6c7-525500d15501 msg="Created virtual machine pod virt-launcher-testvmidlgfg-b6ncm" level=info timestamp=2018-07-30T12:13:12.959889Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmidlgfg kind=VirtualMachineInstance uid=e10e7c8e-93f1-11e8-a6c7-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmidlgfg-b6ncm" level=info timestamp=2018-07-30T12:13:12.960431Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmidlgfg kind=VirtualMachineInstance uid=e10e7c8e-93f1-11e8-a6c7-525500d15501 msg="VirtualMachineInstance defined." level=info timestamp=2018-07-30T12:13:12.960634Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmidlgfg kind=VirtualMachineInstance uid=e10e7c8e-93f1-11e8-a6c7-525500d15501 msg="VirtualMachineInstance started." level=info timestamp=2018-07-30T12:13:13.601231Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmidlgfg kind=VirtualMachineInstance uid=e10e7c8e-93f1-11e8-a6c7-525500d15501 msg="VirtualMachineInstance defined." STEP: Checking that VirtualMachineInstance has 'Failed' phase • [SLOW TEST:6.541 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 when virt-handler is responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:340 should indicate that a node is ready for vmis /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:341 ------------------------------ • [SLOW TEST:130.555 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 when virt-handler is not responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:371 the node controller should react /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:410 ------------------------------ • [SLOW TEST:19.573 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 with node tainted /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:463 the vmi with tolerations should be scheduled /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:485 ------------------------------ • ------------------------------ • [SLOW TEST:64.503 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:535 should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 kubevirt-test-default /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:74.131 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:535 should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 kubevirt-test-alternative /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.160 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:592 should enable emulation in virt-launcher [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:604 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:600 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.112 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:592 should be reflected in domain XML [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:641 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:600 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.170 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:592 should request a TUN device but not KVM [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:685 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:600 ------------------------------ •••• ------------------------------ • [SLOW TEST:62.556 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Delete a VirtualMachineInstance's Pod /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:837 should result in the VirtualMachineInstance moving to a finalized state /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:838 ------------------------------ • [SLOW TEST:49.944 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869 with an active pod. 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:870 should result in pod being terminated /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:871 ------------------------------ 2018/07/30 08:21:47 read closing down: EOF Pod name: disks-images-provider-qctjs Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-t6r7h Pod phase: Running copy all images to host mount directory Pod name: virt-api-7586947775-lfhnr Pod phase: Running level=info timestamp=2018-07-30T12:21:16.266008Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:21:20 http: TLS handshake error from 10.244.1.1:53918: EOF level=info timestamp=2018-07-30T12:21:23.129396Z pos=subresource.go:75 component=virt-api msg="Websocket connection upgraded" level=info timestamp=2018-07-30T12:21:29.494980Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:21:30 http: TLS handshake error from 10.244.1.1:53926: EOF level=info timestamp=2018-07-30T12:21:37.298851Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T12:21:37.305360Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:21:40 http: TLS handshake error from 10.244.1.1:53932: EOF level=info timestamp=2018-07-30T12:21:44.129574Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:21:46.410213Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=error timestamp=2018-07-30T12:21:48.611584Z pos=subresource.go:85 component=virt-api msg= 2018/07/30 12:21:48 http: response.WriteHeader on hijacked connection level=error timestamp=2018-07-30T12:21:48.612239Z pos=subresource.go:97 component=virt-api reason="read tcp 10.244.1.2:8443->10.244.0.0:50258: use of closed network connection" msg="error ecountered reading from websocket stream" level=info timestamp=2018-07-30T12:21:48.612461Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2/namespaces/kubevirt-test-default/virtualmachineinstances/testvminzq8r/console proto=HTTP/1.1 statusCode=200 contentLength=0 2018/07/30 12:21:50 http: TLS handshake error from 10.244.1.1:53938: EOF Pod name: virt-controller-7d57d96b65-8gsdc Pod phase: Running level=info timestamp=2018-07-30T11:12:34.032471Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-lkdkn Pod phase: Running level=info timestamp=2018-07-30T12:17:55.710758Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-alternative name=testvmi4cwl6 kind= uid=96d8cba6-93f2-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:17:55.712859Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-alternative name=testvmi4cwl6 kind= 
uid=96d8cba6-93f2-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:19:10.241049Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4rd7v kind= uid=c3429e1c-93f2-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:19:10.244407Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4rd7v kind= uid=c3429e1c-93f2-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:19:10.334791Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi4rd7v\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi4rd7v" level=info timestamp=2018-07-30T12:19:11.189412Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi77xml kind= uid=c3d253ed-93f2-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:19:11.189699Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi77xml kind= uid=c3d253ed-93f2-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:19:12.007815Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiq5rdr kind= uid=c45366d1-93f2-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:19:12.007948Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiq5rdr kind= uid=c45366d1-93f2-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:19:12.174801Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiq5rdr\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiq5rdr" level=info timestamp=2018-07-30T12:20:14.702889Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmisgp5d kind= uid=e9aca79a-93f2-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:20:14.703674Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmisgp5d kind= uid=e9aca79a-93f2-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:21:04.690516Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvminzq8r kind= uid=077975a5-93f3-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:21:04.690752Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvminzq8r kind= uid=077975a5-93f3-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:21:04.796245Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminzq8r\": the object has been modified; 
please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminzq8r" Pod name: virt-handler-7gm44 Pod phase: Running level=info timestamp=2018-07-30T12:19:09.254509Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-alternative name=testvmi4cwl6 kind= uid=96d8cba6-93f2-11e8-a6c7-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:19:09.259359Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-alternative name=testvmi4cwl6 kind= uid=96d8cba6-93f2-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:19:09.261015Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4cwl6, existing: true\n" level=info timestamp=2018-07-30T12:19:09.261741Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Failed\n" level=info timestamp=2018-07-30T12:19:09.261856Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:19:09.262022Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-alternative name=testvmi4cwl6 kind= uid=96d8cba6-93f2-11e8-a6c7-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:19:09.262954Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-alternative name=testvmi4cwl6 kind= uid=96d8cba6-93f2-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:19:09.309879Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4cwl6, existing: false\n" level=info timestamp=2018-07-30T12:19:09.310030Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:19:09.311840Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-alternative name=testvmi4cwl6 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:19:09.313213Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-alternative name=testvmi4cwl6 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:19:24.008027Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4cwl6, existing: false\n" level=info timestamp=2018-07-30T12:19:24.009219Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:19:24.009922Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-alternative name=testvmi4cwl6 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:19:24.010752Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-alternative name=testvmi4cwl6 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-nnzxj Pod phase: Running level=info timestamp=2018-07-30T12:21:29.613952Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmisgp5d, existing: false\n" level=info timestamp=2018-07-30T12:21:29.614026Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:21:29.614142Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmisgp5d kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." 
level=info timestamp=2018-07-30T12:21:29.614231Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmisgp5d kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:21:41.507718Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmisgp5d, existing: false\n" level=info timestamp=2018-07-30T12:21:41.508137Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:21:41.508379Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmisgp5d kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:21:41.508614Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmisgp5d kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:21:48.051966Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvminzq8r, existing: true\n" level=info timestamp=2018-07-30T12:21:48.052128Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T12:21:48.052183Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T12:21:48.052227Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T12:21:48.052376Z pos=vm.go:370 component=virt-handler namespace=kubevirt-test-default name=testvminzq8r kind= uid=077975a5-93f3-11e8-a6c7-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp." level=info timestamp=2018-07-30T12:21:48.052432Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvminzq8r kind= uid=077975a5-93f3-11e8-a6c7-525500d15501 msg="Processing shutdown." level=info timestamp=2018-07-30T12:21:48.055399Z pos=vm.go:556 component=virt-handler namespace=kubevirt-test-default name=testvminzq8r kind= uid=077975a5-93f3-11e8-a6c7-525500d15501 msg="Grace period expired, killing deleted VirtualMachineInstance testvminzq8r" Pod name: virt-launcher-testvminzq8r-mffmm Pod phase: Running • Failure [48.312 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869 with ACPI and 0 grace period seconds /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:895 should result in vmi status failed [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:896 Timed out after 5.000s. 
Expected : Running to equal : Failed /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:917 ------------------------------ STEP: Creating the VirtualMachineInstance level=info timestamp=2018-07-30T12:21:05.279163Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvminzq8r kind=VirtualMachineInstance uid=077975a5-93f3-11e8-a6c7-525500d15501 msg="Created virtual machine pod virt-launcher-testvminzq8r-mffmm" level=info timestamp=2018-07-30T12:21:20.593543Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvminzq8r kind=VirtualMachineInstance uid=077975a5-93f3-11e8-a6c7-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvminzq8r-mffmm" level=info timestamp=2018-07-30T12:21:22.542305Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvminzq8r kind=VirtualMachineInstance uid=077975a5-93f3-11e8-a6c7-525500d15501 msg="VirtualMachineInstance defined." level=info timestamp=2018-07-30T12:21:22.570018Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvminzq8r kind=VirtualMachineInstance uid=077975a5-93f3-11e8-a6c7-525500d15501 msg="VirtualMachineInstance started." STEP: Deleting the VirtualMachineInstance STEP: Verifying VirtualMachineInstance's status is Failed Pod name: disks-images-provider-qctjs Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-t6r7h Pod phase: Running copy all images to host mount directory Pod name: virt-api-7586947775-lfhnr Pod phase: Running 2018/07/30 12:23:50 http: TLS handshake error from 10.244.1.1:54010: EOF level=info timestamp=2018-07-30T12:23:59.388457Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:24:00 http: TLS handshake error from 10.244.1.1:54016: EOF 2018/07/30 12:24:10 http: TLS handshake error from 10.244.1.1:54022: EOF level=info timestamp=2018-07-30T12:24:14.826744Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:24:16.972386Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:24:20 http: TLS handshake error from 10.244.1.1:54028: EOF level=info timestamp=2018-07-30T12:24:29.299091Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:24:30 http: TLS handshake error from 10.244.1.1:54034: EOF level=info timestamp=2018-07-30T12:24:37.323791Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T12:24:37.327845Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:24:40 http: TLS handshake error from 10.244.1.1:54040: EOF level=info timestamp=2018-07-30T12:24:44.953472Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:24:47.086115Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET 
url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:24:50 http: TLS handshake error from 10.244.1.1:54046: EOF Pod name: virt-controller-7d57d96b65-8gsdc Pod phase: Running level=info timestamp=2018-07-30T11:12:34.032471Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-lkdkn Pod phase: Running level=info timestamp=2018-07-30T12:19:10.334791Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi4rd7v\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi4rd7v" level=info timestamp=2018-07-30T12:19:11.189412Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi77xml kind= uid=c3d253ed-93f2-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:19:11.189699Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi77xml kind= uid=c3d253ed-93f2-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:19:12.007815Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiq5rdr kind= uid=c45366d1-93f2-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:19:12.007948Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiq5rdr kind= uid=c45366d1-93f2-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:19:12.174801Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiq5rdr\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiq5rdr" level=info timestamp=2018-07-30T12:20:14.702889Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmisgp5d kind= uid=e9aca79a-93f2-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:20:14.703674Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmisgp5d kind= uid=e9aca79a-93f2-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:21:04.690516Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvminzq8r kind= uid=077975a5-93f3-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:21:04.690752Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvminzq8r kind= uid=077975a5-93f3-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:21:04.796245Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminzq8r\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminzq8r" 
level=info timestamp=2018-07-30T12:21:52.933666Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikq694 kind= uid=243d0654-93f3-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:21:52.933921Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikq694 kind= uid=243d0654-93f3-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:21:53.184669Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmikq694\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmikq694" level=info timestamp=2018-07-30T12:21:53.287652Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmikq694\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmikq694" Pod name: virt-handler-7gm44 Pod phase: Running level=info timestamp=2018-07-30T12:19:09.254509Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-alternative name=testvmi4cwl6 kind= uid=96d8cba6-93f2-11e8-a6c7-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:19:09.259359Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-alternative name=testvmi4cwl6 kind= uid=96d8cba6-93f2-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:19:09.261015Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4cwl6, existing: true\n" level=info timestamp=2018-07-30T12:19:09.261741Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Failed\n" level=info timestamp=2018-07-30T12:19:09.261856Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:19:09.262022Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-alternative name=testvmi4cwl6 kind= uid=96d8cba6-93f2-11e8-a6c7-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:19:09.262954Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-alternative name=testvmi4cwl6 kind= uid=96d8cba6-93f2-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:19:09.309879Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4cwl6, existing: false\n" level=info timestamp=2018-07-30T12:19:09.310030Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:19:09.311840Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-alternative name=testvmi4cwl6 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:19:09.313213Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-alternative name=testvmi4cwl6 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T12:19:24.008027Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4cwl6, existing: false\n" level=info timestamp=2018-07-30T12:19:24.009219Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:19:24.009922Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-alternative name=testvmi4cwl6 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:19:24.010752Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-alternative name=testvmi4cwl6 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-nnzxj Pod phase: Running level=error timestamp=2018-07-30T12:22:38.649534Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvminzq8r kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance failed." level=info timestamp=2018-07-30T12:22:38.649646Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvminzq8r" level=info timestamp=2018-07-30T12:22:44.608157Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvminzq8r kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-30T12:22:44.609769Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvminzq8r, existing: false\n" level=info timestamp=2018-07-30T12:22:44.610003Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:22:44.610183Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminzq8r kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:22:44.611309Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvminzq8r kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:22:44.613388Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvminzq8r, existing: false\n" level=info timestamp=2018-07-30T12:22:44.613582Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:22:44.613734Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminzq8r kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:22:44.613939Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvminzq8r kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:22:59.130668Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvminzq8r, existing: false\n" level=info timestamp=2018-07-30T12:22:59.131065Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:22:59.131450Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminzq8r kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:22:59.131834Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvminzq8r kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
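Annotation on the "object has been modified" entries above: the virt-controller log repeatedly reports "Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io ...: the object has been modified; please apply your changes to the latest version and try again" and then re-enqueues the VMI. That is the API server rejecting an update made with a stale resourceVersion (an optimistic-concurrency conflict, HTTP 409); re-enqueuing and retrying against the latest object is the usual way such conflicts are absorbed, so on their own these lines do not indicate a test failure. For illustration only, a minimal sketch of the generic client-go conflict-retry pattern (not the KubeVirt controller's actual code; the resource and VMI name are copied from the log, everything else is assumed):

package main

import (
	"errors"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/util/retry"
)

func main() {
	gr := schema.GroupResource{Group: "kubevirt.io", Resource: "virtualmachineinstances"}
	attempt := 0

	// RetryOnConflict re-runs the update closure whenever it returns a 409
	// Conflict, which is roughly what the controller achieves by re-enqueuing.
	err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
		attempt++
		// A real controller would re-GET the object here and re-apply its change.
		if attempt == 1 {
			// Simulate the API server's "the object has been modified" rejection.
			return apierrors.NewConflict(gr, "testvmikq694",
				errors.New("please apply your changes to the latest version and try again"))
		}
		return nil // the retried update succeeds against the refreshed resourceVersion
	})
	fmt.Printf("update settled after %d attempt(s), err=%v\n", attempt, err)
}
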
Pod name: virt-launcher-testvmikq694-zzznb Pod phase: Running panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x488cf3] goroutine 10 [running]: io.copyBuffer(0x142d000, 0xc42000e018, 0x0, 0x0, 0xc421930000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:400 +0x143 io.Copy(0x142d000, 0xc42000e018, 0x0, 0x0, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:362 +0x5a kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor.func1(0xc4200d6840, 0xc42008c240) /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:264 +0xb4 created by kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:261 +0x15f • Failure [180.731 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869 with ACPI and some grace period seconds /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:920 should result in vmi status succeeded [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:921 Timed out after 90.010s. Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1046 ------------------------------ STEP: Creating the VirtualMachineInstance level=info timestamp=2018-07-30T12:21:53.706036Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmikq694 kind=VirtualMachineInstance uid=243d0654-93f3-11e8-a6c7-525500d15501 msg="Created virtual machine pod virt-launcher-testvmikq694-zzznb" • [SLOW TEST:52.537 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869 with grace period greater than 0 /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:945 should run graceful shutdown /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:946 ------------------------------ Pod name: disks-images-provider-qctjs Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-t6r7h Pod phase: Running copy all images to host mount directory Pod name: virt-api-7586947775-lfhnr Pod phase: Running 2018/07/30 12:26:20 http: TLS handshake error from 10.244.1.1:54100: EOF level=info timestamp=2018-07-30T12:26:29.017277Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T12:26:29.034416Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T12:26:29.358799Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:26:30 http: TLS handshake error from 10.244.1.1:54106: EOF level=info timestamp=2018-07-30T12:26:37.497218Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T12:26:37.502942Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 
statusCode=404 contentLength=19 2018/07/30 12:26:40 http: TLS handshake error from 10.244.1.1:54112: EOF level=info timestamp=2018-07-30T12:26:45.408437Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:26:47.489140Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:26:50 http: TLS handshake error from 10.244.1.1:54118: EOF level=info timestamp=2018-07-30T12:26:59.336696Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:27:00 http: TLS handshake error from 10.244.1.1:54124: EOF 2018/07/30 12:27:10 http: TLS handshake error from 10.244.1.1:54130: EOF level=info timestamp=2018-07-30T12:27:15.527654Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-controller-7d57d96b65-8gsdc Pod phase: Running level=info timestamp=2018-07-30T11:12:34.032471Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-lkdkn Pod phase: Running level=info timestamp=2018-07-30T12:20:14.702889Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmisgp5d kind= uid=e9aca79a-93f2-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:20:14.703674Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmisgp5d kind= uid=e9aca79a-93f2-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:21:04.690516Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvminzq8r kind= uid=077975a5-93f3-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:21:04.690752Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvminzq8r kind= uid=077975a5-93f3-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:21:04.796245Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminzq8r\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminzq8r" level=info timestamp=2018-07-30T12:21:52.933666Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikq694 kind= uid=243d0654-93f3-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:21:52.933921Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikq694 kind= uid=243d0654-93f3-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:21:53.184669Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmikq694\": the object has been 
modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmikq694" level=info timestamp=2018-07-30T12:21:53.287652Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmikq694\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmikq694" level=info timestamp=2018-07-30T12:24:53.744063Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi65kmj kind= uid=9001d5a8-93f3-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:24:53.758509Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi65kmj kind= uid=9001d5a8-93f3-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:25:46.221428Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirjgpw kind= uid=af4afe57-93f3-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:25:46.223190Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirjgpw kind= uid=af4afe57-93f3-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:25:46.290916Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmirjgpw\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmirjgpw" level=info timestamp=2018-07-30T12:25:46.328292Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmirjgpw\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmirjgpw" Pod name: virt-handler-7gm44 Pod phase: Running level=error timestamp=2018-07-30T12:26:05.321346Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmi65kmj kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance failed." level=info timestamp=2018-07-30T12:26:05.321468Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmi65kmj" level=info timestamp=2018-07-30T12:26:09.196251Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmi65kmj kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-30T12:26:09.196923Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi65kmj, existing: false\n" level=info timestamp=2018-07-30T12:26:09.196986Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:26:09.198206Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi65kmj kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." 
level=info timestamp=2018-07-30T12:26:09.199049Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi65kmj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:26:09.199766Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi65kmj, existing: false\n" level=info timestamp=2018-07-30T12:26:09.199862Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:26:09.200012Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi65kmj kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:26:09.200243Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi65kmj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:26:25.802279Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi65kmj, existing: false\n" level=info timestamp=2018-07-30T12:26:25.802587Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:26:25.802804Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi65kmj kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:26:25.803032Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi65kmj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-nnzxj Pod phase: Running level=info timestamp=2018-07-30T12:26:05.816937Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmirjgpw kind=Domain uid=af4afe57-93f3-11e8-a6c7-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-30T12:26:05.873367Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-30T12:26:05.875144Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmirjgpw kind= uid=af4afe57-93f3-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:26:05.875344Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmirjgpw, existing: true\n" level=info timestamp=2018-07-30T12:26:05.875411Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n" level=info timestamp=2018-07-30T12:26:05.875478Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T12:26:05.875596Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T12:26:05.875748Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmirjgpw kind= uid=af4afe57-93f3-11e8-a6c7-525500d15501 msg="No update processing required" level=info timestamp=2018-07-30T12:26:05.902055Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmirjgpw kind= uid=af4afe57-93f3-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T12:26:05.906878Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmirjgpw, existing: true\n" level=info timestamp=2018-07-30T12:26:05.906995Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-30T12:26:05.907068Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-30T12:26:05.907115Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-30T12:26:05.907269Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmirjgpw kind= uid=af4afe57-93f3-11e8-a6c7-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-30T12:26:05.918063Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmirjgpw kind= uid=af4afe57-93f3-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." Pod name: virt-launcher-testvmirjgpw-nlgm7 Pod phase: Running Pod name: vmi-killerkmclp Pod phase: Succeeded • Failure [89.681 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Killed VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:997 should be in Failed phase [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:998 Expected : Running to equal : Failed /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:1021 ------------------------------ STEP: Starting a VirtualMachineInstance level=info timestamp=2018-07-30T12:25:46.793957Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmirjgpw kind=VirtualMachineInstance uid=af4afe57-93f3-11e8-a6c7-525500d15501 msg="Created virtual machine pod virt-launcher-testvmirjgpw-nlgm7" level=info timestamp=2018-07-30T12:26:03.637954Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmirjgpw kind=VirtualMachineInstance uid=af4afe57-93f3-11e8-a6c7-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmirjgpw-nlgm7" level=info timestamp=2018-07-30T12:26:05.558173Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmirjgpw kind=VirtualMachineInstance uid=af4afe57-93f3-11e8-a6c7-525500d15501 msg="VirtualMachineInstance defined." level=info timestamp=2018-07-30T12:26:05.586771Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmirjgpw kind=VirtualMachineInstance uid=af4afe57-93f3-11e8-a6c7-525500d15501 msg="VirtualMachineInstance started." STEP: Killing the VirtualMachineInstance level=info timestamp=2018-07-30T12:26:15.787236Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmirjgpw kind=VirtualMachineInstance uid=af4afe57-93f3-11e8-a6c7-525500d15501 msg="Created virtual machine pod virt-launcher-testvmirjgpw-nlgm7" level=info timestamp=2018-07-30T12:26:15.787386Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmirjgpw kind=VirtualMachineInstance uid=af4afe57-93f3-11e8-a6c7-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmirjgpw-nlgm7" level=info timestamp=2018-07-30T12:26:15.787957Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmirjgpw kind=VirtualMachineInstance uid=af4afe57-93f3-11e8-a6c7-525500d15501 msg="VirtualMachineInstance defined." 
level=info timestamp=2018-07-30T12:26:15.788116Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmirjgpw kind=VirtualMachineInstance uid=af4afe57-93f3-11e8-a6c7-525500d15501 msg="VirtualMachineInstance started." STEP: Checking that the VirtualMachineInstance has 'Failed' phase • [SLOW TEST:84.435 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Killed VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:997 should be left alone by virt-handler /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:1025 ------------------------------ • [SLOW TEST:35.996 seconds] 2018/07/30 08:29:16 read closing down: EOF Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 2018/07/30 08:29:52 read closing down: EOF • [SLOW TEST:35.996 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with CDRom PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 2018/07/30 08:32:48 read closing down: EOF • [SLOW TEST:232.826 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 2018/07/30 08:36:55 read closing down: EOF • [SLOW TEST:239.843 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with CDRom PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:46.324 seconds] Storage 2018/07/30 08:38:31 read closing down: EOF /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With an emptyDisk defined /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:113 should create a writeable emptyDisk with the right capacity /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:115 ------------------------------ • [SLOW TEST:45.956 seconds] 2018/07/30 08:39:17 read closing down: EOF Storage 
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With an emptyDisk defined and a specified serial number /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:163 should create a writeable emptyDisk with the specified serial number /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:165 ------------------------------ • [SLOW TEST:36.714 seconds] Storage 2018/07/30 08:39:54 read closing down: EOF /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205 should be successfully started /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:207 ------------------------------ • [SLOW TEST:131.429 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205 2018/07/30 08:42:05 read closing down: EOF 2018/07/30 08:42:05 read closing down: EOF should not persist data /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:218 ------------------------------ Pod name: disks-images-provider-qctjs Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-t6r7h Pod phase: Running copy all images to host mount directory Pod name: virt-api-7586947775-lfhnr Pod phase: Running 2018/07/30 12:44:10 http: TLS handshake error from 10.244.1.1:54760: EOF level=info timestamp=2018-07-30T12:44:19.774994Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:44:20 http: TLS handshake error from 10.244.1.1:54766: EOF level=info timestamp=2018-07-30T12:44:21.691856Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:44:29.395894Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:44:30 http: TLS handshake error from 10.244.1.1:54772: EOF level=info timestamp=2018-07-30T12:44:37.828954Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-30T12:44:37.833848Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:44:40 http: TLS handshake error from 10.244.1.1:54778: EOF level=info timestamp=2018-07-30T12:44:49.899189Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/30 12:44:50 http: TLS handshake error from 10.244.1.1:54784: EOF level=info timestamp=2018-07-30T12:44:51.821583Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-30T12:44:59.394295Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- 
method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/30 12:45:00 http: TLS handshake error from 10.244.1.1:54790: EOF 2018/07/30 12:45:10 http: TLS handshake error from 10.244.1.1:54796: EOF Pod name: virt-controller-7d57d96b65-8gsdc Pod phase: Running level=info timestamp=2018-07-30T11:12:34.032471Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-lkdkn Pod phase: Running level=info timestamp=2018-07-30T12:38:31.372955Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6wb72 kind= uid=775b6697-93f5-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:39:17.339239Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi7tcpb kind= uid=92c13ad4-93f5-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:39:17.340163Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi7tcpb kind= uid=92c13ad4-93f5-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:39:17.477148Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi7tcpb\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi7tcpb" level=info timestamp=2018-07-30T12:39:54.273971Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixr5pg kind= uid=a8c2aa3e-93f5-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:39:54.278007Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixr5pg kind= uid=a8c2aa3e-93f5-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:39:54.471784Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixr5pg\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixr5pg" level=info timestamp=2018-07-30T12:41:29.322020Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixr5pg kind= uid=e168c0ef-93f5-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:41:29.323879Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixr5pg kind= uid=e168c0ef-93f5-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:42:05.566867Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmic6qcb kind= uid=f700ad49-93f5-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:42:05.567791Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmic6qcb kind= uid=f700ad49-93f5-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:43:14.276412Z pos=preset.go:142 
component=virt-controller service=http namespace=kubevirt-test-default name=testvmic6qcb kind= uid=1ff83134-93f6-11e8-a6c7-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-30T12:43:14.279599Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmic6qcb kind= uid=1ff83134-93f6-11e8-a6c7-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-30T12:43:14.614535Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmic6qcb\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmic6qcb" level=info timestamp=2018-07-30T12:43:14.640488Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmic6qcb\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmic6qcb" Pod name: virt-handler-7gm44 Pod phase: Running level=error timestamp=2018-07-30T12:40:07.845931Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmi6wb72 kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance failed." level=info timestamp=2018-07-30T12:40:07.846075Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmi6wb72" level=info timestamp=2018-07-30T12:40:09.196399Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmi6wb72 kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-30T12:40:09.197064Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi6wb72, existing: false\n" level=info timestamp=2018-07-30T12:40:09.197912Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:40:09.198064Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi6wb72 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:40:09.200247Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi6wb72 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:40:09.200563Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi6wb72, existing: false\n" level=info timestamp=2018-07-30T12:40:09.200617Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:40:09.200717Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi6wb72 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:40:09.200883Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi6wb72 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-30T12:40:28.327416Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi6wb72, existing: false\n" level=info timestamp=2018-07-30T12:40:28.328272Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:40:28.328760Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi6wb72 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:40:28.329215Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi6wb72 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-nnzxj Pod phase: Running level=info timestamp=2018-07-30T12:43:14.658310Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:43:14.658478Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmic6qcb kind= uid=f700ad49-93f5-11e8-a6c7-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:43:14.658846Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmic6qcb kind= uid=f700ad49-93f5-11e8-a6c7-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:43:14.699635Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmic6qcb, existing: false\n" level=info timestamp=2018-07-30T12:43:14.699833Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:43:14.700059Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmic6qcb kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:43:14.700276Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmic6qcb kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:43:16.080244Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmic6qcb, existing: false\n" level=info timestamp=2018-07-30T12:43:16.080393Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:43:16.080592Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmic6qcb kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:43:16.080775Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmic6qcb kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-30T12:43:17.130676Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmixr5pg, existing: false\n" level=info timestamp=2018-07-30T12:43:17.130874Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-30T12:43:17.131075Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmixr5pg kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-30T12:43:17.131361Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmixr5pg kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
Pod name: virt-launcher-testvmic6qcb-7qdnc Pod phase: Failed panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x488cf3] goroutine 8 [running]: io.copyBuffer(0x142d000, 0xc4200b4010, 0x0, 0x0, 0xc420274000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:400 +0x143 io.Copy(0x142d000, 0xc4200b4010, 0x0, 0x0, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:362 +0x5a kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor.func2(0xc420012160, 0xc42008c180) /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:272 +0xb4 created by kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util.ForkAndMonitor /root/go/src/kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/util/libvirt_helper.go:269 +0x191 • Failure [189.744 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With VirtualMachineInstance with two PVCs /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:266 should start vmi multiple times [It] /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:278 Timed out after 120.010s. Timed out waiting for VMI to enter Running phase Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1046 ------------------------------ STEP: Starting and stopping the VirtualMachineInstance number of times STEP: Starting a VirtualMachineInstance STEP: Waiting until the VirtualMachineInstance will start level=info timestamp=2018-07-30T12:42:06.263808Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmic6qcb kind=VirtualMachineInstance uid=f700ad49-93f5-11e8-a6c7-525500d15501 msg="Created virtual machine pod virt-launcher-testvmic6qcb-ckcw4" level=info timestamp=2018-07-30T12:42:22.778651Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmic6qcb kind=VirtualMachineInstance uid=f700ad49-93f5-11e8-a6c7-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmic6qcb-ckcw4" level=info timestamp=2018-07-30T12:42:24.893544Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmic6qcb kind=VirtualMachineInstance uid=f700ad49-93f5-11e8-a6c7-525500d15501 msg="VirtualMachineInstance defined." level=info timestamp=2018-07-30T12:42:24.922444Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmic6qcb kind=VirtualMachineInstance uid=f700ad49-93f5-11e8-a6c7-525500d15501 msg="VirtualMachineInstance started." 
STEP: Starting a VirtualMachineInstance STEP: Waiting until the VirtualMachineInstance will start level=info timestamp=2018-07-30T12:43:14.970729Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmic6qcb kind=VirtualMachineInstance uid=f700ad49-93f5-11e8-a6c7-525500d15501 msg="Created virtual machine pod virt-launcher-testvmic6qcb-ckcw4" level=info timestamp=2018-07-30T12:43:14.970914Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmic6qcb kind=VirtualMachineInstance uid=f700ad49-93f5-11e8-a6c7-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmic6qcb-ckcw4" level=info timestamp=2018-07-30T12:43:14.971627Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmic6qcb kind=VirtualMachineInstance uid=f700ad49-93f5-11e8-a6c7-525500d15501 msg="Deleted virtual machine pod virt-launcher-testvmic6qcb-ckcw4" level=info timestamp=2018-07-30T12:43:14.972063Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmic6qcb kind=VirtualMachineInstance uid=f700ad49-93f5-11e8-a6c7-525500d15501 msg="VirtualMachineInstance defined." level=info timestamp=2018-07-30T12:43:14.972341Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmic6qcb kind=VirtualMachineInstance uid=f700ad49-93f5-11e8-a6c7-525500d15501 msg="VirtualMachineInstance started." • [SLOW TEST:5.005 seconds] Subresource Api /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37 Rbac Authorization /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48 with correct permissions /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:51 should be allowed to access subresource endpoint /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:52 ------------------------------ •••panic: test timed out after 1h30m0s goroutine 9991 [running]: testing.(*M).startAlarm.func1() /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1240 +0xfc created by time.goFunc /gimme/.gimme/versions/go1.10.linux.amd64/src/time/sleep.go:172 +0x44 goroutine 1 [chan receive, 90 minutes]: testing.(*T).Run(0xc420249e00, 0x139e76f, 0x9, 0x1430ca0, 0x4801e6) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:825 +0x301 testing.runTests.func1(0xc420249d10) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1063 +0x64 testing.tRunner(0xc420249d10, 0xc420883df8) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:777 +0xd0 testing.runTests(0xc4207886c0, 0x1d32a50, 0x1, 0x1, 0x412009) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1061 +0x2c4 testing.(*M).Run(0xc42094b980, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:978 +0x171 main.main() _testmain.go:44 +0x151 goroutine 35 [chan receive]: kubevirt.io/kubevirt/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x1d5e280) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/golang/glog/glog.go:879 +0x8b created by kubevirt.io/kubevirt/vendor/github.com/golang/glog.init.0 /root/go/src/kubevirt.io/kubevirt/vendor/github.com/golang/glog/glog.go:410 +0x203 goroutine 36 [syscall, 90 minutes]: os/signal.signal_recv(0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/sigqueue.go:139 +0xa6 os/signal.loop() /gimme/.gimme/versions/go1.10.linux.amd64/src/os/signal/signal_unix.go:22 +0x22 created by os/signal.init.0 /gimme/.gimme/versions/go1.10.linux.amd64/src/os/signal/signal_unix.go:28 +0x41 goroutine 5 [select]: 
kubevirt.io/kubevirt/pkg/kubecli.(*vmis).SerialConsole(0xc420d14e40, 0xc4207f47d0, 0xc, 0x6fc23ac00, 0xc420d14e40, 0x0, 0x0, 0x33) /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:282 +0x175 kubevirt.io/kubevirt/tests.NewConsoleExpecter(0x14f1300, 0xc4203eab00, 0xc420936c80, 0x6fc23ac00, 0x0, 0x0, 0x0, 0xc9, 0x14b8900, 0xc4205e5e60, ...) /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1155 +0x34c kubevirt.io/kubevirt/tests_test.glob..func2.2(0xc420936c80, 0x13adfe7, 0x16) /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:54 +0x25a kubevirt.io/kubevirt/tests_test.glob..func2.3.1.1.1() /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:70 +0x98 kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc42042d920, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0x9c kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc42042d920, 0x3, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x13e kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc420112660, 0x14b6d40, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x7f kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc4206caf00, 0x0, 0x14b6d40, 0xc420109480) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:203 +0x648 kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc4206caf00, 0x14b6d40, 0xc420109480) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xff kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc4206fc500, 0xc4206caf00, 0x0) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10d kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc4206fc500, 0x1) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x329 kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc4206fc500, 0xb) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x11b kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc4200feaf0, 0x7fe927ee4de0, 0xc420249e00, 0x13a0d52, 0xb, 0xc420788700, 0x2, 0x2, 0x14d3600, 0xc420109480, ...) 
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x27c kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x14b7da0, 0xc420249e00, 0x13a0d52, 0xb, 0xc4207886e0, 0x2, 0x2, 0x2) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:221 +0x258 kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x14b7da0, 0xc420249e00, 0x13a0d52, 0xb, 0xc42050f420, 0x1, 0x1, 0x1) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:209 +0xab kubevirt.io/kubevirt/tests_test.TestTests(0xc420249e00) /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:43 +0xaa testing.tRunner(0xc420249e00, 0x1430ca0) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:777 +0xd0 created by testing.(*T).Run /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:824 +0x2e0 goroutine 6 [chan receive, 90 minutes]: kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).registerForInterrupts(0xc4206fc500, 0xc4200deea0) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:223 +0xd1 created by kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:60 +0x88 goroutine 7 [select, 90 minutes, locked to thread]: runtime.gopark(0x1432e78, 0x0, 0x139b291, 0x6, 0x18, 0x1) /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/proc.go:291 +0x11a runtime.selectgo(0xc420094f50, 0xc4200def60) /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/select.go:392 +0xe50 runtime.ensureSigM.func1() /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/signal_unix.go:549 +0x1f4 runtime.goexit() /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/asm_amd64.s:2361 +0x1 goroutine 71 [IO wait]: internal/poll.runtime_pollWait(0x7fe927f6af00, 0x72, 0xc420f35850) /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/netpoll.go:173 +0x57 internal/poll.(*pollDesc).wait(0xc42096e318, 0x72, 0xffffffffffffff00, 0x14b8f60, 0x1c497d0) /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_poll_runtime.go:85 +0x9b internal/poll.(*pollDesc).waitRead(0xc42096e318, 0xc4205b8000, 0x8000, 0x8000) /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_poll_runtime.go:90 +0x3d internal/poll.(*FD).Read(0xc42096e300, 0xc4205b8000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_unix.go:157 +0x17d net.(*netFD).Read(0xc42096e300, 0xc4205b8000, 0x8000, 0x8000, 0x0, 0x8, 0x7ffb) /gimme/.gimme/versions/go1.10.linux.amd64/src/net/fd_unix.go:202 +0x4f net.(*conn).Read(0xc42051c6b8, 0xc4205b8000, 0x8000, 0x8000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/net/net.go:176 +0x6a crypto/tls.(*block).readFromUntil(0xc4206f8fc0, 0x7fe9243d7410, 0xc42051c6b8, 0x5, 0xc42051c6b8, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:493 +0x96 crypto/tls.(*Conn).readRecord(0xc420979500, 0x1432f17, 0xc420979620, 0x20) /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:595 +0xe0 crypto/tls.(*Conn).Read(0xc420979500, 0xc4207e1000, 0x1000, 0x1000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:1156 +0x100 bufio.(*Reader).Read(0xc420990cc0, 0xc42082c2d8, 0x9, 0x9, 0xc4212ed258, 0xc4206e5c00, 0xc420f35d10) /gimme/.gimme/versions/go1.10.linux.amd64/src/bufio/bufio.go:216 
+0x238 io.ReadAtLeast(0x14b5b40, 0xc420990cc0, 0xc42082c2d8, 0x9, 0x9, 0x9, 0xc420f35ce0, 0xc420f35ce0, 0x406614) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:309 +0x86 io.ReadFull(0x14b5b40, 0xc420990cc0, 0xc42082c2d8, 0x9, 0x9, 0xc4212ed200, 0xc420f35d10, 0xc400004101) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:327 +0x58 kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.readFrameHeader(0xc42082c2d8, 0x9, 0x9, 0x14b5b40, 0xc420990cc0, 0x0, 0xc400000000, 0x7ef9ad, 0xc420f35fb0) /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/frame.go:237 +0x7b kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc42082c2a0, 0xc42045e780, 0x0, 0x0, 0x0) /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/frame.go:492 +0xa4 kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc420f35fb0, 0x1431bf8, 0xc420475fb0) /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:1428 +0x8e kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc420382d00) /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:1354 +0x76 created by kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*Transport).newClientConn /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:579 +0x651 goroutine 3733 [chan send, 44 minutes]: kubevirt.io/kubevirt/tests_test.glob..func23.1.2.1.1(0x14f1300, 0xc42092a3c0, 0xc4201721b0, 0xc420e588a0, 0xc42051c918, 0xc42051c938) /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:81 +0x138 created by kubevirt.io/kubevirt/tests_test.glob..func23.1.2.1 /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:73 +0x386 goroutine 9976 [sleep]: time.Sleep(0x3b9aca00) /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/time.go:102 +0x166 kubevirt.io/kubevirt/pkg/kubecli.(*vmis).SerialConsole.func1(0xc420d14e40, 0xc4207f47d0, 0xc, 0xc4204115a9, 0xc420e599e0) /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:266 +0x92 created by kubevirt.io/kubevirt/pkg/kubecli.(*vmis).SerialConsole /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:260 +0xd8 goroutine 8236 [chan send, 14 minutes]: kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc420416f60) /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114 created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8 goroutine 9215 [chan send, 3 minutes]: kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc4205c0510) /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114 created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8 goroutine 4916 [chan send, 42 minutes]: kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc4203e82a0) /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114 created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8 goroutine 8082 [chan send, 15 minutes]: kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc4206f8240) 
/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114 created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8 goroutine 8486 [chan send, 11 minutes]: kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc42078ccc0) /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114 created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8 goroutine 8647 [chan send, 9 minutes]: kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc4208a4870) /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114 created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8 make: *** [functest] Error 2 + make cluster-down ./cluster/down.sh
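Post-run annotation on the failure signatures above (read off the log, with assumptions marked). Both crashed launcher pods (virt-launcher-testvmikq694-zzznb and virt-launcher-testvmic6qcb-7qdnc) panic with the same nil pointer dereference inside io.Copy, in goroutines started by ForkAndMonitor (pkg/virt-launcher/virtwrap/util/libvirt_helper.go:264 and :272). The trace is consistent with the copy loop calling Read on a reader that is a nil interface, i.e. an output pipe of the forked process that was never created; that reading of the trace is an assumption, not something the log states. The dependent lifecycle and storage tests then time out waiting for their VMIs to reach Running, and after 1h30m the go test alarm fires ("panic: test timed out after 1h30m0s"), aborting the suite and producing the final make: *** [functest] Error 2. Below is a minimal sketch of the first failure class, assuming a nil reader is what reaches io.Copy; the variable pipe is a hypothetical stand-in, not taken from the KubeVirt source:

package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	var pipe io.ReadCloser // nil interface: stands in for a child pipe that was never set up

	defer func() {
		// Recover only so the demo prints the panic instead of crashing the
		// process, the way the virt-launcher streaming goroutine crashed the pod.
		if r := recover(); r != nil {
			fmt.Println("panic:", r) // "invalid memory address or nil pointer dereference"
		}
	}()

	// A guard such as `if pipe == nil { return }` before the copy (an assumption,
	// not the upstream fix) would turn this crash into a handled condition.
	_, _ = io.Copy(os.Stdout, pipe) // calling Read through a nil interface panics
}
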