+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ [[ k8s-1.10.3-release =~ openshift-.* ]]
+ [[ k8s-1.10.3-release =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/07/19 07:41:21 Waiting for host: 192.168.66.101:22
2018/07/19 07:41:24 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/19 07:41:32 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/19 07:41:37 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 32.507270 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:11b5ad84c18b07c6bf5be0c1a0bf38c6af560b14b90110e61ba359796a55fa29

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/19 07:42:25 Waiting for host: 192.168.66.102:22
2018/07/19 07:42:28 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/19 07:42:36 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/19 07:42:41 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: connection refused. Sleeping 5s
2018/07/19 07:42:46 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
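The repeated `Waiting for host … Problem with dial … Sleeping 5s` records come from a simple dial-and-retry loop in the provisioning tooling. A minimal sketch of that pattern, with `probe` as a hypothetical stand-in for the real TCP dial (for example `nc -z -w1 "$host" "$port"`) and the 5-second interval taken from the log:

```shell
# Hypothetical sketch of the "wait for host" loop seen in the log.
# `probe` stands in for the real TCP dial (e.g. `nc -z -w1 "$host" "$port"`).
wait_for_host() {
  local host=$1 port=$2
  until probe "$host" "$port"; do
    # Mirror the log message, then back off before the next dial attempt.
    echo "Problem with dial: dial tcp $host:$port. Sleeping 5s"
    sleep 5
  done
  echo "Connected to tcp://$host:$port"
}
```

The loop retries indefinitely; the real tooling behaves the same way and relies on an outer job timeout to abort a node that never comes up.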
[discovery] Trying to connect to API Server "192.168.66.101:6443"
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01   Ready    master   52s   v1.10.3
node02   Ready    <none>   20s   v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    54s       v1.10.3
node02    Ready     <none>    22s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
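The `grep NotReady` step above is the script's readiness gate: it proceeds only once no node reports `NotReady`. A sketch of that check, with `get_nodes` as a hypothetical stand-in for `kubectl get nodes --no-headers`:

```shell
# Succeeds only when no node in the listing reports NotReady.
# `get_nodes` stands in for `kubectl get nodes --no-headers`.
all_nodes_ready() {
  # grep -q exits 0 when it finds NotReady, so invert the result.
  ! get_nodes | grep -q NotReady
}
```

In the log the `grep` output is empty, so the `'[' -n '' ']'` test fails and the script prints `Nodes are ready:` and moves on to `make cluster-sync`.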
Untagged: localhost:32847/kubevirt/virt-controller:devel
Untagged: localhost:32847/kubevirt/virt-controller@sha256:d87248aee5fae7805fb15ff63f7823214d9df62fe44cb9200de7ab61e5f80b0a
Deleted: sha256:ec240aa40c03a7eb63569f4d7aa4975f84ec567543c917795e256281334eb4de
Deleted: sha256:928bc7dbb5fa903442a030365007f0a51379ba6f2aee35cfad0f93c98b31016e
Deleted: sha256:8629e526d49d823aefe14c11261ba93cfef4d88d3d0994d98328796f7fc3ec90
Deleted: sha256:b89050b169bcebb29829c74208d6b3db24edfc3f53cd3002c749a7e3055d6b30
Untagged: localhost:32847/kubevirt/virt-launcher:devel
Untagged: localhost:32847/kubevirt/virt-launcher@sha256:e5941f6e98e6a44f62c84eeec68a870f411e2fb285929863784628c197bc6c4a
Deleted: sha256:fed08505abe9a544923296e3117c2679f661261945e1a4681948eb82a0b73d7f
Deleted: sha256:05290600428fb566fbe1e0a5f12b83dca30cbc9ba85c64926da6243755e1e350
Deleted: sha256:76041aa7e053156f3ec6524ea6968e88e1917f506b157a772dc76f13861029a0
Deleted: sha256:26cc4200fdd22eec1d9d8b9c11a0720b03aea76531b079488dd65381d03be073
Deleted: sha256:281ff744b6e58bcf6da8965c1340aebd69b3fe4810f3e286372ec85608b1a5ce
Deleted: sha256:68dff28a0c5960671a696fc8f1affdc300448ef25cb5d83c643f43d718dcdd04
Deleted: sha256:5ca13d6b06dd4000c0c85d50c27049bff5492fb34e5024d0c094b3bbdba521cc
Deleted: sha256:196235b8dfb6ddc5b1a2b1c0ba94173f0b1c05e6edb1d051a0bdd95f31f152b1
Deleted: sha256:8c000a71f3d6e71dff2916c1738ffd7395f2aae6a1828925789189bf6fadd397
Deleted: sha256:5756758f7b2111a1bfafedc14787d08ef4189d7e75328764f2b8bffc3ec0f9fc
Deleted: sha256:2dbb7bf560013914a5b941cbfe8a564fc49f97fdfbf45d5df7027358b15e666d
Deleted: sha256:2831e811fb5e561d5607ea69acbc2863806717b6d614bf6886c41a45925f068d
Untagged: localhost:32847/kubevirt/virt-handler:devel
Untagged: localhost:32847/kubevirt/virt-handler@sha256:9ea4c192122d42008eb29298b5bd50ff3f6bfdd74d8234b43b1e24d3ef74ea72
Deleted: sha256:9f92cc6ceae52576deb25ac3b0a5f877a4ce22edc15d3357a1957aef8dd98be0
Deleted: sha256:58ac47a0cb584c064d01fc72681463d455c4d67758b06dbf2ada9513365d2c3a
Deleted: sha256:16e90b3afca428222a02ace5affe29991055b9a8375105fa192aa9f9c0d255bf
Deleted: sha256:89cb1141efc9f62981ab9379473f772066a84a8404a886ab31ce452eb1c2accc
Untagged: localhost:32847/kubevirt/virt-api:devel
Untagged: localhost:32847/kubevirt/virt-api@sha256:a1f2bf1e9b8fe417706afb479b785509c3db8ea990684692795ad3bb0db0f110
Deleted: sha256:9f20370362c5fb5cd71535cea57f96ce531cdfa5ab002fb9103859abd8ddca6f
Deleted: sha256:e7bfa130a8d3433aedaf9ab85cd048b809a8274f88c41a0b60d24cd0fab7c9a5
Deleted: sha256:e889f2f776d804d0fb29c3c20476bf7b7f7881be48036f682b56ae7d8d7d6df9
Deleted: sha256:30804c47da2e7476b72a83c8478c34370d4215e09891fc50922d3411af07c498
Untagged: localhost:32847/kubevirt/subresource-access-test:devel
Untagged: localhost:32847/kubevirt/subresource-access-test@sha256:7e1fb4c8e2bebafce469a31f835a468e5e129763ede405af886af749f7475528
Deleted: sha256:8239d35d437a908f3c0c47477c36443cf524ae13841be8e643bc997947f70be1
Deleted: sha256:77d00723f07a190244b5b48f3c5c208d4d87a952728a4d15b428a91af86aeeff
Deleted: sha256:583161a34b65ca076905f64bc7a53c9f13a92cef6d15f3198672cb63414cc367
Deleted: sha256:dc147fb1984887310814ef29195e7a5748584b79d53d60f0fc49518f79ef3f27
Untagged: localhost:32847/kubevirt/example-hook-sidecar:devel
Untagged: localhost:32847/kubevirt/example-hook-sidecar@sha256:41006d9b4c5138810c6c6462f889611d810a07290914b238793d33ac1dcbfea7
Deleted: sha256:486d1e45a4cdcf92a8f8da9a799da484564221a3a8f6b2a0d2716841d174739d
Deleted: sha256:56f67686d20b97c49a3175fc73207e6e311c9f9576ebc8843aa0489a3616737b
Deleted: sha256:fd25fc03fabc46c6a3dfef5c26128805904687c9ff85cc69fbd6048518417602
Deleted: sha256:5f4d73e2fcf7b5ab2ca4ad802791e7b3fa27d5fb3fa2d9329b950965d4a2176a
sha256:8314c812ee3200233db076e79036b39759d30fc3dd7fe921b323b19b14306fd6
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:8314c812ee3200233db076e79036b39759d30fc3dd7fe921b323b19b14306fd6
go version go1.10 linux/amd64
go version go1.10 linux/amd64
find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory
Compiling tests...
compiled tests.test
hack/build-docker.sh build
Sending build context to Docker daemon 38.81 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 2405aa62579a
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller
 ---> Using cache
 ---> 1ac62e99a9e7
Step 4/8 : WORKDIR /home/virt-controller
 ---> Using cache
 ---> c7b69424a0c5
Step 5/8 : USER 1001
 ---> Using cache
 ---> e60ed5d8e78a
Step 6/8 : COPY virt-controller /usr/bin/virt-controller
 ---> 8aca3cea85d4
Removing intermediate container cd29df62d6ac
Step 7/8 : ENTRYPOINT /usr/bin/virt-controller
 ---> Running in df83784831be
 ---> be966dd0777e
Removing intermediate container df83784831be
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-controller" ''
 ---> Running in aa92edc72e13
 ---> 12147e32596f
Removing intermediate container aa92edc72e13
Successfully built 12147e32596f
Sending build context to Docker daemon 41.03 MB
Step 1/10 : FROM kubevirt/libvirt:4.2.0
 ---> 5f0bfe81a3e0
Step 2/10 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 65f548d54a2e
Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107
 ---> Using cache
 ---> 04ae26de19c4
Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher
 ---> 53f23bd4eece
Removing intermediate container 4687428b74d0
Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt
 ---> 4a029a36f923
Removing intermediate container a7ec4716ee02
Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64
 ---> Running in 7dfaff593ec8
 ---> 36583d4974d3
Removing intermediate container 7dfaff593ec8
Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher
 ---> Running in b7e54f57db8b
 ---> 730a8c1e94bb
Removing intermediate container b7e54f57db8b
Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector /usr/share/kubevirt/virt-launcher/
 ---> bba5127a81dc
Removing intermediate container 484973dd4290
Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh
 ---> Running in e9370203c112
 ---> 1f45e0191f5a
Removing intermediate container e9370203c112
Step 10/10 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-launcher" ''
 ---> Running in 1ef2e2fb3960
 ---> 2ca21aa12149
Removing intermediate container 1ef2e2fb3960
Successfully built 2ca21aa12149
Sending build context to Docker daemon 40.1 MB
Step 1/5 : FROM fedora:28
 ---> cc510acfcd70
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 2405aa62579a
Step 3/5 : COPY virt-handler /usr/bin/virt-handler
 ---> 7359d98545b9
Removing intermediate container 3c29b99fbd4a
Step 4/5 : ENTRYPOINT /usr/bin/virt-handler
 ---> Running in e41f370097b4
 ---> 7738c8d3cb31
Removing intermediate container e41f370097b4
Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-handler" ''
 ---> Running in deba50c846bd
 ---> 12eeae8eb316
Removing intermediate container deba50c846bd
Successfully built 12eeae8eb316
Sending build context to Docker daemon 37.03 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 2405aa62579a
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api
 ---> Using cache
 ---> 830d77e8a3bb
Step 4/8 : WORKDIR /home/virt-api
 ---> Using cache
 ---> 7075b0c3cdfd
Step 5/8 : USER 1001
 ---> Using cache
 ---> 4e21374fdc1d
Step 6/8 : COPY virt-api /usr/bin/virt-api
 ---> 89376e702839
Removing intermediate container cc9bf706c1fd
Step 7/8 : ENTRYPOINT /usr/bin/virt-api
 ---> Running in c1e5fef0087b
 ---> c5117ae51fa7
Removing intermediate container c1e5fef0087b
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-api" ''
 ---> Running in 5e1d9d5b1b01
 ---> 0502e4ac8746
Removing intermediate container 5e1d9d5b1b01
Successfully built 0502e4ac8746
Sending build context to Docker daemon 4.096 kB
Step 1/7 : FROM fedora:28
 ---> cc510acfcd70
Step 2/7 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 2405aa62579a
Step 3/7 : ENV container docker
 ---> Using cache
 ---> 3370e25ee81a
Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img
 ---> Using cache
 ---> 3f571283fdaa
Step 5/7 : ADD entrypoint.sh /
 ---> Using cache
 ---> 2722b024d103
Step 6/7 : CMD /entrypoint.sh
 ---> Using cache
 ---> 8458081a089b
Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.10.3-release1" ''
 ---> Using cache
 ---> e53dc9d18b84
Successfully built e53dc9d18b84
Sending build context to Docker daemon 2.56 kB
Step 1/5 : FROM fedora:28
 ---> cc510acfcd70
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 2405aa62579a
Step 3/5 : ENV container docker
 ---> Using cache
 ---> 3370e25ee81a
Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all
 ---> Using cache
 ---> 006e94a74def
Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "vm-killer" ''
 ---> Using cache
 ---> abbd7f88954e
Successfully built abbd7f88954e
Sending build context to Docker daemon 5.12 kB
Step 1/7 : FROM debian:sid
 ---> 496290160351
Step 2/7 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> 081acc82039b
Step 3/7 : ENV container docker
 ---> Using cache
 ---> 87a43203841c
Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> bbc83781e0a9
Step 5/7 : ADD entry-point.sh /
 ---> Using cache
 ---> c588d7a778a6
Step 6/7 : CMD /entry-point.sh
 ---> Using cache
 ---> e28b44b64988
Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "registry-disk-v1alpha" ''
 ---> Using cache
 ---> 7904d685350a
Successfully built 7904d685350a
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33238/kubevirt/registry-disk-v1alpha:devel
 ---> 7904d685350a
Step 2/4 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> 863f57f04ecf
Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img
 ---> Using cache
 ---> 66ab826322ff
Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release1" ''
 ---> Using cache
 ---> f470c166f252
Successfully built f470c166f252
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33238/kubevirt/registry-disk-v1alpha:devel
 ---> 7904d685350a
Step 2/4 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 29da26bd5e2f
Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2
 ---> Using cache
 ---> 3c93f3a815e8
Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release1" ''
 ---> Using cache
 ---> 701cba55fca9
Successfully built 701cba55fca9
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33238/kubevirt/registry-disk-v1alpha:devel
 ---> 7904d685350a
Step 2/4 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 29da26bd5e2f
Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso
 ---> Using cache
 ---> 28dda1ba8041
Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release1" ''
 ---> Using cache
 ---> f1870c9233d2
Successfully built f1870c9233d2
Sending build context to Docker daemon 34.04 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 2405aa62579a
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl
 ---> Using cache
 ---> 939ec18dc9a4
Step 4/8 : WORKDIR /home/virtctl
 ---> Using cache
 ---> 52b6bf037d32
Step 5/8 : USER 1001
 ---> Using cache
 ---> 1e1560e0af32
Step 6/8 : COPY subresource-access-test /subresource-access-test
 ---> 64c55dea7a5a
Removing intermediate container 979d77e6738e
Step 7/8 : ENTRYPOINT /subresource-access-test
 ---> Running in 6e55c1d9eaff
 ---> dd7fff40d5ec
Removing intermediate container 6e55c1d9eaff
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "subresource-access-test" ''
 ---> Running in c5932119a317
 ---> 0726baf9d430
Removing intermediate container c5932119a317
Successfully built 0726baf9d430
Sending build context to Docker daemon 3.072 kB
Step 1/9 : FROM fedora:28
 ---> cc510acfcd70
Step 2/9 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 2405aa62579a
Step 3/9 : ENV container docker
 ---> Using cache
 ---> 3370e25ee81a
Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all
 ---> Using cache
 ---> 3129352c97b1
Step 5/9 : ENV GIMME_GO_VERSION 1.9.2
 ---> Using cache
 ---> fbcd5a15f974
Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> 6e560dc836a0
Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> 8a916bbc2352
Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli
 ---> Using cache
 ---> 72d00ac082db
Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "winrmcli" ''
 ---> Using cache
 ---> 58d0eb28f968
Successfully built 58d0eb28f968
Sending build context to Docker daemon 35.17 MB
Step 1/5 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 0ae71e3c9e56
Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar
 ---> c9e50d7b350d
Removing intermediate container 0ae27026451c
Step 4/5 : ENTRYPOINT /example-hook-sidecar
 ---> Running in 82724200b691
 ---> 298212fed83e
Removing intermediate container 82724200b691
Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-k8s-1.10.3-release1" ''
 ---> Running in 08cd4a350640
 ---> f2a8b54d32f2
Removing intermediate container 08cd4a350640
Successfully built f2a8b54d32f2
hack/build-docker.sh push
The push refers to a repository [localhost:33238/kubevirt/virt-controller]
724d6df7660a: Preparing
d07058c760ad: Preparing
891e1e4ef82a: Preparing
d07058c760ad: Pushed
724d6df7660a: Pushed
891e1e4ef82a: Pushed
devel: digest: sha256:dfab0a4604b53e9241ea95456f5ccf3834e9323ce32a7355e2e763369ba03adb size: 949
The push refers to a repository [localhost:33238/kubevirt/virt-launcher]
9272d844bc9a: Preparing
6b7fe12ff821: Preparing
f7ead428a519: Preparing
b076e8bc2439: Preparing
69134330f624: Preparing
53f12636d41e: Preparing
da38cf808aa5: Preparing
b83399358a92: Preparing
186d8b3e4fd8: Preparing
fa6154170bf5: Preparing
69134330f624: Waiting
53f12636d41e: Waiting
5eefb9960a36: Preparing
da38cf808aa5: Waiting
891e1e4ef82a: Preparing
5eefb9960a36: Waiting
fa6154170bf5: Waiting
186d8b3e4fd8: Waiting
891e1e4ef82a: Waiting
6b7fe12ff821: Pushed
9272d844bc9a: Pushed
b076e8bc2439: Pushed
da38cf808aa5: Pushed
b83399358a92: Pushed
186d8b3e4fd8: Pushed
fa6154170bf5: Pushed
f7ead428a519: Pushed
891e1e4ef82a: Mounted from kubevirt/virt-controller
53f12636d41e: Pushed
69134330f624: Pushed
5eefb9960a36: Pushed
devel: digest: sha256:fc8e0ce52810a5009af648e94cbfead50fafd11aea834ab9bd511255f9058b09 size: 2828
The push refers to a repository [localhost:33238/kubevirt/virt-handler]
cbf0980feae6: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-launcher
cbf0980feae6: Pushed
devel: digest: sha256:fb63d6d03b4d1b730e3874f9a200bca737514eebfb45bc6dea10d134f31cd83e size: 741
The push refers to a repository [localhost:33238/kubevirt/virt-api]
5367a12493f4: Preparing
25755ffecaf3: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-handler
25755ffecaf3: Pushed
5367a12493f4: Pushed
devel: digest: sha256:3b9c90003d1f571d4e25d8513fbf01a52b5d3409767c58a377b361837e5f10a0 size: 948
The push refers to a repository [localhost:33238/kubevirt/disks-images-provider]
5ffe52947a94: Preparing
a1bc751fc8a2: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-api
5ffe52947a94: Pushed
a1bc751fc8a2: Pushed
devel: digest: sha256:c2d0765598980ca6f1d1a5736ee22c605ea53da81e606e3eed24801e4ca4e13a size: 948
The push refers to a repository [localhost:33238/kubevirt/vm-killer]
3a82b543c335: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/disks-images-provider
3a82b543c335: Pushed
devel: digest: sha256:254cd3950a5f4bd34a623cf38213d09255a5c7cd6ddef30f207740ab1034bdff size: 740
The push refers to a repository [localhost:33238/kubevirt/registry-disk-v1alpha]
cb3d1019d03e: Preparing
626899eeec02: Preparing
132d61a890c5: Preparing
cb3d1019d03e: Pushed
626899eeec02: Pushed
132d61a890c5: Pushed
devel: digest: sha256:17321355a332ba2c8d32a9fddec1d8f86776a684226731c6fc1ddc8f109e77ba size: 948
The push refers to a repository [localhost:33238/kubevirt/cirros-registry-disk-demo]
67b3e3a78aef: Preparing
cb3d1019d03e: Preparing
626899eeec02: Preparing
132d61a890c5: Preparing
132d61a890c5: Mounted from kubevirt/registry-disk-v1alpha
cb3d1019d03e: Mounted from kubevirt/registry-disk-v1alpha
626899eeec02: Mounted from kubevirt/registry-disk-v1alpha
67b3e3a78aef: Pushed
devel: digest: sha256:e1dbbbf9e41aaa09cc9268b9aecd61e0ed5ab869157f1cba3fbfa2147e67fddf size: 1160
The push refers to a repository [localhost:33238/kubevirt/fedora-cloud-registry-disk-demo]
767fd9937737: Preparing
cb3d1019d03e: Preparing
626899eeec02: Preparing
132d61a890c5: Preparing
132d61a890c5: Mounted from kubevirt/cirros-registry-disk-demo
626899eeec02: Mounted from kubevirt/cirros-registry-disk-demo
cb3d1019d03e: Mounted from kubevirt/cirros-registry-disk-demo
767fd9937737: Pushed
devel: digest: sha256:00d24cd97552e1b5bfd0a63b90fdbbed5ee12619fe1fe627abfaee4a8b9223b2 size: 1161
The push refers to a repository [localhost:33238/kubevirt/alpine-registry-disk-demo]
894e337ab44f: Preparing
cb3d1019d03e: Preparing
626899eeec02: Preparing
132d61a890c5: Preparing
cb3d1019d03e: Mounted from kubevirt/fedora-cloud-registry-disk-demo
132d61a890c5: Mounted from kubevirt/fedora-cloud-registry-disk-demo
626899eeec02: Mounted from kubevirt/fedora-cloud-registry-disk-demo
894e337ab44f: Pushed
devel: digest: sha256:98ba0aa164e04e71e38ea4950bce7be0359551f9f4edad3be0824edf38c6b52d size: 1160
The push refers to a repository [localhost:33238/kubevirt/subresource-access-test]
eaacc0365b3e: Preparing
5c35b999e0e4: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/vm-killer
5c35b999e0e4: Pushed
eaacc0365b3e: Pushed
devel: digest: sha256:41b27ad23ce22135f50a695d231e3a693850526955bb9ea2832b3cb3c52c42c1 size: 948
The push refers to a repository [localhost:33238/kubevirt/winrmcli]
d8f4160f7568: Preparing
b34315236250: Preparing
b4a3c0429828: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/subresource-access-test
d8f4160f7568: Pushed
b4a3c0429828: Pushed
b34315236250: Pushed
devel: digest: sha256:3cc62df1ec90a099c629bcc3708704a96a49df1e8cbfde7228fa8bf803b7a57e size: 1165
The push refers to a repository [localhost:33238/kubevirt/example-hook-sidecar]
fd9cab1d8a39: Preparing
39bae602f753: Preparing
fd9cab1d8a39: Pushed
39bae602f753: Pushed
devel: digest: sha256:ba622ac926a69e23dc1337e249e1f2fdcb92a54fee24194b199a10a43a799ba3 size: 740
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt'
Done
./cluster/clean.sh
+ source hack/common.sh
++++ dirname 'hack/common.sh[0]'
+++ cd hack/../
+++ pwd
++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt
++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out
++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor
++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd
++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests
++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs
++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests
++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests
++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python
++ KUBEVIRT_PROVIDER=k8s-1.10.3
++ KUBEVIRT_NUM_NODES=2
++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']'
++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release1
++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release1
+++ kubevirt_version
+++ '[' -n '' ']'
+++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']'
++++ git describe --always --tags
+++ echo v0.7.0-66-gc7e8a1b
++ KUBEVIRT_VERSION=v0.7.0-66-gc7e8a1b
+ source cluster/k8s-1.10.3/provider.sh
++ set -e
++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357
++ source cluster/ephemeral-provider-common.sh
+++ set -e
+++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a'
+ source hack/config.sh
++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace
++ KUBEVIRT_PROVIDER=k8s-1.10.3
++ KUBEVIRT_PROVIDER=k8s-1.10.3
++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh
+++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar'
+++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar'
+++ docker_prefix=kubevirt
+++ docker_tag=latest
+++ master_ip=192.168.200.2
+++ network_provider=flannel
+++ namespace=kube-system
++ test -f hack/config-provider-k8s-1.10.3.sh
++ source hack/config-provider-k8s-1.10.3.sh
+++ master_ip=127.0.0.1
+++ docker_tag=devel
+++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig
+++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl
+++ docker_prefix=localhost:33238/kubevirt
+++ manifest_docker_prefix=registry:5000/kubevirt
++ test -f hack/config-local.sh
++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace
+ echo 'Cleaning up ...'
Cleaning up ...
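The cleanup trace that follows repeats the same `delete -l kubevirt.io` over a fixed list of resource types in each namespace. A condensed sketch of that sweep, with `_kubectl` standing in for the provider-pinned kubectl wrapper seen in the log (the type list mirrors the deletions below, duplicates removed):

```shell
# Condensed sketch of ./cluster/clean.sh's per-namespace sweep.
# `_kubectl` stands in for the provider-pinned kubectl wrapper.
cleanup_kubevirt() {
  local ns t
  local namespaces=(default kube-system)
  local types=(apiservices deployment rs services validatingwebhookconfiguration
               secrets pv pvc ds customresourcedefinitions pods
               clusterrolebinding rolebinding roles clusterroles serviceaccounts)
  for ns in "${namespaces[@]}"; do
    for t in "${types[@]}"; do
      # Delete every KubeVirt-labelled object of this type in this namespace.
      _kubectl -n "$ns" delete "$t" -l kubevirt.io
    done
  done
}
```

On a freshly provisioned cluster every iteration simply prints `No resources found`, which is exactly what the trace below shows.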
+ cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers
+ grep foregroundDeleteVirtualMachine
+ read p
error: the server doesn't have a resource type "vmis"
+ _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0
No resources found
+ _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0
No resources found
+ _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0
No resources found
+ namespaces=(default ${namespace})
+ for i in '${namespaces[@]}'
+ _kubectl -n default delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n default delete deployment -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete deployment -l kubevirt.io
No resources found
+ _kubectl -n default delete rs -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete rs -l kubevirt.io
No resources found
+ _kubectl -n default delete services -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete services -l kubevirt.io
No resources found
+ _kubectl -n default delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io
No resources found
+ _kubectl -n default delete secrets -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete secrets -l kubevirt.io
No resources found
+ _kubectl -n default delete pv -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete pv -l kubevirt.io
No resources found
+ _kubectl -n default delete pvc -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete pvc -l kubevirt.io
No resources found
+ _kubectl -n default delete ds -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete ds -l kubevirt.io
No resources found
+ _kubectl -n default delete customresourcedefinitions -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io
No resources found
+ _kubectl -n default delete pods -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete pods -l kubevirt.io
No resources found
+ _kubectl -n default delete clusterrolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io
No resources found
+ _kubectl -n default delete rolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete rolebinding -l kubevirt.io
No resources found
+ _kubectl -n default delete roles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete roles -l kubevirt.io
No resources found
+ _kubectl -n default delete clusterroles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete clusterroles -l kubevirt.io
No resources found
+ _kubectl -n default delete serviceaccounts -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete serviceaccounts -l kubevirt.io
No resources found
++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io
++ wc -l
++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
++ cluster/k8s-1.10.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found
+ '[' 0 -gt 0 ']'
+ for i in '${namespaces[@]}'
+ _kubectl -n kube-system delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete deployment -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete deployment -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete rs -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete rs -l kubevirt.io
No resources found
+
_kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete customresourcedefinitions -l 
kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done 
./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release1 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-66-gc7e8a1b ++ KUBEVIRT_VERSION=v0.7.0-66-gc7e8a1b + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host 
--rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:33238/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... 
+ [[ -z k8s-1.10.3-release ]] + [[ k8s-1.10.3-release =~ .*-dev ]] + [[ k8s-1.10.3-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole.rbac.authorization.k8s.io "kubevirt.io:admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:edit" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:view" created serviceaccount "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver-auth-delegator" created rolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created role.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller-cluster-admin" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-privileged-cluster-admin" created 
clusterrole.rbac.authorization.k8s.io "kubevirt.io:default" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt.io:default" created service "virt-api" created deployment.extensions "virt-api" created deployment.extensions "virt-controller" created daemonset.extensions "virt-handler" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstances.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancereplicasets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancepresets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachines.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "host-path-disk-alpine" created persistentvolumeclaim "disk-custom" created persistentvolume "host-path-disk-custom" created daemonset.extensions "disks-images-provider" created serviceaccount "kubevirt-testing" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-testing-cluster-admin" created + [[ k8s-1.10.3 =~ os-* ]] + echo Done Done + namespaces=(kube-system default) + [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]] + timeout=300 + sample=30 + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'virt-api-7d79764579-gb9bw 0/1 ContainerCreating 0 3s virt-api-7d79764579-w2776 0/1 ContainerCreating 0 3s virt-controller-7d57d96b65-m92tx 0/1 ContainerCreating 0 3s virt-controller-7d57d96b65-mwzlh 0/1 
ContainerCreating 0 3s virt-handler-x8j2m 0/1 ContainerCreating 0 3s virt-handler-zhg92 0/1 ContainerCreating 0 3s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running disks-images-provider-bzw9c 0/1 ContainerCreating 0 0s disks-images-provider-z56nl 0/1 Pending 0 0s virt-api-7d79764579-gb9bw 0/1 ContainerCreating 0 4s virt-api-7d79764579-w2776 0/1 ContainerCreating 0 4s virt-controller-7d57d96b65-m92tx 0/1 ContainerCreating 0 4s virt-controller-7d57d96b65-mwzlh 0/1 ContainerCreating 0 4s virt-handler-x8j2m 0/1 ContainerCreating 0 4s virt-handler-zhg92 0/1 ContainerCreating 0 4s + sleep 30 + current_time=30 + '[' 30 -gt 300 ']' ++ kubectl get pods -n kube-system --no-headers ++ grep -v Running ++ cluster/kubectl.sh get pods -n kube-system --no-headers + '[' -n 'virt-api-7d79764579-gb9bw 0/1 Error 1 38s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... 
+ kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running virt-api-7d79764579-gb9bw 0/1 CrashLoopBackOff 1 39s + sleep 30 + current_time=60 + '[' 60 -gt 300 ']' ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n '' ']' + current_time=0 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n '' ']' + kubectl get pods -n kube-system + cluster/kubectl.sh get pods -n kube-system NAME READY STATUS RESTARTS AGE disks-images-provider-bzw9c 1/1 Running 0 1m disks-images-provider-z56nl 1/1 Running 0 1m etcd-node01 1/1 Running 0 14m kube-apiserver-node01 1/1 Running 0 14m kube-controller-manager-node01 1/1 Running 0 14m kube-dns-86f4d74b45-8m7qw 3/3 Running 0 14m kube-flannel-ds-7l5tn 1/1 Running 0 14m kube-flannel-ds-s7pg7 1/1 Running 0 14m kube-proxy-2btgv 1/1 Running 0 14m kube-proxy-qx6b6 1/1 Running 0 14m kube-scheduler-node01 1/1 Running 0 14m virt-api-7d79764579-gb9bw 1/1 Running 2 1m virt-api-7d79764579-w2776 1/1 Running 0 1m virt-controller-7d57d96b65-m92tx 1/1 Running 0 1m virt-controller-7d57d96b65-mwzlh 1/1 Running 0 1m virt-handler-x8j2m 1/1 Running 0 1m virt-handler-zhg92 1/1 Running 0 1m + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n default --no-headers ++ cluster/kubectl.sh get pods -n default --no-headers ++ grep -v Running No resources found. 
+ '[' -n '' ']' + current_time=0 ++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false ++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n '' ']' + kubectl get pods -n default + cluster/kubectl.sh get pods -n default No resources found. + kubectl version + cluster/kubectl.sh version Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} + ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml' + [[ -d /home/nfs/images/windows2016 ]] + [[ k8s-1.10.3-release =~ windows.* ]] + FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml' + make functest hack/dockerized "hack/build-func-tests.sh" sha256:8314c812ee3200233db076e79036b39759d30fc3dd7fe921b323b19b14306fd6 go version go1.10 linux/amd64 go version go1.10 linux/amd64 Compiling tests... 
compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1531987068 Will run 141 of 141 specs • ------------------------------ • [SLOW TEST:17.868 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 should start it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:76 ------------------------------ • [SLOW TEST:19.957 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 should attach virt-launcher to it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:82 ------------------------------ •••• ------------------------------ • [SLOW TEST:33.861 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:170 should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 Alpine as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:29.066 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:170 should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 Cirros as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 
------------------------------ • [SLOW TEST:16.789 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:201 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:202 should retry starting the VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:203 ------------------------------ • [SLOW TEST:18.656 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:201 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:202 should log warning and proceed once the secret is there /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:233 ------------------------------ • [SLOW TEST:36.117 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 when virt-launcher crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:281 should be stopped and have Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:282 ------------------------------ • [SLOW TEST:26.125 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 when virt-handler crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:304 should recover and continue management /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:305 ------------------------------ • [SLOW TEST:6.364 seconds] VMIlifecycle 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 when virt-handler is responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:335 should indicate that a node is ready for vmis /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:336 ------------------------------ • [SLOW TEST:122.795 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 when virt-handler is not responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:366 the node controller should react /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:405 ------------------------------ S [SKIPPING] [0.284 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:458 should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 kubevirt-test-default [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Skip log query tests for JENKINS ci test environment /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:463 ------------------------------ S [SKIPPING] [0.187 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:458 should log libvirt start and stop lifecycle events of the domain 
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 kubevirt-test-alternative [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Skip log query tests for JENKINS ci test environment /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:463 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.189 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:519 should enable emulation in virt-launcher [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:531 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:527 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.121 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:519 should be reflected in domain XML [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:568 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:527 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.231 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:519 should request a TUN device but not KVM [BeforeEach] 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:612 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:527 ------------------------------ •••• ------------------------------ • [SLOW TEST:20.428 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Delete a VirtualMachineInstance's Pod /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:764 should result in the VirtualMachineInstance moving to a finalized state /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:765 ------------------------------ • [SLOW TEST:21.438 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:796 with an active pod. /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:797 should result in pod being terminated /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:798 ------------------------------ • [SLOW TEST:22.850 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:796 with grace period greater than 0 /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:821 should run graceful shutdown /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:822 ------------------------------ • [SLOW TEST:31.492 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Killed VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:873 should be in Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:874 ------------------------------ • [SLOW TEST:27.644 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Killed VirtualMachineInstance 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:873 should be left alone by virt-handler /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:901 ------------------------------ • [SLOW TEST:93.697 seconds] Slirp /root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:39 should be able to /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 VirtualMachineInstance with slirp interface /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •• ------------------------------ • [SLOW TEST:7.616 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should scale /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 to five, to six and then to zero replicas /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •• ------------------------------ • [SLOW TEST:18.919 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should update readyReplicas once VMIs are up /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:157 ------------------------------ •• ------------------------------ • [SLOW TEST:5.689 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should not scale when paused and scale when resume /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:223 ------------------------------ • ------------------------------ • [SLOW TEST:20.384 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should successfully start with hook sidecar annotation 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:60
------------------------------
• [SLOW TEST:18.849 seconds]
HookSidecars
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40
  VMI definition
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58
    with SM BIOS hook sidecar
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59
      should call Collect and OnDefineDomain on the hook sidecar
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:67
------------------------------
• [SLOW TEST:19.613 seconds]
HookSidecars
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40
  VMI definition
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58
    with SM BIOS hook sidecar
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59
      should update domain XML with SM BIOS properties
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:83
------------------------------
•
------------------------------
• [SLOW TEST:49.438 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
      with a cirros image
      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
        should return that we are running cirros
        /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67
------------------------------
• [SLOW TEST:53.244 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
      with a fedora image
      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:76
        should return that we are running fedora
        /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:77
------------------------------
• [SLOW TEST:36.603 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
      should be able to reconnect to console multiple times
      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:86
------------------------------
•
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.020 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  should succeed to start a vmi [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133

  Skip Windows tests that requires PVC disk-windows

  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.012 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  should succeed to stop a running vmi [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139

  Skip Windows tests that requires PVC disk-windows

  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.017 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with winrm connection [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150
    should have correct UUID
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192

    Skip Windows tests that requires PVC disk-windows

    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.042 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with winrm connection [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150
    should have pod IP
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208

    Skip Windows tests that requires PVC disk-windows

    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.072 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with kubectl command [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
    should succeed to start a vmi
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242

    Skip Windows tests that requires PVC disk-windows

    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.017 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with kubectl command [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
    should succeed to stop a vmi
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250

    Skip Windows tests that requires PVC disk-windows

    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
Templates
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template
    /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      should succeed to generate a VM JSON file using oc-process command
      /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:150

      Skip test that requires oc binary

      /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1383
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
Templates
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template
    /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      with given VM JSON from the Template
      /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152
        should succeed to create a VM using oc-create command
        /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:156

        Skip test that requires oc binary

        /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1383
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
Templates
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template
    /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      with given VM JSON from the Template
      /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152
        with given VM from the VM JSON
        /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158
          should succeed to launch a VMI using oc-patch command
          /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:161

          Skip test that requires oc binary

          /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1383
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
Templates
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template
    /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      with given VM JSON from the Template
      /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152
        with given VM from the VM JSON
        /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158
          with given VMI from the VM
          /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:163
            should succeed to terminate the VMI using oc-patch command
            /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:166

            Skip test that requires oc binary

            /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1383
------------------------------
Service cluster-ip-vm successfully exposed for virtualmachineinstance testvmiwmlrv
• [SLOW TEST:51.067 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on a VM
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61
    Expose ClusterIP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:68
      Should expose a Cluster IP service on a VM and connect to it
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:71
------------------------------
Service node-port-vm successfully exposed for virtualmachineinstance testvmiwmlrv
• [SLOW TEST:9.412 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on a VM
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61
    Expose NodePort service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:98
      Should expose a NodePort service on a VM and connect to it
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:103
------------------------------
Service cluster-ip-udp-vm successfully exposed for virtualmachineinstance testvmixrdzm
• [SLOW TEST:48.635 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose UDP service on a VM
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:140
    Expose ClusterIP UDP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:147
      Should expose a ClusterIP service on a VM and connect to it
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:151
------------------------------
Service node-port-udp-vm successfully exposed for virtualmachineinstance testvmixrdzm
• [SLOW TEST:8.518 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose UDP service on a VM
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:140
    Expose NodePort UDP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:179
      Should expose a NodePort service on a VM and connect to it
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:184
------------------------------
Service cluster-ip-vmrs successfully exposed for vmirs replicaset6zljs
• [SLOW TEST:64.884 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on a VM replica set
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:227
    Expose ClusterIP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:260
      Should create a ClusterIP service on VMRS and connect to it
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:264
------------------------------
Service cluster-ip-ovm successfully exposed for virtualmachine testvmi46zdt
• [SLOW TEST:48.411 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on an Offline VM
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:292
    Expose ClusterIP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:336
      Connect to ClusterIP services that was set when VM was offline
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:337
------------------------------
• [SLOW TEST:33.760 seconds]
LeaderElection
/root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:43
  Start a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:53
    when the controller pod is not running
    /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:54
      should success
      /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:55
------------------------------
volumedisk0
compute
• [SLOW TEST:34.799 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44
  VirtualMachineInstance definition
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55
    with 3 CPU cores
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:56
      should report 3 cpu cores under guest OS
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:62
------------------------------
• [SLOW TEST:17.931 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44
  VirtualMachineInstance definition
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55
    with hugepages
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:108
      should consume hugepages
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        hugepages-2Mi
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
S [SKIPPING] [0.250 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44
  VirtualMachineInstance definition
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55
    with hugepages
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:108
      should consume hugepages
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        hugepages-1Gi [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

        No node with hugepages hugepages-1Gi capacity

        /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:160
------------------------------
•
------------------------------
• [SLOW TEST:78.459 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44
  with CPU spec
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238
    when CPU model defined
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:277
      should report defined CPU model
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:278
------------------------------
• [SLOW TEST:79.301 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44
  with CPU spec
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238
    when CPU model not defined
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:305
      should report CPU model from libvirt capabilities
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:306
------------------------------
• [SLOW TEST:44.063 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44
  New VirtualMachineInstance with all supported drives
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:326
    should have all the device nodes
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:349
------------------------------
•
------------------------------
• [SLOW TEST:5.571 seconds]
Subresource Api
/root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37
  Rbac Authorization
  /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48
    Without permissions
    /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:56
      should not be able to access subresource endpoint
      /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:57
------------------------------
••
------------------------------
• [SLOW TEST:17.112 seconds]
VNC
/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:54
    with VNC connection
    /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:62
      should allow accessing the VNC device
      /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:64
------------------------------
•••••••••••••
------------------------------
• [SLOW TEST:147.987 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting and stopping the same VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90
    with ephemeral registry disk
    /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91
      should success multiple times
      /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92
------------------------------
• [SLOW TEST:18.163 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111
    with ephemeral registry disk
    /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112
      should not modify the spec on status update
      /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113
------------------------------
• [SLOW TEST:35.942 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting multiple VMIs
  /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129
    with ephemeral registry disk
    /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130
      should success
      /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131
------------------------------
••
------------------------------
• [SLOW TEST:18.483 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should update VirtualMachine once VMIs are up
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:195
------------------------------
• [SLOW TEST:6.317 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should remove VirtualMachineInstance once the VMI is marked for deletion
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:204
------------------------------
•
------------------------------
• [SLOW TEST:27.821 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should recreate VirtualMachineInstance if it gets deleted
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:245
------------------------------
• [SLOW TEST:97.849 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should recreate VirtualMachineInstance if the VirtualMachineInstance's pod gets deleted
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:265
------------------------------
• [SLOW TEST:42.411 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should stop VirtualMachineInstance if running set to false
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:325
------------------------------
• [SLOW TEST:179.288 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should start and stop VirtualMachineInstance multiple times
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:333
------------------------------
• [SLOW TEST:77.582 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should not update the VirtualMachineInstance spec if Running
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:346
------------------------------
• [SLOW TEST:343.259 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should survive guest shutdown, multiple times
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:387
------------------------------
• [SLOW TEST:17.810 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    Using virtctl interface
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435
      should start a VirtualMachineInstance once
      /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:436
------------------------------
• [SLOW TEST:99.706 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    Using virtctl interface
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435
      should stop a VirtualMachineInstance once
      /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:467
------------------------------
• [SLOW TEST:35.127 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with Disk PVC
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:31.939 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with CDRom PVC
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:103.237 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started and stopped multiple times
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with Disk PVC
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:131.608 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started and stopped multiple times
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with CDRom PVC
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:48.269 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With an emptyDisk defined
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:113
      should create a writeable emptyDisk with the right capacity
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:115
------------------------------
• [SLOW TEST:51.098 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With an emptyDisk defined and a specified serial number
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:163
      should create a writeable emptyDisk with the specified serial number
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:165
------------------------------
• [SLOW TEST:33.917 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With ephemeral alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205
      should be successfully started
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:207
------------------------------
• [SLOW TEST:80.790 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With ephemeral alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205
      should not persist data
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:218
------------------------------
• [SLOW TEST:116.279 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With VirtualMachineInstance with two PVCs
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:266
      should start vmi multiple times
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:278
------------------------------
• [SLOW TEST:47.479 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80
    with cloudInitNoCloud userDataBase64 source
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81
      should have cloud-init data
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:82
------------------------------
• [SLOW TEST:95.829 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80
    with cloudInitNoCloud userDataBase64 source
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81
      with injected ssh-key
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:92
        should have ssh-key under authorized keys
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:93
------------------------------
• [SLOW TEST:55.133 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80
    with cloudInitNoCloud userData source
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:118
      should process provided cloud-init data
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:119
------------------------------
• [SLOW TEST:44.701 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80
    should take user-data from k8s secret
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:162
------------------------------
• [SLOW TEST:14.729 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:13.875 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given an vm
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:13.865 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi preset
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:14.019 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi replica set
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
Pod name: disks-images-provider-bzw9c
Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-z56nl
Pod phase: Running
Pod name: virt-api-7d79764579-gb9bw
Pod phase: Running
Pod name: virt-api-7d79764579-w2776
Pod phase: Running
level=info timestamp=2018-07-19T09:12:24.818828Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:12:34 http: TLS handshake error from 10.244.0.1:55240: EOF
level=info timestamp=2018-07-19T09:12:40.343095Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:12:40.361188Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:12:44 http: TLS handshake error from 10.244.0.1:55270: EOF
level=info timestamp=2018-07-19T09:12:47.919376Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:12:54 http: TLS handshake error from 10.244.0.1:55300: EOF
2018/07/19 09:13:04 http: TLS handshake error from 10.244.0.1:55330: EOF
level=info timestamp=2018-07-19T09:13:10.519493Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:13:10.522239Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:13:14 http: TLS handshake error from 10.244.0.1:55360: EOF
level=info timestamp=2018-07-19T09:13:17.848818Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:13:24 http: TLS handshake error from 10.244.0.1:55390: EOF
level=info timestamp=2018-07-19T09:13:24.825428Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-19T09:13:24.829817Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
Pod name: virt-controller-7d57d96b65-5h64g
Pod phase: Running
level=info timestamp=2018-07-19T08:15:04.601876Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
level=info timestamp=2018-07-19T08:55:22.759066Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer"
level=info timestamp=2018-07-19T08:55:22.769684Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer"
level=info timestamp=2018-07-19T08:55:22.769793Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmiPresetInformer"
level=info timestamp=2018-07-19T08:55:22.769842Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmirsInformer"
level=info timestamp=2018-07-19T08:55:22.769885Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer configMapInformer"
level=info timestamp=2018-07-19T08:55:22.769926Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmInformer"
level=info timestamp=2018-07-19T08:55:22.769972Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmiInformer"
level=info timestamp=2018-07-19T08:55:22.773000Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller."
level=info timestamp=2018-07-19T08:55:22.807082Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller."
level=info timestamp=2018-07-19T08:55:22.809478Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller."
level=info timestamp=2018-07-19T08:55:22.809664Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller."
level=info timestamp=2018-07-19T08:55:22.809793Z pos=preset.go:71 component=virt-controller service=http msg="Starting Virtual Machine Initializer."
Pod name: virt-controller-7d57d96b65-mwzlh
Pod phase: Running
Pod name: virt-handler-dj88w
Pod phase: Running
Pod name: virt-handler-zjzfz
Pod phase: Running
level=info timestamp=2018-07-19T08:24:14.234557Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-19T08:24:14.248689Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T08:24:14.477227Z pos=vm.go:342 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp."
level=info timestamp=2018-07-19T08:24:14.479164Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Processing shutdown."
level=info timestamp=2018-07-19T08:24:14.482440Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="grace period expired, killing deleted VirtualMachineInstance testvmifgfbl" level=info timestamp=2018-07-19T08:24:14.705847Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-19T08:24:14.707117Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-19T08:24:14.705824Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T08:24:14.710692Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-19T08:24:14.711637Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T08:24:14.729092Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-19T08:24:15.203045Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-19T08:24:15.204488Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-19T08:24:15.205931Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T08:24:15.206134Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."

Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running

• Failure [1141.960 seconds]
Health Monitoring
/root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:37
  A VirtualMachineInstance with a watchdog device
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:56
    should be shut down when the watchdog expires [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:57

    Expected error:
        : 180000000000
        expect: timer expired after 180 seconds
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:64
------------------------------
STEP: Starting a VirtualMachineInstance
level=info timestamp=2018-07-19T08:54:40.921343Z pos=utils.go:243 component=tests msg="Created virtual machine pod virt-launcher-testvmiwmvcc-jz6cl"
level=info timestamp=2018-07-19T08:54:56.947084Z pos=utils.go:243 component=tests msg="Pod owner ship transferred to the node virt-launcher-testvmiwmvcc-jz6cl"
level=info timestamp=2018-07-19T08:54:58.773192Z pos=utils.go:243 component=tests msg="VirtualMachineInstance defined."
level=info timestamp=2018-07-19T08:54:58.802220Z pos=utils.go:243 component=tests msg="VirtualMachineInstance started."
STEP: Expecting the VirtualMachineInstance console
level=info timestamp=2018-07-19T08:57:58.967893Z pos=utils.go:1249 component=tests namespace=kubevirt-test-default name=testvmiwmvcc kind=VirtualMachineInstance uid= msg="Login: [{2 \r\n\r\n\r\nISOLINUX 6.04 6.04-pre1 Copyright (C) 1994-2015 H. Peter Anvin et al\r\nboot: \u001b[?7h\r\n []}]"

Pod name: disks-images-provider-bzw9c
Pod phase: Running
copy all images to host mount directory

Pod name: disks-images-provider-z56nl
Pod phase: Running

Pod name: virt-api-7d79764579-gb9bw
Pod phase: Running

Pod name: virt-api-7d79764579-msdkp
Pod phase: Running
2018/07/19 09:11:59 http: TLS handshake error from 10.244.0.1:53342: EOF
2018/07/19 09:12:09 http: TLS handshake error from 10.244.0.1:53372: EOF
2018/07/19 09:12:19 http: TLS handshake error from 10.244.0.1:53402: EOF
2018/07/19 09:12:29 http: TLS handshake error from 10.244.0.1:53432: EOF
2018/07/19 09:12:39 http: TLS handshake error from 10.244.0.1:53462: EOF
2018/07/19 09:12:49 http: TLS handshake error from 10.244.0.1:53492: EOF
2018/07/19 09:12:59 http: TLS handshake error from 10.244.0.1:53522: EOF
2018/07/19 09:13:09 http: TLS handshake error from 10.244.0.1:53552: EOF
2018/07/19 09:13:19 http: TLS handshake error from 10.244.0.1:53582: EOF
2018/07/19 09:13:29 http: TLS handshake error from 10.244.0.1:53614: EOF
2018/07/19 09:13:39 http: TLS handshake error from 10.244.0.1:53650: EOF
2018/07/19 09:13:49 http: TLS handshake error from 10.244.0.1:53682: EOF
2018/07/19 09:13:59 http: TLS handshake error from 10.244.0.1:53712: EOF
2018/07/19 09:14:09 http: TLS handshake error from 10.244.0.1:53742: EOF
2018/07/19 09:14:19 http: TLS handshake error from 10.244.0.1:53776: EOF

Pod name: virt-api-7d79764579-w2776
Pod phase: Running
level=info timestamp=2018-07-19T09:13:17.848818Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:13:24 http: TLS handshake error from 10.244.0.1:55390: EOF
level=info timestamp=2018-07-19T09:13:24.825428Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-19T09:13:24.829817Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:13:34 http: TLS handshake error from 10.244.0.1:55426: EOF
level=info timestamp=2018-07-19T09:13:40.697426Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:13:40.699716Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:13:44 http: TLS handshake error from 10.244.0.1:55460: EOF
level=info timestamp=2018-07-19T09:13:47.857976Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:13:54 http: TLS handshake error from 10.244.0.1:55490: EOF
2018/07/19 09:14:04 http: TLS handshake error from 10.244.0.1:55520: EOF
level=info timestamp=2018-07-19T09:14:10.865198Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:14:10.867879Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:14:14 http: TLS handshake error from 10.244.0.1:55552: EOF
level=info timestamp=2018-07-19T09:14:17.955714Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19

Pod name: virt-controller-7d57d96b65-5h64g
Pod phase: Running
level=info timestamp=2018-07-19T08:15:04.601876Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
level=info timestamp=2018-07-19T08:55:22.759066Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer"
level=info timestamp=2018-07-19T08:55:22.769684Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer"
level=info timestamp=2018-07-19T08:55:22.769793Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmiPresetInformer"
level=info timestamp=2018-07-19T08:55:22.769842Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmirsInformer"
level=info timestamp=2018-07-19T08:55:22.769885Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer configMapInformer"
level=info timestamp=2018-07-19T08:55:22.769926Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmInformer"
level=info timestamp=2018-07-19T08:55:22.769972Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmiInformer"
level=info timestamp=2018-07-19T08:55:22.773000Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller."
level=info timestamp=2018-07-19T08:55:22.807082Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller."
level=info timestamp=2018-07-19T08:55:22.809478Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller."
level=info timestamp=2018-07-19T08:55:22.809664Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller."
level=info timestamp=2018-07-19T08:55:22.809793Z pos=preset.go:71 component=virt-controller service=http msg="Starting Virtual Machine Initializer."
level=info timestamp=2018-07-19T09:14:12.966871Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmis6jxk kind= uid=085837b7-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:14:12.976025Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmis6jxk kind= uid=085837b7-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"

Pod name: virt-controller-7d57d96b65-mwzlh
Pod phase: Running

Pod name: virt-controller-7d57d96b65-nc8jl
Pod phase: Running
level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182

Pod name: virt-handler-dj88w
Pod phase: Running

Pod name: virt-handler-zjzfz
Pod phase: Running
level=info timestamp=2018-07-19T08:24:14.234557Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-19T08:24:14.248689Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T08:24:14.477227Z pos=vm.go:342 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp."
level=info timestamp=2018-07-19T08:24:14.479164Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Processing shutdown."
level=info timestamp=2018-07-19T08:24:15.205931Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T08:24:15.206134Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."

Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running

• Failure in Spec Setup (BeforeEach) [45.411 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  should be able to reach [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    the Inbound VirtualMachineInstance
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

    Expected error:
        <*errors.StatusError | 0xc42072de60>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146
------------------------------

Pod name: disks-images-provider-bzw9c
Pod phase: Running
copy all images to host mount directory

Pod name: disks-images-provider-z56nl
Pod phase: Running

Pod name: virt-api-7d79764579-gb9bw
Pod phase: Running

Pod name: virt-api-7d79764579-msdkp
Pod phase: Running
2018/07/19 09:12:39 http: TLS handshake error from 10.244.0.1:53462: EOF
2018/07/19 09:12:49 http: TLS handshake error from 10.244.0.1:53492: EOF
2018/07/19 09:12:59 http: TLS handshake error from 10.244.0.1:53522: EOF
2018/07/19 09:13:09 http: TLS handshake error from 10.244.0.1:53552: EOF
2018/07/19 09:13:19 http: TLS handshake error from 10.244.0.1:53582: EOF
2018/07/19 09:13:29 http: TLS handshake error from 10.244.0.1:53614: EOF
2018/07/19 09:13:39 http: TLS handshake error from 10.244.0.1:53650: EOF
2018/07/19 09:13:49 http: TLS handshake error from 10.244.0.1:53682: EOF
2018/07/19 09:13:59 http: TLS handshake error from 10.244.0.1:53712: EOF
2018/07/19 09:14:09 http: TLS handshake error from 10.244.0.1:53742: EOF
2018/07/19 09:14:19 http: TLS handshake error from 10.244.0.1:53776: EOF
2018/07/19 09:14:29 http: TLS handshake error from 10.244.0.1:53812: EOF
2018/07/19 09:14:39 http: TLS handshake error from 10.244.0.1:53842: EOF
2018/07/19 09:14:49 http: TLS handshake error from 10.244.0.1:53872: EOF
2018/07/19 09:14:59 http: TLS handshake error from 10.244.0.1:53904: EOF

Pod name: virt-api-7d79764579-w2776
Pod phase: Running
level=info timestamp=2018-07-19T09:13:47.857976Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:13:54 http: TLS handshake error from 10.244.0.1:55490: EOF
2018/07/19 09:14:04 http: TLS handshake error from 10.244.0.1:55520: EOF
level=info timestamp=2018-07-19T09:14:10.865198Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:14:10.867879Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:14:14 http: TLS handshake error from 10.244.0.1:55552: EOF
level=info timestamp=2018-07-19T09:14:17.955714Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:14:24 http: TLS handshake error from 10.244.0.1:55588: EOF
2018/07/19 09:14:34 http: TLS handshake error from 10.244.0.1:55620: EOF
level=info timestamp=2018-07-19T09:14:41.031790Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:14:41.032463Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:14:44 http: TLS handshake error from 10.244.0.1:55650: EOF
level=info timestamp=2018-07-19T09:14:47.859928Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:14:54 http: TLS handshake error from 10.244.0.1:55680: EOF
2018/07/19 09:15:04 http: TLS handshake error from 10.244.0.1:55714: EOF

Pod name: virt-controller-7d57d96b65-5h64g
Pod phase: Running
level=info timestamp=2018-07-19T08:55:22.769793Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmiPresetInformer"
level=info timestamp=2018-07-19T08:55:22.769842Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmirsInformer"
level=info timestamp=2018-07-19T08:55:22.769885Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer configMapInformer"
level=info timestamp=2018-07-19T09:14:12.966871Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmis6jxk kind= uid=085837b7-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:14:12.976025Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmis6jxk kind= uid=085837b7-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:14:58.414119Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvminp4cm kind= uid=236d643f-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:14:58.417633Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvminp4cm kind= uid=236d643f-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:14:58.798147Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminp4cm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminp4cm"

Pod name: virt-controller-7d57d96b65-mwzlh
Pod phase: Running

Pod name: virt-controller-7d57d96b65-nc8jl
Pod phase: Running
level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182

Pod name: virt-handler-dj88w
Pod phase: Running

Pod name: virt-handler-zjzfz
Pod phase: Running
level=info timestamp=2018-07-19T08:24:14.234557Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-19T08:24:14.248689Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T08:24:14.477227Z pos=vm.go:342 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp."
level=info timestamp=2018-07-19T08:24:14.479164Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Processing shutdown."
level=info timestamp=2018-07-19T08:24:14.482440Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="grace period expired, killing deleted VirtualMachineInstance testvmifgfbl"
level=info timestamp=2018-07-19T08:24:14.705847Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T08:24:14.707117Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-19T08:24:14.705824Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T08:24:14.710692Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T08:24:14.711637Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvminp4cm-xhvqx
Pod phase: Running
level=info timestamp=2018-07-19T09:15:03.011351Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:15:03.011687Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:15:03.014777Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"

Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running

• Failure in Spec Setup (BeforeEach) [45.474 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  should be able to reach [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    the Inbound VirtualMachineInstance with pod network connectivity explicitly set
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

    Expected error:
        <*errors.StatusError | 0xc42085fef0>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146
------------------------------

Pod name: disks-images-provider-bzw9c
Pod phase: Running
copy all images to host mount directory

Pod name: disks-images-provider-z56nl
Pod phase: Running

Pod name: virt-api-7d79764579-gb9bw
Pod phase: Running

Pod name: virt-api-7d79764579-msdkp
Pod phase: Running
2018/07/19 09:13:29 http: TLS handshake error from 10.244.0.1:53614: EOF
2018/07/19 09:13:39 http: TLS handshake error from 10.244.0.1:53650: EOF
2018/07/19 09:13:49 http: TLS handshake error from 10.244.0.1:53682: EOF
2018/07/19 09:13:59 http: TLS handshake error from 10.244.0.1:53712: EOF
2018/07/19 09:14:09 http: TLS handshake error from 10.244.0.1:53742: EOF
2018/07/19 09:14:19 http: TLS handshake error from 10.244.0.1:53776: EOF
2018/07/19 09:14:29 http: TLS handshake error from 10.244.0.1:53812: EOF
2018/07/19 09:14:39 http: TLS handshake error from 10.244.0.1:53842: EOF
2018/07/19 09:14:49 http: TLS handshake error from 10.244.0.1:53872: EOF
2018/07/19 09:14:59 http: TLS handshake error from 10.244.0.1:53904: EOF
2018/07/19 09:15:09 http: TLS handshake error from 10.244.0.1:53940: EOF
2018/07/19 09:15:19 http: TLS handshake error from 10.244.0.1:53972: EOF
2018/07/19 09:15:29 http: TLS handshake error from 10.244.0.1:54002: EOF
2018/07/19 09:15:39 http: TLS handshake error from 10.244.0.1:54032: EOF
2018/07/19 09:15:49 http: TLS handshake error from 10.244.0.1:54066: EOF

Pod name: virt-api-7d79764579-w2776
Pod phase: Running
level=info timestamp=2018-07-19T09:14:47.859928Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:14:54 http: TLS handshake error from 10.244.0.1:55680: EOF
2018/07/19 09:15:04 http: TLS handshake error from 10.244.0.1:55714: EOF
level=info timestamp=2018-07-19T09:15:11.112635Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:15:11.165632Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:15:14 http: TLS handshake error from 10.244.0.1:55750: EOF
level=info timestamp=2018-07-19T09:15:17.886516Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:15:24 http: TLS handshake error from 10.244.0.1:55780: EOF
level=info timestamp=2018-07-19T09:15:24.835696Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-19T09:15:24.839714Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:15:34 http: TLS handshake error from 10.244.0.1:55810: EOF
level=info timestamp=2018-07-19T09:15:41.292890Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:15:41.328844Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:15:44 http: TLS handshake error from 10.244.0.1:55842: EOF
level=info timestamp=2018-07-19T09:15:47.903084Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19

Pod name: virt-controller-7d57d96b65-5h64g
Pod phase: Running
level=info timestamp=2018-07-19T08:55:22.769926Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmInformer"
level=info timestamp=2018-07-19T08:55:22.769972Z pos=virtinformers.go:104 component=virt-controller service=http msg="STARTING informer vmiInformer"
level=info timestamp=2018-07-19T08:55:22.773000Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller."
level=info timestamp=2018-07-19T08:55:22.807082Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller."
level=info timestamp=2018-07-19T08:55:22.809478Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller."
level=info timestamp=2018-07-19T08:55:22.809664Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller."
level=info timestamp=2018-07-19T08:55:22.809793Z pos=preset.go:71 component=virt-controller service=http msg="Starting Virtual Machine Initializer."
level=info timestamp=2018-07-19T09:14:12.966871Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmis6jxk kind= uid=085837b7-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:14:12.976025Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmis6jxk kind= uid=085837b7-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:14:58.414119Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvminp4cm kind= uid=236d643f-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:14:58.417633Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvminp4cm kind= uid=236d643f-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:14:58.798147Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminp4cm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminp4cm"
level=info timestamp=2018-07-19T09:15:43.875545Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirvxtn kind= uid=3e86ca4d-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:15:43.886661Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirvxtn kind= uid=3e86ca4d-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:15:44.256149Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmirvxtn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmirvxtn"

Pod name: virt-controller-7d57d96b65-mwzlh
Pod phase: Running

Pod name: virt-controller-7d57d96b65-nc8jl
Pod phase: Running
level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182

Pod name: virt-handler-dj88w
Pod phase: Running

Pod name: virt-handler-zjzfz
Pod phase: Running
level=info timestamp=2018-07-19T08:24:14.234557Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-19T08:24:14.248689Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T08:24:14.477227Z pos=vm.go:342 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp."
level=info timestamp=2018-07-19T08:24:14.479164Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Processing shutdown."
level=info timestamp=2018-07-19T08:24:14.482440Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="grace period expired, killing deleted VirtualMachineInstance testvmifgfbl"
level=info timestamp=2018-07-19T08:24:14.705847Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T08:24:14.707117Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-19T08:24:14.705824Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T08:24:14.710692Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T08:24:14.711637Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T08:24:14.729092Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T08:24:15.203045Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T08:24:15.204488Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T08:24:15.205931Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T08:24:15.206134Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvminp4cm-xhvqx
Pod phase: Failed
level=info timestamp=2018-07-19T09:15:03.011351Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:15:03.011687Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:15:03.014777Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
level=info timestamp=2018-07-19T09:15:13.033055Z pos=libvirt.go:271 component=virt-launcher msg="Connected to libvirt daemon"
level=info timestamp=2018-07-19T09:15:13.093775Z pos=virt-launcher.go:143 component=virt-launcher msg="Watchdog file created at /var/run/kubevirt/watchdog-files/kubevirt-test-default_testvminp4cm"
level=info timestamp=2018-07-19T09:15:13.096545Z pos=client.go:152 component=virt-launcher msg="Registered libvirt event notify callback"
level=info timestamp=2018-07-19T09:15:13.097190Z pos=virt-launcher.go:60 component=virt-launcher msg="Marked as ready"
caught signal
virt-launcher exited with code 127
Pod name: virt-launcher-testvmirvxtn-zc9kw
Pod phase: Running
level=info timestamp=2018-07-19T09:15:48.412915Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:15:48.413114Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:15:48.414824Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running

• Failure in Spec Setup (BeforeEach) [45.548 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  should be able to reach [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    the Inbound VirtualMachineInstance with custom MAC address
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

    Expected error:
        <*errors.StatusError | 0xc420140a20>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146
------------------------------
Pod name: disks-images-provider-bzw9c
Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-z56nl
Pod phase: Running
Pod name: virt-api-7d79764579-gb9bw
Pod phase: Running
Pod name: virt-api-7d79764579-msdkp
Pod phase: Running
2018/07/19 09:14:09 http: TLS handshake error from 10.244.0.1:53742: EOF
2018/07/19 09:14:19 http: TLS handshake error from 10.244.0.1:53776: EOF
2018/07/19 09:14:29 http: TLS handshake error from 10.244.0.1:53812: EOF
2018/07/19 09:14:39 http: TLS handshake error from 10.244.0.1:53842: EOF
2018/07/19 09:14:49 http: TLS handshake error from 10.244.0.1:53872: EOF
2018/07/19 09:14:59 http: TLS handshake error from 10.244.0.1:53904: EOF
2018/07/19 09:15:09 http: TLS handshake error from 10.244.0.1:53940: EOF
2018/07/19 09:15:19 http: TLS handshake error from 10.244.0.1:53972: EOF
2018/07/19 09:15:29 http: TLS handshake error from 10.244.0.1:54002: EOF
2018/07/19 09:15:39 http: TLS handshake error from 10.244.0.1:54032: EOF
2018/07/19 09:15:49 http: TLS handshake error from 10.244.0.1:54066: EOF
2018/07/19 09:15:59 http: TLS handshake error from 10.244.0.1:54102: EOF
2018/07/19 09:16:09 http: TLS handshake error from 10.244.0.1:54132: EOF
2018/07/19 09:16:19 http: TLS handshake error from 10.244.0.1:54162: EOF
2018/07/19 09:16:29 http: TLS handshake error from 10.244.0.1:54192: EOF
Pod name: virt-api-7d79764579-w2776
Pod phase: Running
level=info timestamp=2018-07-19T09:15:24.835696Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-19T09:15:24.839714Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:15:34 http: TLS handshake error from 10.244.0.1:55810: EOF
level=info timestamp=2018-07-19T09:15:41.292890Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:15:41.328844Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:15:44 http: TLS handshake error from 10.244.0.1:55842: EOF
level=info timestamp=2018-07-19T09:15:47.903084Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:15:54 http: TLS handshake error from 10.244.0.1:55878: EOF
2018/07/19 09:16:04 http: TLS handshake error from 10.244.0.1:55910: EOF
level=info timestamp=2018-07-19T09:16:11.508859Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:16:11.511709Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:16:14 http: TLS handshake error from 10.244.0.1:55940: EOF
level=info timestamp=2018-07-19T09:16:17.814110Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:16:24 http: TLS handshake error from 10.244.0.1:55970: EOF
2018/07/19 09:16:34 http: TLS handshake error from 10.244.0.1:56004: EOF
Pod name: virt-controller-7d57d96b65-5h64g
Pod phase: Running
level=info timestamp=2018-07-19T08:55:22.807082Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller."
level=info timestamp=2018-07-19T08:55:22.809478Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller."
level=info timestamp=2018-07-19T08:55:22.809664Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller."
level=info timestamp=2018-07-19T08:55:22.809793Z pos=preset.go:71 component=virt-controller service=http msg="Starting Virtual Machine Initializer."
level=info timestamp=2018-07-19T09:14:12.966871Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmis6jxk kind= uid=085837b7-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:14:12.976025Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmis6jxk kind= uid=085837b7-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:14:58.414119Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvminp4cm kind= uid=236d643f-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:14:58.417633Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvminp4cm kind= uid=236d643f-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:14:58.798147Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminp4cm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminp4cm"
level=info timestamp=2018-07-19T09:15:43.875545Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirvxtn kind= uid=3e86ca4d-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:15:43.886661Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirvxtn kind= uid=3e86ca4d-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:15:44.256149Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmirvxtn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmirvxtn"
level=info timestamp=2018-07-19T09:16:29.463563Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibp8cz kind= uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:16:29.466972Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibp8cz kind= uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:16:29.674681Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmibp8cz\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmibp8cz"
Pod name: virt-controller-7d57d96b65-mwzlh
Pod phase: Running
Pod name: virt-controller-7d57d96b65-nc8jl
Pod phase: Running
level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
Pod name: virt-handler-dj88w
Pod phase: Running
Pod name: virt-handler-zjzfz
Pod phase: Running
level=info timestamp=2018-07-19T08:24:14.234557Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-19T08:24:14.248689Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T08:24:14.477227Z pos=vm.go:342 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp."
level=info timestamp=2018-07-19T08:24:14.479164Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Processing shutdown."
level=info timestamp=2018-07-19T08:24:14.482440Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="grace period expired, killing deleted VirtualMachineInstance testvmifgfbl"
level=info timestamp=2018-07-19T08:24:14.705847Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T08:24:14.707117Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-19T08:24:14.705824Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind= uid=140cb140-8b2d-11e8-9619-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T08:24:14.710692Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T08:24:14.711637Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T08:24:14.729092Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T08:24:15.203045Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T08:24:15.204488Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T08:24:15.205931Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T08:24:15.206134Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmifgfbl kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvmibp8cz-r6z4l
Pod phase: Running
level=info timestamp=2018-07-19T09:16:34.269700Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:16:34.269925Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:16:34.272594Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
Pod name: virt-launcher-testvmirvxtn-zc9kw
Pod phase: Pending
level=info timestamp=2018-07-19T09:15:48.412915Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:15:48.413114Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:15:48.414824Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
level=info timestamp=2018-07-19T09:15:58.422519Z pos=libvirt.go:271 component=virt-launcher msg="Connected to libvirt daemon"
level=info timestamp=2018-07-19T09:15:58.479098Z pos=virt-launcher.go:143 component=virt-launcher msg="Watchdog file created at /var/run/kubevirt/watchdog-files/kubevirt-test-default_testvmirvxtn"
level=info timestamp=2018-07-19T09:15:58.481097Z pos=client.go:152 component=virt-launcher msg="Registered libvirt event notify callback"
level=info timestamp=2018-07-19T09:15:58.481660Z pos=virt-launcher.go:60 component=virt-launcher msg="Marked as ready"
caught signal
virt-launcher exited with code 127
Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running

• Failure in Spec Setup (BeforeEach) [45.577 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  should be able to reach [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    the internet
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

    Expected error:
        <*errors.StatusError | 0xc42085f290>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146
------------------------------
Pod name: disks-images-provider-bzw9c
Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-z56nl
Pod phase: Running
Pod name: virt-api-7d79764579-gb9bw
Pod phase: Running
Pod name: virt-api-7d79764579-msdkp
Pod phase: Running
2018/07/19 09:14:59 http: TLS handshake error from 10.244.0.1:53904: EOF
2018/07/19 09:15:09 http: TLS handshake error from 10.244.0.1:53940: EOF
2018/07/19 09:15:19 http: TLS handshake error from 10.244.0.1:53972: EOF
2018/07/19 09:15:29 http: TLS handshake error from 10.244.0.1:54002: EOF
2018/07/19 09:15:39 http: TLS handshake error from 10.244.0.1:54032: EOF
2018/07/19 09:15:49 http: TLS handshake error from 10.244.0.1:54066: EOF
2018/07/19 09:15:59 http: TLS handshake error from 10.244.0.1:54102: EOF
2018/07/19 09:16:09 http: TLS handshake error from 10.244.0.1:54132: EOF
2018/07/19 09:16:19 http: TLS handshake error from 10.244.0.1:54162: EOF
2018/07/19 09:16:29 http: TLS handshake error from 10.244.0.1:54192: EOF
2018/07/19 09:16:39 http: TLS handshake error from 10.244.0.1:54230: EOF
2018/07/19 09:16:49 http: TLS handshake error from 10.244.0.1:54262: EOF
2018/07/19 09:16:59 http: TLS handshake error from 10.244.0.1:54292: EOF
2018/07/19 09:17:09 http: TLS handshake error from 10.244.0.1:54322: EOF
2018/07/19 09:17:19 http: TLS handshake error from 10.244.0.1:54356: EOF
Pod name: virt-api-7d79764579-w2776
Pod phase: Running
level=info timestamp=2018-07-19T09:16:11.511709Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:16:14 http: TLS handshake error from 10.244.0.1:55940: EOF
level=info timestamp=2018-07-19T09:16:17.814110Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:16:24 http: TLS handshake error from 10.244.0.1:55970: EOF
2018/07/19 09:16:34 http: TLS handshake error from 10.244.0.1:56004: EOF
level=info timestamp=2018-07-19T09:16:41.667229Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:16:41.688387Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:16:44 http: TLS handshake error from 10.244.0.1:56040: EOF
level=info timestamp=2018-07-19T09:16:47.850495Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:16:54 http: TLS handshake error from 10.244.0.1:56070: EOF
2018/07/19 09:17:04 http: TLS handshake error from 10.244.0.1:56100: EOF
level=info timestamp=2018-07-19T09:17:11.844477Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:17:11.852144Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:17:14 http: TLS handshake error from 10.244.0.1:56130: EOF
level=info timestamp=2018-07-19T09:17:17.962639Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
Pod name: virt-controller-7d57d96b65-5h64g
Pod phase: Running
level=info timestamp=2018-07-19T08:55:22.809664Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller."
level=info timestamp=2018-07-19T08:55:22.809793Z pos=preset.go:71 component=virt-controller service=http msg="Starting Virtual Machine Initializer."
level=info timestamp=2018-07-19T09:14:12.966871Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmis6jxk kind= uid=085837b7-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:14:12.976025Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmis6jxk kind= uid=085837b7-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:14:58.414119Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvminp4cm kind= uid=236d643f-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:14:58.417633Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvminp4cm kind= uid=236d643f-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:14:58.798147Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminp4cm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminp4cm"
level=info timestamp=2018-07-19T09:15:43.875545Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirvxtn kind= uid=3e86ca4d-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:15:43.886661Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirvxtn kind= uid=3e86ca4d-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:15:44.256149Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmirvxtn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmirvxtn"
level=info timestamp=2018-07-19T09:16:29.463563Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibp8cz kind= uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:16:29.466972Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibp8cz kind= uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:16:29.674681Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmibp8cz\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmibp8cz"
level=info timestamp=2018-07-19T09:17:14.938104Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmif8p7s kind= uid=74cd570e-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:17:14.949664Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmif8p7s kind= uid=74cd570e-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
Pod name: virt-controller-7d57d96b65-mwzlh
Pod phase: Running
Pod name: virt-controller-7d57d96b65-nc8jl
Pod phase: Running
level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
Pod name: virt-handler-dj88w
Pod phase: Running
Pod name: virt-handler-zjzfz
Pod phase: Running
level=info timestamp=2018-07-19T09:16:46.132141Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-19T09:16:46.133385Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=Domain uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Domain is in state Running reason Unknown"
level=info timestamp=2018-07-19T09:16:46.158782Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-19T09:16:46.163802Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind= uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:16:46.164216Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object."
level=info timestamp=2018-07-19T09:16:46.165500Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-19T09:16:46.168022Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="grace period expired, killing deleted VirtualMachineInstance testvmibp8cz"
level=info timestamp=2018-07-19T09:16:46.379645Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:16:46.380795Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:16:46.381043Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-19T09:16:46.381214Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T09:16:46.381448Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:16:46.387027Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:16:46.387598Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T09:16:46.388241Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvmibp8cz-r6z4l
Pod phase: Succeeded
Unable to retrieve container logs for docker://e04ec121b0b04020ff695c80b52f8f1a6e6dfadca0b0bf55e32525c17e893236
Pod name: virt-launcher-testvmif8p7s-6hfmh
Pod phase: Running
level=info timestamp=2018-07-19T09:17:20.143512Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:17:20.144092Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:17:20.146414Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running

• Failure in Spec Setup (BeforeEach) [44.695 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  should be reachable via the propagated IP from a Pod [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    on the same node from Pod
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

    Expected error:
        <*errors.StatusError | 0xc4206a42d0>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146
------------------------------
Pod name: disks-images-provider-bzw9c
Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-z56nl
Pod phase: Running
Pod name: virt-api-7d79764579-gb9bw
Pod phase: Running
Pod name: virt-api-7d79764579-msdkp
Pod phase: Running
2018/07/19 09:15:39 http: TLS handshake error from 10.244.0.1:54032: EOF
2018/07/19 09:15:49 http: TLS handshake error from 10.244.0.1:54066: EOF
2018/07/19 09:15:59 http: TLS handshake error from 10.244.0.1:54102: EOF
2018/07/19 09:16:09 http: TLS handshake error from 10.244.0.1:54132: EOF
2018/07/19 09:16:19 http: TLS handshake error from 10.244.0.1:54162: EOF
2018/07/19 09:16:29 http: TLS handshake error from 10.244.0.1:54192: EOF
2018/07/19 09:16:39 http: TLS handshake error from 10.244.0.1:54230: EOF
2018/07/19 09:16:49 http: TLS handshake error from 10.244.0.1:54262: EOF
2018/07/19 09:16:59 http: TLS handshake error from 10.244.0.1:54292: EOF
2018/07/19 09:17:09 http: TLS handshake error from 10.244.0.1:54322: EOF
2018/07/19 09:17:19 http: TLS handshake error from 10.244.0.1:54356: EOF
2018/07/19 09:17:29 http: TLS handshake error from 10.244.0.1:54392: EOF
2018/07/19 09:17:39 http: TLS handshake error from 10.244.0.1:54422: EOF
2018/07/19 09:17:49 http: TLS handshake error from 10.244.0.1:54452: EOF
2018/07/19 09:17:59 http: TLS handshake error from 10.244.0.1:54482: EOF
Pod name: virt-api-7d79764579-w2776
Pod phase: Running
2018/07/19 09:17:04 http: TLS handshake error from 10.244.0.1:56100: EOF
level=info timestamp=2018-07-19T09:17:11.844477Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:17:11.852144Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:17:14 http: TLS handshake error from 10.244.0.1:56130: EOF
level=info timestamp=2018-07-19T09:17:17.962639Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:17:24 http: TLS handshake error from 10.244.0.1:56168: EOF
level=info timestamp=2018-07-19T09:17:25.644982Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-19T09:17:25.646568Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:17:34 http: TLS handshake error from 10.244.0.1:56200: EOF
level=info timestamp=2018-07-19T09:17:42.048721Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:17:42.048780Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:17:44 http: TLS handshake error from 10.244.0.1:56230: EOF
level=info timestamp=2018-07-19T09:17:47.908498Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:17:54 http: TLS handshake error from 10.244.0.1:56260: EOF
2018/07/19 09:18:04 http: TLS handshake error from 10.244.0.1:56294: EOF
Pod name: virt-controller-7d57d96b65-5h64g
Pod phase: Running
level=info timestamp=2018-07-19T09:14:12.966871Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmis6jxk kind= uid=085837b7-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:14:12.976025Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmis6jxk kind= uid=085837b7-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:14:58.414119Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvminp4cm kind= uid=236d643f-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:14:58.417633Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvminp4cm kind= uid=236d643f-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:14:58.798147Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminp4cm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminp4cm"
level=info timestamp=2018-07-19T09:15:43.875545Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirvxtn kind= uid=3e86ca4d-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:15:43.886661Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirvxtn kind= uid=3e86ca4d-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:15:44.256149Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmirvxtn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmirvxtn"
level=info timestamp=2018-07-19T09:16:29.463563Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibp8cz kind= uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:16:29.466972Z pos=preset.go:165 component=virt-controller service=http
namespace=kubevirt-test-default name=testvmibp8cz kind= uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T09:16:29.674681Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmibp8cz\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmibp8cz" level=info timestamp=2018-07-19T09:17:14.938104Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmif8p7s kind= uid=74cd570e-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T09:17:14.949664Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmif8p7s kind= uid=74cd570e-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T09:17:59.816380Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi995hz kind= uid=8f82f455-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T09:17:59.822747Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi995hz kind= uid=8f82f455-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-7d57d96b65-mwzlh Pod phase: Running Pod name: virt-controller-7d57d96b65-nc8jl Pod phase: Running level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-dj88w Pod phase: Running Pod name: virt-handler-zjzfz Pod phase: Running level=info timestamp=2018-07-19T09:16:46.132141Z pos=server.go:75 component=virt-handler msg="Received 
Domain Event of type MODIFIED" level=info timestamp=2018-07-19T09:16:46.133385Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=Domain uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-19T09:16:46.158782Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-19T09:16:46.163802Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind= uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T09:16:46.164216Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object." level=info timestamp=2018-07-19T09:16:46.165500Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Processing shutdown." level=info timestamp=2018-07-19T09:16:46.168022Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="grace period expired, killing deleted VirtualMachineInstance testvmibp8cz" level=info timestamp=2018-07-19T09:16:46.379645Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-19T09:16:46.380795Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-19T09:16:46.381043Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-19T09:16:46.381214Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-19T09:16:46.381448Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T09:16:46.387027Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-19T09:16:46.387598Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-19T09:16:46.388241Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
Pod name: virt-launcher-testvmi995hz-5z6k5
Pod phase: Running
level=info timestamp=2018-07-19T09:18:03.729986Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:18:03.730225Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:18:03.732184Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"

Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running

• Failure in Spec Setup (BeforeEach) [45.560 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  should be reachable via the propagated IP from a Pod [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    on a different node from Pod
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

    Expected error:
        <*errors.StatusError | 0xc420d7c120>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146
------------------------------

Pod name: disks-images-provider-bzw9c
Pod phase: Running
copy all images to host mount directory

Pod name: disks-images-provider-z56nl
Pod phase: Running

Pod name: virt-api-7d79764579-gb9bw
Pod phase: Running

Pod name: virt-api-7d79764579-msdkp
Pod phase: Running
2018/07/19 09:16:29 http: TLS handshake error from 10.244.0.1:54192: EOF
2018/07/19 09:16:39 http: TLS handshake error from 10.244.0.1:54230: EOF
2018/07/19 09:16:49 http: TLS handshake error from 10.244.0.1:54262: EOF
2018/07/19 09:16:59 http: TLS handshake error from 10.244.0.1:54292: EOF
2018/07/19 09:17:09 http: TLS handshake error from 10.244.0.1:54322: EOF
2018/07/19 09:17:19 http: TLS handshake error from 10.244.0.1:54356: EOF
2018/07/19 09:17:29 http: TLS handshake error from 10.244.0.1:54392: EOF
2018/07/19 09:17:39 http: TLS handshake error from 10.244.0.1:54422: EOF
2018/07/19 09:17:49 http: TLS handshake error from 10.244.0.1:54452: EOF
2018/07/19 09:17:59 http: TLS handshake error from 10.244.0.1:54482: EOF
2018/07/19 09:18:09 http: TLS handshake error from 10.244.0.1:54518: EOF
2018/07/19 09:18:19 http: TLS handshake error from 10.244.0.1:54552: EOF
2018/07/19 09:18:29 http: TLS handshake error from 10.244.0.1:54582: EOF
2018/07/19 09:18:39 http: TLS handshake error from 10.244.0.1:54612: EOF
2018/07/19 09:18:49 http: TLS handshake error from 10.244.0.1:54646: EOF

Pod name: virt-api-7d79764579-w2776
Pod phase: Running
level=info timestamp=2018-07-19T09:17:47.908498Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:17:54 http: TLS handshake error from 10.244.0.1:56260: EOF
2018/07/19 09:18:04 http: TLS handshake error from 10.244.0.1:56294: EOF
level=info timestamp=2018-07-19T09:18:12.196730Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:18:12.271116Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:18:14 http: TLS handshake error from 10.244.0.1:56330: EOF
level=info timestamp=2018-07-19T09:18:17.881589Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:18:24 http: TLS handshake error from 10.244.0.1:56360: EOF
level=info timestamp=2018-07-19T09:18:25.649725Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-19T09:18:25.652886Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:18:34 http: TLS handshake error from 10.244.0.1:56390: EOF
level=info timestamp=2018-07-19T09:18:42.376153Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:18:42.438580Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:18:44 http: TLS handshake error from 10.244.0.1:56420: EOF
level=info timestamp=2018-07-19T09:18:47.825531Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19

Pod name: virt-controller-7d57d96b65-5h64g
Pod phase: Running
level=info timestamp=2018-07-19T09:14:58.417633Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvminp4cm kind= uid=236d643f-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:14:58.798147Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminp4cm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminp4cm"
level=info timestamp=2018-07-19T09:15:43.875545Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirvxtn kind= uid=3e86ca4d-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:15:43.886661Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirvxtn kind= uid=3e86ca4d-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:15:44.256149Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmirvxtn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmirvxtn"
level=info timestamp=2018-07-19T09:16:29.463563Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibp8cz kind= uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:16:29.466972Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibp8cz kind= uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:16:29.674681Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmibp8cz\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmibp8cz"
level=info timestamp=2018-07-19T09:17:14.938104Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmif8p7s kind= uid=74cd570e-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:17:14.949664Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmif8p7s kind= uid=74cd570e-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:17:59.816380Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi995hz kind= uid=8f82f455-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:17:59.822747Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi995hz kind= uid=8f82f455-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:18:15.378525Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi995hz\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi995hz, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 8f82f455-8b34-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi995hz"
level=info timestamp=2018-07-19T09:18:45.597713Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv8462 kind= uid=aac9b614-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:18:45.609024Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv8462 kind= uid=aac9b614-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"

Pod name: virt-controller-7d57d96b65-mwzlh
Pod phase: Running

Pod name: virt-controller-7d57d96b65-nc8jl
Pod phase: Running
level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182

Pod name: virt-handler-dj88w
Pod phase: Running

Pod name: virt-handler-zjzfz
Pod phase: Running
level=info timestamp=2018-07-19T09:16:46.132141Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-19T09:16:46.133385Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=Domain uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Domain is in state Running reason Unknown"
level=info timestamp=2018-07-19T09:16:46.158782Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-19T09:16:46.163802Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind= uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:16:46.164216Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object."
level=info timestamp=2018-07-19T09:16:46.165500Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-19T09:16:46.168022Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="grace period expired, killing deleted VirtualMachineInstance testvmibp8cz"
level=info timestamp=2018-07-19T09:16:46.379645Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:16:46.380795Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:16:46.381043Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-19T09:16:46.381214Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T09:16:46.381448Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:16:46.387027Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:16:46.387598Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T09:16:46.388241Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvmiv8462-zpf7k
Pod phase: Running
level=info timestamp=2018-07-19T09:18:49.990651Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:18:49.990926Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:18:49.993629Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"

Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running

• Failure in Spec Setup (BeforeEach) [45.736 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  should be reachable via the propagated IP from a Pod [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    on the same node from Node
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

    Expected error:
        <*errors.StatusError | 0xc420e48120>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146
------------------------------

Pod name: disks-images-provider-bzw9c
Pod phase: Running
copy all images to host mount directory

Pod name: disks-images-provider-z56nl
Pod phase: Running

Pod name: virt-api-7d79764579-gb9bw
Pod phase: Running

Pod name: virt-api-7d79764579-msdkp
Pod phase: Running
2018/07/19 09:17:09 http: TLS handshake error from 10.244.0.1:54322: EOF
2018/07/19 09:17:19 http: TLS handshake error from 10.244.0.1:54356: EOF
2018/07/19 09:17:29 http: TLS handshake error from 10.244.0.1:54392: EOF
2018/07/19 09:17:39 http: TLS handshake error from 10.244.0.1:54422: EOF
2018/07/19 09:17:49 http: TLS handshake error from 10.244.0.1:54452: EOF
2018/07/19 09:17:59 http: TLS handshake error from 10.244.0.1:54482: EOF
2018/07/19 09:18:09 http: TLS handshake error from 10.244.0.1:54518: EOF
2018/07/19 09:18:19 http: TLS handshake error from 10.244.0.1:54552: EOF
2018/07/19 09:18:29 http: TLS handshake error from 10.244.0.1:54582: EOF
2018/07/19 09:18:39 http: TLS handshake error from 10.244.0.1:54612: EOF
2018/07/19 09:18:49 http: TLS handshake error from 10.244.0.1:54646: EOF
2018/07/19 09:18:59 http: TLS handshake error from 10.244.0.1:54682: EOF
2018/07/19 09:19:09 http: TLS handshake error from 10.244.0.1:54712: EOF
2018/07/19 09:19:19 http: TLS handshake error from 10.244.0.1:54742: EOF
2018/07/19 09:19:29 http: TLS handshake error from 10.244.0.1:54772: EOF

Pod name: virt-api-7d79764579-w2776
Pod phase: Running
level=info timestamp=2018-07-19T09:18:25.649725Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-19T09:18:25.652886Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:18:34 http: TLS handshake error from 10.244.0.1:56390: EOF
level=info timestamp=2018-07-19T09:18:42.376153Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:18:42.438580Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:18:44 http: TLS handshake error from 10.244.0.1:56420: EOF
level=info timestamp=2018-07-19T09:18:47.825531Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:18:54 http: TLS handshake error from 10.244.0.1:56456: EOF
2018/07/19 09:19:04 http: TLS handshake error from 10.244.0.1:56490: EOF
level=info timestamp=2018-07-19T09:19:12.525517Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:19:12.597909Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:19:14 http: TLS handshake error from 10.244.0.1:56520: EOF
level=info timestamp=2018-07-19T09:19:17.881180Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:19:24 http: TLS handshake error from 10.244.0.1:56550: EOF
2018/07/19 09:19:34 http: TLS handshake error from 10.244.0.1:56584: EOF

Pod name: virt-controller-7d57d96b65-5h64g
Pod phase: Running
level=info timestamp=2018-07-19T09:15:43.886661Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirvxtn kind= uid=3e86ca4d-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:15:44.256149Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmirvxtn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmirvxtn"
level=info timestamp=2018-07-19T09:16:29.463563Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibp8cz kind= uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:16:29.466972Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibp8cz kind= uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:16:29.674681Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmibp8cz\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmibp8cz"
level=info timestamp=2018-07-19T09:17:14.938104Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmif8p7s kind= uid=74cd570e-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:17:14.949664Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmif8p7s kind= uid=74cd570e-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:17:59.816380Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi995hz kind= uid=8f82f455-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:17:59.822747Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi995hz kind= uid=8f82f455-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:18:15.378525Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi995hz\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi995hz, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 8f82f455-8b34-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi995hz"
level=info timestamp=2018-07-19T09:18:45.597713Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv8462 kind= uid=aac9b614-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:18:45.609024Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv8462 kind= uid=aac9b614-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:19:31.003467Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8ndcq kind= uid=c5e54e97-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:19:31.011001Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8ndcq kind= uid=c5e54e97-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:19:31.373198Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi8ndcq\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi8ndcq"

Pod name: virt-controller-7d57d96b65-mwzlh
Pod phase: Running

Pod name: virt-controller-7d57d96b65-nc8jl
Pod phase: Running
level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182

Pod name: virt-handler-dj88w
Pod phase: Running

Pod name: virt-handler-zjzfz
Pod phase: Running
level=info timestamp=2018-07-19T09:16:46.379645Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:16:46.380795Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:16:46.381043Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-19T09:16:46.381214Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T09:16:46.381448Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:16:46.387027Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:16:46.387598Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T09:16:46.388241Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibp8cz kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:19:00.821002Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmiv8462 kind= uid=aac9b614-8b34-11e8-9619-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain."
level=error timestamp=2018-07-19T09:19:00.899249Z pos=vm.go:404 component=virt-handler namespace=kubevirt-test-default name=testvmiv8462 kind= uid=aac9b614-8b34-11e8-9619-525500d15501 reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiv8462\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmiv8462, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: aac9b614-8b34-11e8-9619-525500d15501, UID in object meta: " msg="Updating the VirtualMachineInstance status failed."
level=info timestamp=2018-07-19T09:19:00.899450Z pos=vm.go:251 component=virt-handler reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiv8462\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmiv8462, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: aac9b614-8b34-11e8-9619-525500d15501, UID in object meta: " msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmiv8462"
level=info timestamp=2018-07-19T09:19:00.899658Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmiv8462 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T09:19:00.899744Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiv8462 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:19:00.906176Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmiv8462 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T09:19:00.906620Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiv8462 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."

Pod name: virt-launcher-testvmi8ndcq-xg4j4
Pod phase: Running
level=info timestamp=2018-07-19T09:19:34.990565Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:19:34.992077Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:19:34.995186Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"

Pod name: virt-launcher-testvmiv8462-zpf7k
Pod phase: Pending
level=info timestamp=2018-07-19T09:18:49.990651Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:18:49.990926Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:18:49.993629Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
level=info timestamp=2018-07-19T09:19:00.003655Z pos=libvirt.go:271 component=virt-launcher msg="Connected to libvirt daemon"
level=info timestamp=2018-07-19T09:19:00.062590Z pos=virt-launcher.go:143 component=virt-launcher msg="Watchdog file created at /var/run/kubevirt/watchdog-files/kubevirt-test-default_testvmiv8462"
level=info timestamp=2018-07-19T09:19:00.064990Z pos=client.go:152 component=virt-launcher msg="Registered libvirt event notify callback"
level=info timestamp=2018-07-19T09:19:00.065495Z pos=virt-launcher.go:60 component=virt-launcher msg="Marked as ready"
caught signal
virt-launcher exited with code 127

Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running

• Failure in Spec Setup
(BeforeEach) [45.460 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 should be reachable via the propagated IP from a Pod [BeforeEach] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on a different node from Node /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Expected error: <*errors.StatusError | 0xc420e48ab0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, } Timeout: request did not complete within allowed duration not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146 ------------------------------ Pod name: disks-images-provider-bzw9c Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-z56nl Pod phase: Running Pod name: virt-api-7d79764579-gb9bw Pod phase: Running Pod name: virt-api-7d79764579-msdkp Pod phase: Running 2018/07/19 09:17:59 http: TLS handshake error from 10.244.0.1:54482: EOF 2018/07/19 09:18:09 http: TLS handshake error from 10.244.0.1:54518: EOF 2018/07/19 09:18:19 http: TLS handshake error from 10.244.0.1:54552: EOF 2018/07/19 09:18:29 http: TLS handshake error from 10.244.0.1:54582: EOF 2018/07/19 09:18:39 http: TLS handshake error from 10.244.0.1:54612: EOF 2018/07/19 09:18:49 http: TLS handshake error from 10.244.0.1:54646: EOF 2018/07/19 09:18:59 http: TLS handshake error from 10.244.0.1:54682: EOF 2018/07/19 09:19:09 http: TLS handshake error from 10.244.0.1:54712: EOF 2018/07/19 09:19:19 http: TLS handshake error from 10.244.0.1:54742: EOF 2018/07/19 09:19:29 http: TLS handshake error from 10.244.0.1:54772: EOF 2018/07/19 09:19:39 http: TLS handshake error from 
10.244.0.1:54808: EOF 2018/07/19 09:19:49 http: TLS handshake error from 10.244.0.1:54842: EOF 2018/07/19 09:19:59 http: TLS handshake error from 10.244.0.1:54872: EOF 2018/07/19 09:20:09 http: TLS handshake error from 10.244.0.1:54902: EOF 2018/07/19 09:20:19 http: TLS handshake error from 10.244.0.1:54936: EOF Pod name: virt-api-7d79764579-w2776 Pod phase: Running level=info timestamp=2018-07-19T09:19:12.597909Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/19 09:19:14 http: TLS handshake error from 10.244.0.1:56520: EOF level=info timestamp=2018-07-19T09:19:17.881180Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/19 09:19:24 http: TLS handshake error from 10.244.0.1:56550: EOF 2018/07/19 09:19:34 http: TLS handshake error from 10.244.0.1:56584: EOF level=info timestamp=2018-07-19T09:19:42.632207Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-19T09:19:42.698955Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/19 09:19:44 http: TLS handshake error from 10.244.0.1:56620: EOF level=info timestamp=2018-07-19T09:19:47.867035Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/19 09:19:54 http: TLS handshake error from 10.244.0.1:56650: EOF 2018/07/19 09:20:04 http: TLS handshake error from 10.244.0.1:56680: EOF level=info timestamp=2018-07-19T09:20:12.780539Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET 
url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-19T09:20:12.867730Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/19 09:20:14 http: TLS handshake error from 10.244.0.1:56710: EOF level=info timestamp=2018-07-19T09:20:17.817032Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 Pod name: virt-controller-7d57d96b65-5h64g Pod phase: Running level=info timestamp=2018-07-19T09:16:29.463563Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibp8cz kind= uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T09:16:29.466972Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibp8cz kind= uid=59b3dc69-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T09:16:29.674681Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmibp8cz\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmibp8cz" level=info timestamp=2018-07-19T09:17:14.938104Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmif8p7s kind= uid=74cd570e-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T09:17:14.949664Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmif8p7s kind= uid=74cd570e-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as 
initialized" level=info timestamp=2018-07-19T09:17:59.816380Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi995hz kind= uid=8f82f455-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T09:17:59.822747Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi995hz kind= uid=8f82f455-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T09:18:15.378525Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi995hz\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi995hz, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 8f82f455-8b34-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi995hz" level=info timestamp=2018-07-19T09:18:45.597713Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv8462 kind= uid=aac9b614-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T09:18:45.609024Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv8462 kind= uid=aac9b614-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T09:19:31.003467Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8ndcq kind= uid=c5e54e97-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T09:19:31.011001Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8ndcq kind= 
uid=c5e54e97-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T09:19:31.373198Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi8ndcq\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi8ndcq" level=info timestamp=2018-07-19T09:20:16.438850Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6krm7 kind= uid=e0f7c381-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T09:20:16.446063Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6krm7 kind= uid=e0f7c381-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-7d57d96b65-mwzlh Pod phase: Running Pod name: virt-controller-7d57d96b65-nc8jl Pod phase: Running level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-dj88w Pod phase: Running Pod name: virt-handler-zjzfz Pod phase: Running level=info timestamp=2018-07-19T09:19:47.629663Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind= uid=c5e54e97-8b34-11e8-9619-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T09:19:47.629978Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object." level=info timestamp=2018-07-19T09:19:47.630134Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="Processing shutdown." 
level=info timestamp=2018-07-19T09:19:47.630707Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="grace period expired, killing deleted VirtualMachineInstance testvmi8ndcq" level=info timestamp=2018-07-19T09:19:47.884142Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-19T09:19:47.886741Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=Domain uid=c5e54e97-8b34-11e8-9619-525500d15501 msg="Domain is in state Shutoff reason Destroyed" level=info timestamp=2018-07-19T09:19:47.900636Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T09:19:47.901143Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object." level=info timestamp=2018-07-19T09:19:47.901202Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="Processing shutdown." level=info timestamp=2018-07-19T09:19:47.903260Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-19T09:19:47.909969Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-19T09:19:47.910189Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-19T09:19:47.910601Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-19T09:19:47.910873Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T09:19:47.922892Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" Pod name: virt-launcher-testvmi6krm7-cw9k8 Pod phase: Running level=info timestamp=2018-07-19T09:20:20.456600Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets" level=info timestamp=2018-07-19T09:20:20.456792Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]" level=info timestamp=2018-07-19T09:20:20.458899Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system" Pod name: virt-launcher-testvmi8ndcq-xg4j4 Pod phase: Succeeded Unable to retrieve container logs for docker://0045c6c8ec170e7bd2c1ad524b822b0a6b799f9747012d1321b96843ed84c95d Pod name: virt-launcher-testvmiwmvcc-jz6cl Pod phase: Running • Failure in Spec Setup (BeforeEach) [43.461 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 with a service matching the vmi exposed [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:283 should be able to reach the vmi based on labels specified on the vmi 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:303 Expected error: <*errors.StatusError | 0xc4206a4510>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, } Timeout: request did not complete within allowed duration not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146 ------------------------------ Pod name: disks-images-provider-bzw9c Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-z56nl Pod phase: Running Pod name: virt-api-7d79764579-gb9bw Pod phase: Running Pod name: virt-api-7d79764579-msdkp Pod phase: Running 2018/07/19 09:18:39 http: TLS handshake error from 10.244.0.1:54612: EOF 2018/07/19 09:18:49 http: TLS handshake error from 10.244.0.1:54646: EOF 2018/07/19 09:18:59 http: TLS handshake error from 10.244.0.1:54682: EOF 2018/07/19 09:19:09 http: TLS handshake error from 10.244.0.1:54712: EOF 2018/07/19 09:19:19 http: TLS handshake error from 10.244.0.1:54742: EOF 2018/07/19 09:19:29 http: TLS handshake error from 10.244.0.1:54772: EOF 2018/07/19 09:19:39 http: TLS handshake error from 10.244.0.1:54808: EOF 2018/07/19 09:19:49 http: TLS handshake error from 10.244.0.1:54842: EOF 2018/07/19 09:19:59 http: TLS handshake error from 10.244.0.1:54872: EOF 2018/07/19 09:20:09 http: TLS handshake error from 10.244.0.1:54902: EOF 2018/07/19 09:20:19 http: TLS handshake error from 10.244.0.1:54936: EOF 2018/07/19 09:20:29 http: TLS handshake error from 10.244.0.1:54972: EOF 2018/07/19 09:20:39 http: TLS handshake error from 10.244.0.1:55002: EOF 2018/07/19 09:20:49 http: TLS handshake error from 10.244.0.1:55032: EOF 2018/07/19 09:20:59 http: TLS handshake error from 10.244.0.1:55062: EOF Pod name: 
virt-api-7d79764579-w2776 Pod phase: Running 2018/07/19 09:20:04 http: TLS handshake error from 10.244.0.1:56680: EOF level=info timestamp=2018-07-19T09:20:12.780539Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-19T09:20:12.867730Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/19 09:20:14 http: TLS handshake error from 10.244.0.1:56710: EOF level=info timestamp=2018-07-19T09:20:17.817032Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/19 09:20:24 http: TLS handshake error from 10.244.0.1:56748: EOF level=info timestamp=2018-07-19T09:20:25.656258Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-19T09:20:25.659654Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/19 09:20:34 http: TLS handshake error from 10.244.0.1:56780: EOF level=info timestamp=2018-07-19T09:20:42.943518Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-19T09:20:43.015896Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/19 09:20:44 http: TLS handshake error from 10.244.0.1:56810: EOF level=info timestamp=2018-07-19T09:20:47.960952Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 
username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/19 09:20:54 http: TLS handshake error from 10.244.0.1:56840: EOF 2018/07/19 09:21:04 http: TLS handshake error from 10.244.0.1:56874: EOF Pod name: virt-controller-7d57d96b65-5h64g Pod phase: Running level=info timestamp=2018-07-19T09:17:14.949664Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmif8p7s kind= uid=74cd570e-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T09:17:59.816380Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi995hz kind= uid=8f82f455-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T09:17:59.822747Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi995hz kind= uid=8f82f455-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T09:18:15.378525Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi995hz\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi995hz, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 8f82f455-8b34-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi995hz" level=info timestamp=2018-07-19T09:18:45.597713Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv8462 kind= uid=aac9b614-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T09:18:45.609024Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv8462 kind= 
uid=aac9b614-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T09:19:31.003467Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8ndcq kind= uid=c5e54e97-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T09:19:31.011001Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8ndcq kind= uid=c5e54e97-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T09:19:31.373198Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi8ndcq\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi8ndcq" level=info timestamp=2018-07-19T09:20:16.438850Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6krm7 kind= uid=e0f7c381-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T09:20:16.446063Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6krm7 kind= uid=e0f7c381-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T09:20:29.744986Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi6krm7\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi6krm7, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: e0f7c381-8b34-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance 
kubevirt-test-default/testvmi6krm7" level=info timestamp=2018-07-19T09:20:59.918188Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwfrbm kind= uid=fae710bb-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T09:20:59.936933Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwfrbm kind= uid=fae710bb-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T09:21:00.128000Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwfrbm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiwfrbm" Pod name: virt-controller-7d57d96b65-mwzlh Pod phase: Running Pod name: virt-controller-7d57d96b65-nc8jl Pod phase: Running level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-dj88w Pod phase: Running Pod name: virt-handler-zjzfz Pod phase: Running level=info timestamp=2018-07-19T09:19:47.629663Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind= uid=c5e54e97-8b34-11e8-9619-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T09:19:47.629978Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object." level=info timestamp=2018-07-19T09:19:47.630134Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="Processing shutdown." 
level=info timestamp=2018-07-19T09:19:47.630707Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="grace period expired, killing deleted VirtualMachineInstance testvmi8ndcq" level=info timestamp=2018-07-19T09:19:47.884142Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-19T09:19:47.886741Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=Domain uid=c5e54e97-8b34-11e8-9619-525500d15501 msg="Domain is in state Shutoff reason Destroyed" level=info timestamp=2018-07-19T09:19:47.900636Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T09:19:47.901143Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object." level=info timestamp=2018-07-19T09:19:47.901202Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="Processing shutdown." level=info timestamp=2018-07-19T09:19:47.903260Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-19T09:19:47.909969Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:19:47.910189Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-19T09:19:47.910601Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T09:19:47.910873Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi8ndcq kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:19:47.922892Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"

Pod name: virt-launcher-testvmiwfrbm-xcmj8
Pod phase: Running

level=info timestamp=2018-07-19T09:21:04.180512Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:21:04.180773Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:21:04.182838Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"

Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running

• Failure in Spec Setup (BeforeEach) [45.438 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  with a service matching the vmi exposed [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:283
    should fail to reach the vmi if an invalid servicename is used
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:314

    Expected error:
        <*errors.StatusError | 0xc420140d80>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146
------------------------------

Pod name: disks-images-provider-bzw9c
Pod phase: Running

copy all images to host mount directory

Pod name: disks-images-provider-z56nl
Pod phase: Running

Pod name: virt-api-7d79764579-gb9bw
Pod phase: Running

Pod name: virt-api-7d79764579-msdkp
Pod phase: Running

2018/07/19 09:19:29 http: TLS handshake error from 10.244.0.1:54772: EOF
2018/07/19 09:19:39 http: TLS handshake error from 10.244.0.1:54808: EOF
2018/07/19 09:19:49 http: TLS handshake error from 10.244.0.1:54842: EOF
2018/07/19 09:19:59 http: TLS handshake error from 10.244.0.1:54872: EOF
2018/07/19 09:20:09 http: TLS handshake error from 10.244.0.1:54902: EOF
2018/07/19 09:20:19 http: TLS handshake error from 10.244.0.1:54936: EOF
2018/07/19 09:20:29 http: TLS handshake error from 10.244.0.1:54972: EOF
2018/07/19 09:20:39 http: TLS handshake error from 10.244.0.1:55002: EOF
2018/07/19 09:20:49 http: TLS handshake error from 10.244.0.1:55032: EOF
2018/07/19 09:20:59 http: TLS handshake error from 10.244.0.1:55062: EOF
2018/07/19 09:21:09 http: TLS handshake error from 10.244.0.1:55098: EOF
2018/07/19 09:21:19 http: TLS handshake error from 10.244.0.1:55132: EOF
2018/07/19 09:21:29 http: TLS handshake error from 10.244.0.1:55162: EOF
2018/07/19 09:21:39 http: TLS handshake error from 10.244.0.1:55192: EOF
2018/07/19 09:21:49 http: TLS handshake error from 10.244.0.1:55226: EOF

Pod name: virt-api-7d79764579-w2776
Pod phase: Running

level=info timestamp=2018-07-19T09:20:43.015896Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:20:44 http: TLS handshake error from 10.244.0.1:56810: EOF
level=info timestamp=2018-07-19T09:20:47.960952Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:20:54 http: TLS handshake error from 10.244.0.1:56840: EOF
2018/07/19 09:21:04 http: TLS handshake error from 10.244.0.1:56874: EOF
level=info timestamp=2018-07-19T09:21:13.051748Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:21:13.130109Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:21:14 http: TLS handshake error from 10.244.0.1:56910: EOF
level=info timestamp=2018-07-19T09:21:17.857659Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:21:24 http: TLS handshake error from 10.244.0.1:56940: EOF
2018/07/19 09:21:34 http: TLS handshake error from 10.244.0.1:56970: EOF
level=info timestamp=2018-07-19T09:21:43.209899Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:21:43.302089Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:21:44 http: TLS handshake error from 10.244.0.1:57000: EOF
level=info timestamp=2018-07-19T09:21:47.844014Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19

Pod name: virt-controller-7d57d96b65-5h64g
Pod phase: Running

level=info timestamp=2018-07-19T09:18:15.378525Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi995hz\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi995hz, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 8f82f455-8b34-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi995hz"
level=info timestamp=2018-07-19T09:18:45.597713Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv8462 kind= uid=aac9b614-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:18:45.609024Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv8462 kind= uid=aac9b614-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:19:31.003467Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8ndcq kind= uid=c5e54e97-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:19:31.011001Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8ndcq kind= uid=c5e54e97-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:19:31.373198Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi8ndcq\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi8ndcq"
level=info timestamp=2018-07-19T09:20:16.438850Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6krm7 kind= uid=e0f7c381-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:20:16.446063Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6krm7 kind= uid=e0f7c381-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:20:29.744986Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi6krm7\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi6krm7, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: e0f7c381-8b34-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi6krm7"
level=info timestamp=2018-07-19T09:20:59.918188Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwfrbm kind= uid=fae710bb-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:20:59.936933Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwfrbm kind= uid=fae710bb-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:21:00.128000Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwfrbm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiwfrbm"
level=info timestamp=2018-07-19T09:21:45.407151Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi57w8c kind= uid=16024d95-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:21:45.416711Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi57w8c kind= uid=16024d95-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:21:45.701094Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi57w8c\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi57w8c"

Pod name: virt-controller-7d57d96b65-mwzlh
Pod phase: Running

Pod name: virt-controller-7d57d96b65-nc8jl
Pod phase: Running

level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182

Pod name: virt-handler-dj88w
Pod phase: Running

Pod name: virt-handler-zjzfz
Pod phase: Running

level=info timestamp=2018-07-19T09:21:16.666792Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-19T09:21:16.667927Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="grace period expired, killing deleted VirtualMachineInstance testvmiwfrbm"
level=info timestamp=2018-07-19T09:21:16.671632Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-19T09:21:16.877655Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:21:16.877817Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object."
level=info timestamp=2018-07-19T09:21:16.877853Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-19T09:21:16.878168Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:21:16.882997Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object."
level=info timestamp=2018-07-19T09:21:16.883393Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-19T09:21:16.883716Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:21:16.901260Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:21:16.905698Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-19T09:21:16.905849Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T09:21:16.906028Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:21:16.944557Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"

Pod name: virt-launcher-testvmi57w8c-j5qtj
Pod phase: Running

level=info timestamp=2018-07-19T09:21:49.958656Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:21:49.959017Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:21:49.961791Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"

Pod name: virt-launcher-testvmiwfrbm-xcmj8
Pod phase: Succeeded

Unable to retrieve container logs for docker://e8b3c740764bac758d80e24c9ef83d64cf788754295c910bce353e31fbcbd72a

Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running

• Failure in Spec Setup (BeforeEach) [45.534 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  with a subdomain and a headless service given [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:330
    should be able to reach the vmi via its unique fully qualified domain name
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:353

    Expected error:
        <*errors.StatusError | 0xc4206a5560>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146
------------------------------

Pod name: disks-images-provider-bzw9c
Pod phase: Running

copy all images to host mount directory

Pod name: disks-images-provider-z56nl
Pod phase: Running

Pod name: virt-api-7d79764579-gb9bw
Pod phase: Running

Pod name: virt-api-7d79764579-msdkp
Pod phase: Running

2018/07/19 09:20:09 http: TLS handshake error from 10.244.0.1:54902: EOF
2018/07/19 09:20:19 http: TLS handshake error from 10.244.0.1:54936: EOF
2018/07/19 09:20:29 http: TLS handshake error from 10.244.0.1:54972: EOF
2018/07/19 09:20:39 http: TLS handshake error from 10.244.0.1:55002: EOF
2018/07/19 09:20:49 http: TLS handshake error from 10.244.0.1:55032: EOF
2018/07/19 09:20:59 http: TLS handshake error from 10.244.0.1:55062: EOF
2018/07/19 09:21:09 http: TLS handshake error from 10.244.0.1:55098: EOF
2018/07/19 09:21:19 http: TLS handshake error from 10.244.0.1:55132: EOF
2018/07/19 09:21:29 http: TLS handshake error from 10.244.0.1:55162: EOF
2018/07/19 09:21:39 http: TLS handshake error from 10.244.0.1:55192: EOF
2018/07/19 09:21:49 http: TLS handshake error from 10.244.0.1:55226: EOF
2018/07/19 09:21:59 http: TLS handshake error from 10.244.0.1:55262: EOF
2018/07/19 09:22:09 http: TLS handshake error from 10.244.0.1:55292: EOF
2018/07/19 09:22:19 http: TLS handshake error from 10.244.0.1:55322: EOF
2018/07/19 09:22:29 http: TLS handshake error from 10.244.0.1:55352: EOF

Pod name: virt-api-7d79764579-w2776
Pod phase: Running

level=info timestamp=2018-07-19T09:21:43.209899Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:21:43.302089Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:21:44 http: TLS handshake error from 10.244.0.1:57000: EOF
level=info timestamp=2018-07-19T09:21:47.844014Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:21:54 http: TLS handshake error from 10.244.0.1:57036: EOF
2018/07/19 09:22:04 http: TLS handshake error from 10.244.0.1:57070: EOF
level=info timestamp=2018-07-19T09:22:13.515695Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:22:13.515167Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:22:14 http: TLS handshake error from 10.244.0.1:57100: EOF
level=info timestamp=2018-07-19T09:22:17.297678Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-19T09:22:17.708569Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-19T09:22:18.047977Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:22:24 http: TLS handshake error from 10.244.0.1:57130: EOF
level=info timestamp=2018-07-19T09:22:25.685022Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-19T09:22:25.690954Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19

Pod name: virt-controller-7d57d96b65-5h64g
Pod phase: Running

level=info timestamp=2018-07-19T09:19:31.373198Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi8ndcq\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi8ndcq"
level=info timestamp=2018-07-19T09:20:16.438850Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6krm7 kind= uid=e0f7c381-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:20:16.446063Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6krm7 kind= uid=e0f7c381-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:20:29.744986Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi6krm7\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi6krm7, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: e0f7c381-8b34-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi6krm7"
level=info timestamp=2018-07-19T09:20:59.918188Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwfrbm kind= uid=fae710bb-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:20:59.936933Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwfrbm kind= uid=fae710bb-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:21:00.128000Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwfrbm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiwfrbm"
level=info timestamp=2018-07-19T09:21:45.407151Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi57w8c kind= uid=16024d95-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:21:45.416711Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi57w8c kind= uid=16024d95-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:21:45.701094Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi57w8c\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi57w8c"
level=info timestamp=2018-07-19T09:22:00.754837Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi57w8c\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi57w8c, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 16024d95-8b35-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi57w8c"
level=info timestamp=2018-07-19T09:22:30.926704Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi495xn kind= uid=31268d25-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:22:30.929719Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi495xn kind= uid=31268d25-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:22:31.156403Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi495xn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi495xn"
level=info timestamp=2018-07-19T09:22:31.196529Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi495xn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi495xn"

Pod name: virt-controller-7d57d96b65-mwzlh
Pod phase: Running

Pod name: virt-controller-7d57d96b65-nc8jl
Pod phase: Running

level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182

Pod name: virt-handler-dj88w
Pod phase: Running

Pod name: virt-handler-zjzfz
Pod phase: Running

level=info timestamp=2018-07-19T09:21:16.666792Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-19T09:21:16.667927Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="grace period expired, killing deleted VirtualMachineInstance testvmiwfrbm"
level=info timestamp=2018-07-19T09:21:16.671632Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-19T09:21:16.877655Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:21:16.877817Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object."
level=info timestamp=2018-07-19T09:21:16.877853Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-19T09:21:16.878168Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:21:16.882997Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object."
level=info timestamp=2018-07-19T09:21:16.883393Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-19T09:21:16.883716Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:21:16.901260Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:21:16.905698Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-19T09:21:16.905849Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T09:21:16.906028Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:21:16.944557Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"

Pod name: virt-launcher-testvmi495xn-jbmb2
Pod phase: Running

level=info timestamp=2018-07-19T09:22:35.243392Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:22:35.243640Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:22:35.245796Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"

Pod name: virt-launcher-testvmi57w8c-j5qtj
Pod phase: Failed

level=info timestamp=2018-07-19T09:21:49.958656Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:21:49.959017Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:21:49.961791Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
level=info timestamp=2018-07-19T09:21:59.975244Z pos=libvirt.go:271 component=virt-launcher msg="Connected to libvirt daemon"
level=info timestamp=2018-07-19T09:22:00.035629Z pos=virt-launcher.go:143 component=virt-launcher msg="Watchdog file created at /var/run/kubevirt/watchdog-files/kubevirt-test-default_testvmi57w8c"
level=info timestamp=2018-07-19T09:22:00.037946Z pos=client.go:152 component=virt-launcher msg="Registered libvirt event notify callback"
level=info timestamp=2018-07-19T09:22:00.038541Z pos=virt-launcher.go:60 component=virt-launcher msg="Marked as ready"
caught signal
virt-launcher exited with code 127

Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running

• Failure in Spec Setup (BeforeEach) [42.639 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom interface model [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:379
    should expose the right device type to the guest
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:380

    Expected error:
        <*errors.StatusError | 0xc42085ed80>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146
------------------------------

Pod name: disks-images-provider-bzw9c
Pod phase: Running

copy all images to host mount directory

Pod name: disks-images-provider-z56nl
Pod phase: Running

Pod name: virt-api-7d79764579-gb9bw
Pod phase: Running

Pod name: virt-api-7d79764579-msdkp
Pod phase: Running

2018/07/19 09:20:59 http: TLS handshake error from 10.244.0.1:55062: EOF
2018/07/19 09:21:09 http: TLS handshake error from 10.244.0.1:55098: EOF
2018/07/19 09:21:19 http: TLS handshake error from 10.244.0.1:55132: EOF
2018/07/19 09:21:29 http: TLS handshake error from 10.244.0.1:55162: EOF
2018/07/19 09:21:39 http: TLS handshake error from 10.244.0.1:55192: EOF
2018/07/19 09:21:49 http: TLS handshake error from 10.244.0.1:55226: EOF
2018/07/19 09:21:59 http: TLS handshake error from 10.244.0.1:55262: EOF
2018/07/19 09:22:09 http: TLS handshake error from 10.244.0.1:55292: EOF
2018/07/19 09:22:19 http: TLS handshake error from 10.244.0.1:55322: EOF
2018/07/19 09:22:29 http: TLS handshake error from 10.244.0.1:55352: EOF
2018/07/19 09:22:39 http: TLS handshake error from 10.244.0.1:55390: EOF
2018/07/19 09:22:49 http: TLS handshake error from 10.244.0.1:55422: EOF
2018/07/19 09:22:59 http: TLS handshake error from 10.244.0.1:55452: EOF
2018/07/19 09:23:09 http: TLS handshake error from 10.244.0.1:55482: EOF
2018/07/19 09:23:19 http: TLS handshake error from 10.244.0.1:55516: EOF

Pod name: virt-api-7d79764579-w2776
Pod phase: Running

level=info timestamp=2018-07-19T09:22:18.047977Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:22:24 http: TLS handshake error from 10.244.0.1:57130: EOF
level=info timestamp=2018-07-19T09:22:25.685022Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-19T09:22:25.690954Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:22:34 http: TLS handshake error from 10.244.0.1:57166: EOF
level=info timestamp=2018-07-19T09:22:43.666872Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:22:43.668813Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:22:44 http: TLS handshake error from 10.244.0.1:57200: EOF
level=info timestamp=2018-07-19T09:22:47.918240Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:22:54 http: TLS handshake error from 10.244.0.1:57230: EOF
2018/07/19 09:23:04 http: TLS handshake error from 10.244.0.1:57260: EOF
level=info timestamp=2018-07-19T09:23:14.040777Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:23:14.039042Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:23:14 http: TLS handshake error from 10.244.0.1:57292: EOF
level=info timestamp=2018-07-19T09:23:17.909748Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19

Pod name: virt-controller-7d57d96b65-5h64g
Pod phase: Running

level=info timestamp=2018-07-19T09:20:59.918188Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwfrbm kind= uid=fae710bb-8b34-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:20:59.936933Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwfrbm kind= uid=fae710bb-8b34-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:21:00.128000Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwfrbm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiwfrbm"
level=info timestamp=2018-07-19T09:21:45.407151Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi57w8c kind= uid=16024d95-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:21:45.416711Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi57w8c kind= uid=16024d95-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:21:45.701094Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi57w8c\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi57w8c"
level=info timestamp=2018-07-19T09:22:00.754837Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi57w8c\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi57w8c, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 16024d95-8b35-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi57w8c"
level=info timestamp=2018-07-19T09:22:30.926704Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi495xn kind= uid=31268d25-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:22:30.929719Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi495xn kind= uid=31268d25-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:22:31.156403Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi495xn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi495xn"
level=info timestamp=2018-07-19T09:22:31.196529Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi495xn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi495xn"
level=info timestamp=2018-07-19T09:22:43.402095Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi495xn\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi495xn, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 31268d25-8b35-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi495xn"
level=info timestamp=2018-07-19T09:23:13.623483Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi47z4s kind= uid=4a984859-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:23:13.625420Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi47z4s kind= uid=4a984859-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:23:14.004006Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi47z4s\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi47z4s"

Pod name: virt-controller-7d57d96b65-mwzlh
Pod phase: Running

Pod name: virt-controller-7d57d96b65-nc8jl
Pod phase: Running

level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182

Pod name: virt-handler-dj88w
Pod phase: Running

Pod name: virt-handler-zjzfz
Pod phase: Running

level=info timestamp=2018-07-19T09:21:16.666792Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-19T09:21:16.667927Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="grace period expired, killing deleted VirtualMachineInstance testvmiwfrbm"
level=info timestamp=2018-07-19T09:21:16.671632Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-19T09:21:16.877655Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:21:16.877817Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object."
level=info timestamp=2018-07-19T09:21:16.877853Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-19T09:21:16.878168Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:21:16.882997Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object."
level=info timestamp=2018-07-19T09:21:16.883393Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-19T09:21:16.883716Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:21:16.901260Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:21:16.905698Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-19T09:21:16.905849Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T09:21:16.906028Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiwfrbm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:21:16.944557Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"

Pod name: virt-launcher-testvmi47z4s-6nd6r
Pod phase: Running
level=info timestamp=2018-07-19T09:23:17.356675Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:23:17.356962Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:23:17.359536Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"

Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running

• Failure in Spec Setup (BeforeEach) [45.090 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with default interface model [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:393
    should expose the right device type to the guest
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:394

    Expected error:
        <*errors.StatusError | 0xc420141290>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146
------------------------------

Pod name: disks-images-provider-bzw9c
Pod phase: Running
copy all images to host mount directory

Pod name: disks-images-provider-z56nl
Pod phase: Running

Pod name: virt-api-7d79764579-gb9bw
Pod phase: Running

Pod name: virt-api-7d79764579-msdkp
Pod phase: Running
2018/07/19 09:21:39 http: TLS handshake error from 10.244.0.1:55192: EOF
2018/07/19 09:21:49 http: TLS handshake error from 10.244.0.1:55226: EOF
2018/07/19 09:21:59 http: TLS handshake error from 10.244.0.1:55262: EOF
2018/07/19 09:22:09 http: TLS handshake error from 10.244.0.1:55292: EOF
2018/07/19 09:22:19 http: TLS handshake error from 10.244.0.1:55322: EOF
2018/07/19 09:22:29 http: TLS handshake error from 10.244.0.1:55352: EOF
2018/07/19 09:22:39 http: TLS handshake error from 10.244.0.1:55390: EOF
2018/07/19 09:22:49 http: TLS handshake error from 10.244.0.1:55422: EOF
2018/07/19 09:22:59 http: TLS handshake error from 10.244.0.1:55452: EOF
2018/07/19 09:23:09 http: TLS handshake error from 10.244.0.1:55482: EOF
2018/07/19 09:23:19 http: TLS handshake error from 10.244.0.1:55516: EOF
2018/07/19 09:23:29 http: TLS handshake error from 10.244.0.1:55552: EOF
2018/07/19 09:23:39 http: TLS handshake error from 10.244.0.1:55582: EOF
2018/07/19 09:23:49 http: TLS handshake error from 10.244.0.1:55612: EOF
2018/07/19 09:23:59 http: TLS handshake error from 10.244.0.1:55644: EOF

Pod name: virt-api-7d79764579-w2776
Pod phase: Running
2018/07/19 09:22:54 http: TLS handshake error from 10.244.0.1:57230: EOF
2018/07/19 09:23:04 http: TLS handshake error from 10.244.0.1:57260: EOF
level=info timestamp=2018-07-19T09:23:14.040777Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:23:14.039042Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:23:14 http: TLS handshake error from 10.244.0.1:57292: EOF
level=info timestamp=2018-07-19T09:23:17.909748Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:23:24 http: TLS handshake error from 10.244.0.1:57328: EOF
level=info timestamp=2018-07-19T09:23:25.713718Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-19T09:23:25.717111Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:23:34 http: TLS handshake error from 10.244.0.1:57360: EOF
level=info timestamp=2018-07-19T09:23:44.183605Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:23:44.206548Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:23:44 http: TLS handshake error from 10.244.0.1:57390: EOF
level=info timestamp=2018-07-19T09:23:47.901255Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:23:54 http: TLS handshake error from 10.244.0.1:57420: EOF

Pod name: virt-controller-7d57d96b65-5h64g
Pod phase: Running
level=info timestamp=2018-07-19T09:21:45.407151Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi57w8c kind= uid=16024d95-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:21:45.416711Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi57w8c kind= uid=16024d95-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:21:45.701094Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi57w8c\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi57w8c"
level=info timestamp=2018-07-19T09:22:00.754837Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi57w8c\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi57w8c, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 16024d95-8b35-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi57w8c"
level=info timestamp=2018-07-19T09:22:30.926704Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi495xn kind= uid=31268d25-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:22:30.929719Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi495xn kind= uid=31268d25-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:22:31.156403Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi495xn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi495xn"
level=info timestamp=2018-07-19T09:22:31.196529Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi495xn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi495xn"
level=info timestamp=2018-07-19T09:22:43.402095Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi495xn\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi495xn, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 31268d25-8b35-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi495xn"
level=info timestamp=2018-07-19T09:23:13.623483Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi47z4s kind= uid=4a984859-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:23:13.625420Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi47z4s kind= uid=4a984859-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:23:14.004006Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi47z4s\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi47z4s"
level=info timestamp=2018-07-19T09:23:58.557474Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmitkxht kind= uid=65622438-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:23:58.561900Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmitkxht kind= uid=65622438-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:23:58.903004Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmitkxht\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmitkxht"

Pod name: virt-controller-7d57d96b65-mwzlh
Pod phase: Running

Pod name: virt-controller-7d57d96b65-nc8jl
Pod phase: Running
level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182

Pod name: virt-handler-dj88w
Pod phase: Running

Pod name: virt-handler-zjzfz
Pod phase: Running
level=info timestamp=2018-07-19T09:23:29.037975Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type ADDED"
level=info timestamp=2018-07-19T09:23:29.038670Z pos=vm.go:657 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=Domain uid=4a984859-8b35-11e8-9619-525500d15501 msg="Domain is in state Paused reason Unknown"
level=info timestamp=2018-07-19T09:23:29.683867Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-19T09:23:29.684231Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=Domain uid=4a984859-8b35-11e8-9619-525500d15501 msg="Domain is in state Running reason Unknown"
level=info timestamp=2018-07-19T09:23:29.755671Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-19T09:23:29.775635Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind= uid=4a984859-8b35-11e8-9619-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:23:29.775953Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object."
level=info timestamp=2018-07-19T09:23:29.776144Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-19T09:23:29.797903Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="grace period expired, killing deleted VirtualMachineInstance testvmi47z4s"
level=info timestamp=2018-07-19T09:23:30.022412Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:23:30.022791Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-19T09:23:30.041760Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:23:30.042939Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T09:23:30.041222Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:23:30.043249Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvmitkxht-shm9j
Pod phase: Running
level=info timestamp=2018-07-19T09:24:02.711695Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:24:02.711945Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:24:02.713657Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"

Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running

• Failure in Spec Setup (BeforeEach) [45.099 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom MAC address [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:413
    should configure custom MAC address
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:414

    Expected error:
        <*errors.StatusError | 0xc421084120>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146
------------------------------

Pod name: disks-images-provider-bzw9c
Pod phase: Running
copy all images to host mount directory

Pod name: disks-images-provider-z56nl
Pod phase: Running

Pod name: virt-api-7d79764579-gb9bw
Pod phase: Running

Pod name: virt-api-7d79764579-msdkp
Pod phase: Running
2018/07/19 09:22:29 http: TLS handshake error from 10.244.0.1:55352: EOF
2018/07/19 09:22:39 http: TLS handshake error from 10.244.0.1:55390: EOF
2018/07/19 09:22:49 http: TLS handshake error from 10.244.0.1:55422: EOF
2018/07/19 09:22:59 http: TLS handshake error from 10.244.0.1:55452: EOF
2018/07/19 09:23:09 http: TLS handshake error from 10.244.0.1:55482: EOF
2018/07/19 09:23:19 http: TLS handshake error from 10.244.0.1:55516: EOF
2018/07/19 09:23:29 http: TLS handshake error from 10.244.0.1:55552: EOF
2018/07/19 09:23:39 http: TLS handshake error from 10.244.0.1:55582: EOF
2018/07/19 09:23:49 http: TLS handshake error from 10.244.0.1:55612: EOF
2018/07/19 09:23:59 http: TLS handshake error from 10.244.0.1:55644: EOF
2018/07/19 09:24:09 http: TLS handshake error from 10.244.0.1:55680: EOF
2018/07/19 09:24:19 http: TLS handshake error from 10.244.0.1:55712: EOF
2018/07/19 09:24:29 http: TLS handshake error from 10.244.0.1:55742: EOF
2018/07/19 09:24:39 http: TLS handshake error from 10.244.0.1:55772: EOF
2018/07/19 09:24:49 http: TLS handshake error from 10.244.0.1:55806: EOF

Pod name: virt-api-7d79764579-w2776
Pod phase: Running
level=info timestamp=2018-07-19T09:23:44.206548Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:23:44 http: TLS handshake error from 10.244.0.1:57390: EOF
level=info timestamp=2018-07-19T09:23:47.901255Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:23:54 http: TLS handshake error from 10.244.0.1:57420: EOF
2018/07/19 09:24:04 http: TLS handshake error from 10.244.0.1:57456: EOF
level=info timestamp=2018-07-19T09:24:14.461197Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:24:14.465350Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/19 09:24:14 http: TLS handshake error from 10.244.0.1:57490: EOF
level=info timestamp=2018-07-19T09:24:17.842094Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:24:24 http: TLS handshake error from 10.244.0.1:57520: EOF
2018/07/19 09:24:34 http: TLS handshake error from 10.244.0.1:57550: EOF
2018/07/19 09:24:44 http: TLS handshake error from 10.244.0.1:57582: EOF
level=info timestamp=2018-07-19T09:24:44.663042Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:24:44.696509Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:24:47.904819Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19

Pod name: virt-controller-7d57d96b65-5h64g
Pod phase: Running
level=info timestamp=2018-07-19T09:22:30.926704Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi495xn kind= uid=31268d25-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:22:30.929719Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi495xn kind= uid=31268d25-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:22:31.156403Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi495xn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi495xn"
level=info timestamp=2018-07-19T09:22:31.196529Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi495xn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi495xn"
level=info timestamp=2018-07-19T09:22:43.402095Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi495xn\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi495xn, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 31268d25-8b35-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi495xn"
level=info timestamp=2018-07-19T09:23:13.623483Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi47z4s kind= uid=4a984859-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:23:13.625420Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi47z4s kind= uid=4a984859-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:23:14.004006Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi47z4s\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi47z4s"
level=info timestamp=2018-07-19T09:23:58.557474Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmitkxht kind= uid=65622438-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:23:58.561900Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmitkxht kind= uid=65622438-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:23:58.903004Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmitkxht\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmitkxht"
level=info timestamp=2018-07-19T09:24:43.790006Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqs7s7 kind= uid=804c8ed0-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:24:43.803533Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqs7s7 kind= uid=804c8ed0-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:24:44.066792Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiqs7s7\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiqs7s7"
level=info timestamp=2018-07-19T09:24:44.103530Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiqs7s7\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiqs7s7"

Pod name: virt-controller-7d57d96b65-mwzlh
Pod phase: Running

Pod name: virt-controller-7d57d96b65-nc8jl
Pod phase: Running
level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182

Pod name: virt-handler-dj88w
Pod phase: Running

Pod name: virt-handler-zjzfz
Pod phase: Running
level=info timestamp=2018-07-19T09:23:29.037975Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type ADDED"
level=info timestamp=2018-07-19T09:23:29.038670Z pos=vm.go:657 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=Domain uid=4a984859-8b35-11e8-9619-525500d15501 msg="Domain is in state Paused reason Unknown"
level=info timestamp=2018-07-19T09:23:29.683867Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-19T09:23:29.684231Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=Domain uid=4a984859-8b35-11e8-9619-525500d15501 msg="Domain is in state Running reason Unknown"
level=info timestamp=2018-07-19T09:23:29.755671Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-19T09:23:29.775635Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind= uid=4a984859-8b35-11e8-9619-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:23:29.775953Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object."
level=info timestamp=2018-07-19T09:23:29.776144Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-19T09:23:29.797903Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="grace period expired, killing deleted VirtualMachineInstance testvmi47z4s"
level=info timestamp=2018-07-19T09:23:30.022412Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:23:30.022791Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-19T09:23:30.041760Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:23:30.042939Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T09:23:30.041222Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:23:30.043249Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvmiqs7s7-nvt87
Pod phase: Running
level=info timestamp=2018-07-19T09:24:48.372944Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:24:48.373195Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:24:48.375089Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
Pod name: virt-launcher-testvmitkxht-shm9j
Pod phase: Failed
level=info timestamp=2018-07-19T09:24:02.711695Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:24:02.711945Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:24:02.713657Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
level=info timestamp=2018-07-19T09:24:12.724551Z pos=libvirt.go:271 component=virt-launcher msg="Connected to libvirt daemon"
level=info timestamp=2018-07-19T09:24:12.774571Z pos=virt-launcher.go:143 component=virt-launcher msg="Watchdog file created at /var/run/kubevirt/watchdog-files/kubevirt-test-default_testvmitkxht"
level=info timestamp=2018-07-19T09:24:12.777963Z pos=client.go:152 component=virt-launcher msg="Registered libvirt event notify callback"
level=info timestamp=2018-07-19T09:24:12.778563Z pos=virt-launcher.go:60 component=virt-launcher msg="Marked as ready"
caught signal
virt-launcher exited with code 127
Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running
• Failure in Spec Setup (BeforeEach) [45.100 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom MAC address in non-conventional format [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:425
    should configure custom MAC address
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:426

    Expected error:
        <*errors.StatusError | 0xc421084cf0>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146
------------------------------
Pod name: disks-images-provider-bzw9c
Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-z56nl
Pod phase: Running
Pod name: virt-api-7d79764579-gb9bw
Pod phase: Running
Pod name: virt-api-7d79764579-msdkp
Pod phase: Running
2018/07/19 09:23:09 http: TLS handshake error from 10.244.0.1:55482: EOF
2018/07/19 09:23:19 http: TLS handshake error from 10.244.0.1:55516: EOF
2018/07/19 09:23:29 http: TLS handshake error from 10.244.0.1:55552: EOF
2018/07/19 09:23:39 http: TLS handshake error from 10.244.0.1:55582: EOF
2018/07/19 09:23:49 http: TLS handshake error from 10.244.0.1:55612: EOF
2018/07/19 09:23:59 http: TLS handshake error from 10.244.0.1:55644: EOF
2018/07/19 09:24:09 http: TLS handshake error from 10.244.0.1:55680: EOF
2018/07/19 09:24:19 http: TLS handshake error from 10.244.0.1:55712: EOF
2018/07/19 09:24:29 http: TLS handshake error from 10.244.0.1:55742: EOF
2018/07/19 09:24:39 http: TLS handshake error from 10.244.0.1:55772: EOF
2018/07/19 09:24:49 http: TLS handshake error from 10.244.0.1:55806: EOF
2018/07/19 09:24:59 http: TLS handshake error from 10.244.0.1:55842: EOF
2018/07/19 09:25:09 http: TLS handshake error from 10.244.0.1:55872: EOF
2018/07/19 09:25:19 http: TLS handshake error from 10.244.0.1:55902: EOF
2018/07/19 09:25:29 http: TLS handshake error from 10.244.0.1:55934: EOF
Pod name: virt-api-7d79764579-w2776
Pod phase: Running
2018/07/19 09:24:34 http: TLS handshake error from 10.244.0.1:57550: EOF
2018/07/19 09:24:44 http: TLS handshake error from 10.244.0.1:57582: EOF
level=info timestamp=2018-07-19T09:24:44.663042Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:24:44.696509Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:24:47.904819Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:24:54 http: TLS handshake error from 10.244.0.1:57618: EOF
2018/07/19 09:25:04 http: TLS handshake error from 10.244.0.1:57650: EOF
2018/07/19 09:25:14 http: TLS handshake error from 10.244.0.1:57680: EOF
level=info timestamp=2018-07-19T09:25:14.853895Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:25:14.871993Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:25:17.875887Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:25:24 http: TLS handshake error from 10.244.0.1:57710: EOF
level=info timestamp=2018-07-19T09:25:25.728055Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-19T09:25:25.734079Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:25:34 http: TLS handshake error from 10.244.0.1:57744: EOF
Pod name: virt-controller-7d57d96b65-5h64g
Pod phase: Running
level=info timestamp=2018-07-19T09:22:31.196529Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi495xn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi495xn"
level=info timestamp=2018-07-19T09:22:43.402095Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi495xn\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi495xn, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 31268d25-8b35-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi495xn"
level=info timestamp=2018-07-19T09:23:13.623483Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi47z4s kind= uid=4a984859-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:23:13.625420Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi47z4s kind= uid=4a984859-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:23:14.004006Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi47z4s\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi47z4s"
level=info timestamp=2018-07-19T09:23:58.557474Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmitkxht kind= uid=65622438-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:23:58.561900Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmitkxht kind= uid=65622438-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:23:58.903004Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmitkxht\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmitkxht"
level=info timestamp=2018-07-19T09:24:43.790006Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqs7s7 kind= uid=804c8ed0-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:24:43.803533Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqs7s7 kind= uid=804c8ed0-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:24:44.066792Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiqs7s7\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiqs7s7"
level=info timestamp=2018-07-19T09:24:44.103530Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiqs7s7\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiqs7s7"
level=info timestamp=2018-07-19T09:24:58.684713Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiqs7s7\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmiqs7s7, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 804c8ed0-8b35-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiqs7s7"
level=info timestamp=2018-07-19T09:25:28.769930Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqzh6w kind= uid=9b28f757-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:25:28.773984Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqzh6w kind= uid=9b28f757-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
Pod name: virt-controller-7d57d96b65-mwzlh
Pod phase: Running
Pod name: virt-controller-7d57d96b65-nc8jl
Pod phase: Running
level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
Pod name: virt-handler-dj88w
Pod phase: Running
Pod name: virt-handler-zjzfz
Pod phase: Running
level=info timestamp=2018-07-19T09:23:29.037975Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type ADDED"
level=info timestamp=2018-07-19T09:23:29.038670Z pos=vm.go:657 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=Domain uid=4a984859-8b35-11e8-9619-525500d15501 msg="Domain is in state Paused reason Unknown"
level=info timestamp=2018-07-19T09:23:29.683867Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-19T09:23:29.684231Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=Domain uid=4a984859-8b35-11e8-9619-525500d15501 msg="Domain is in state Running reason Unknown"
level=info timestamp=2018-07-19T09:23:29.755671Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-19T09:23:29.775635Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind= uid=4a984859-8b35-11e8-9619-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:23:29.775953Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object."
level=info timestamp=2018-07-19T09:23:29.776144Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-19T09:23:29.797903Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="grace period expired, killing deleted VirtualMachineInstance testvmi47z4s"
level=info timestamp=2018-07-19T09:23:30.022412Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:23:30.022791Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-19T09:23:30.041760Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:23:30.042939Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T09:23:30.041222Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:23:30.043249Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvmiqzh6w-bfm8f
Pod phase: Running
level=info timestamp=2018-07-19T09:25:33.515132Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:25:33.515882Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:25:33.517732Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running
• Failure in Spec Setup (BeforeEach) [45.093 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom MAC address and slirp interface [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:438
    should configure custom MAC address
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:439

    Expected error:
        <*errors.StatusError | 0xc4206a4240>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146
------------------------------
Pod name: disks-images-provider-bzw9c
Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-z56nl
Pod phase: Running
Pod name: virt-api-7d79764579-gb9bw
Pod phase: Running
Pod name: virt-api-7d79764579-msdkp
Pod phase: Running
2018/07/19 09:23:59 http: TLS handshake error from 10.244.0.1:55644: EOF
2018/07/19 09:24:09 http: TLS handshake error from 10.244.0.1:55680: EOF
2018/07/19 09:24:19 http: TLS handshake error from 10.244.0.1:55712: EOF
2018/07/19 09:24:29 http: TLS handshake error from 10.244.0.1:55742: EOF
2018/07/19 09:24:39 http: TLS handshake error from 10.244.0.1:55772: EOF
2018/07/19 09:24:49 http: TLS handshake error from 10.244.0.1:55806: EOF
2018/07/19 09:24:59 http: TLS handshake error from 10.244.0.1:55842: EOF
2018/07/19 09:25:09 http: TLS handshake error from 10.244.0.1:55872: EOF
2018/07/19 09:25:19 http: TLS handshake error from 10.244.0.1:55902: EOF
2018/07/19 09:25:29 http: TLS handshake error from 10.244.0.1:55934: EOF
2018/07/19 09:25:39 http: TLS handshake error from 10.244.0.1:55970: EOF
2018/07/19 09:25:49 http: TLS handshake error from 10.244.0.1:56002: EOF
2018/07/19 09:25:59 http: TLS handshake error from 10.244.0.1:56032: EOF
2018/07/19 09:26:09 http: TLS handshake error from 10.244.0.1:56062: EOF
2018/07/19 09:26:19 http: TLS handshake error from 10.244.0.1:56096: EOF
Pod name: virt-api-7d79764579-w2776
Pod phase: Running
level=info timestamp=2018-07-19T09:25:17.875887Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:25:24 http: TLS handshake error from 10.244.0.1:57710: EOF
level=info timestamp=2018-07-19T09:25:25.728055Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-19T09:25:25.734079Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:25:34 http: TLS handshake error from 10.244.0.1:57744: EOF
2018/07/19 09:25:44 http: TLS handshake error from 10.244.0.1:57780: EOF
level=info timestamp=2018-07-19T09:25:45.139015Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:25:45.142555Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:25:47.951058Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/19 09:25:54 http: TLS handshake error from 10.244.0.1:57810: EOF
2018/07/19 09:26:04 http: TLS handshake error from 10.244.0.1:57840: EOF
2018/07/19 09:26:14 http: TLS handshake error from 10.244.0.1:57872: EOF
level=info timestamp=2018-07-19T09:26:15.314755Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:26:15.317585Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-19T09:26:17.918679Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
Pod name: virt-controller-7d57d96b65-5h64g
Pod phase: Running
level=info timestamp=2018-07-19T09:23:13.625420Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi47z4s kind= uid=4a984859-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:23:14.004006Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi47z4s\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi47z4s"
level=info timestamp=2018-07-19T09:23:58.557474Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmitkxht kind= uid=65622438-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:23:58.561900Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmitkxht kind= uid=65622438-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:23:58.903004Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmitkxht\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmitkxht"
level=info timestamp=2018-07-19T09:24:43.790006Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqs7s7 kind= uid=804c8ed0-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:24:43.803533Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqs7s7 kind= uid=804c8ed0-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:24:44.066792Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiqs7s7\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiqs7s7"
level=info timestamp=2018-07-19T09:24:44.103530Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiqs7s7\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiqs7s7"
level=info timestamp=2018-07-19T09:24:58.684713Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiqs7s7\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmiqs7s7, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 804c8ed0-8b35-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiqs7s7"
level=info timestamp=2018-07-19T09:25:28.769930Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqzh6w kind= uid=9b28f757-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:25:28.773984Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqzh6w kind= uid=9b28f757-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-19T09:25:43.869730Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiqzh6w\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmiqzh6w, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 9b28f757-8b35-11e8-9619-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiqzh6w"
level=info timestamp=2018-07-19T09:26:14.117079Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmis4lp5 kind= uid=b6252aa8-8b35-11e8-9619-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-19T09:26:14.125814Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmis4lp5 kind= uid=b6252aa8-8b35-11e8-9619-525500d15501 msg="Marking VirtualMachineInstance as initialized"
Pod name: virt-controller-7d57d96b65-mwzlh
Pod phase: Running
Pod name: virt-controller-7d57d96b65-nc8jl
Pod phase: Running
level=info timestamp=2018-07-19T09:00:02.506323Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
Pod name: virt-handler-dj88w
Pod phase: Running
Pod name: virt-handler-zjzfz
Pod phase: Running
level=info timestamp=2018-07-19T09:23:29.037975Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type ADDED"
level=info timestamp=2018-07-19T09:23:29.038670Z pos=vm.go:657 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=Domain uid=4a984859-8b35-11e8-9619-525500d15501 msg="Domain is in state Paused reason Unknown"
level=info timestamp=2018-07-19T09:23:29.683867Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-19T09:23:29.684231Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=Domain uid=4a984859-8b35-11e8-9619-525500d15501 msg="Domain is in state Running reason Unknown"
level=info timestamp=2018-07-19T09:23:29.755671Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-19T09:23:29.775635Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind= uid=4a984859-8b35-11e8-9619-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:23:29.775953Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object."
level=info timestamp=2018-07-19T09:23:29.776144Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-19T09:23:29.797903Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="grace period expired, killing deleted VirtualMachineInstance testvmi47z4s"
level=info timestamp=2018-07-19T09:23:30.022412Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:23:30.022791Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-19T09:23:30.041760Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-19T09:23:30.042939Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-19T09:23:30.041222Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-19T09:23:30.043249Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi47z4s kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvmis4lp5-bcnnb
Pod phase: Running
level=info timestamp=2018-07-19T09:26:18.019798Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-19T09:26:18.020063Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-19T09:26:18.021757Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
Pod name: virt-launcher-testvmiwmvcc-jz6cl
Pod phase: Running
• Failure in Spec Setup (BeforeEach) [45.588 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with disabled automatic attachment of interfaces [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:451
    should not configure any external interfaces
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:452

    Expected error:
        <*errors.StatusError | 0xc420140ab0>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:146
------------------------------
Waiting for namespace kubevirt-test-default to be removed, this can take a while ...
panic: test timed out after 1h30m0s goroutine 7016 [running]: testing.(*M).startAlarm.func1() /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1240 +0xfc created by time.goFunc /gimme/.gimme/versions/go1.10.linux.amd64/src/time/sleep.go:172 +0x44 goroutine 1 [chan receive, 90 minutes]: testing.(*T).Run(0xc4209141e0, 0x13079b4, 0x9, 0x1395ce8, 0x480456) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:825 +0x301 testing.runTests.func1(0xc4209140f0) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1063 +0x64 testing.tRunner(0xc4209140f0, 0xc4208bbdf8) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:777 +0xd0 testing.runTests(0xc420896de0, 0x1c31250, 0x1, 0x1, 0x412009) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1061 +0x2c4 testing.(*M).Run(0xc420690080, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:978 +0x171 main.main() _testmain.go:44 +0x151 goroutine 12 [chan receive]: kubevirt.io/kubevirt/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x1c5c800) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/golang/glog/glog.go:879 +0x8b created by kubevirt.io/kubevirt/vendor/github.com/golang/glog.init.0 /root/go/src/kubevirt.io/kubevirt/vendor/github.com/golang/glog/glog.go:410 +0x203 goroutine 13 [syscall, 90 minutes]: os/signal.signal_recv(0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/sigqueue.go:139 +0xa6 os/signal.loop() /gimme/.gimme/versions/go1.10.linux.amd64/src/os/signal/signal_unix.go:22 +0x22 created by os/signal.init.0 /gimme/.gimme/versions/go1.10.linux.amd64/src/os/signal/signal_unix.go:28 +0x41 goroutine 15 [select]: kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/asyncassertion.(*AsyncAssertion).match(0xc420bdb600, 0x14267e0, 0x1c7aea0, 0x412801, 0x0, 0x0, 0x0, 0x1c7aea0) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:139 +0x2e6 
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/asyncassertion.(*AsyncAssertion).Should(0xc420bdb600, 0x14267e0, 0x1c7aea0, 0x0, 0x0, 0x0, 0xc420bdb600) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:48 +0x62 kubevirt.io/kubevirt/tests.removeNamespaces() /root/go/src/kubevirt.io/kubevirt/tests/utils.go:733 +0x2c9 kubevirt.io/kubevirt/tests.AfterTestSuitCleanup() /root/go/src/kubevirt.io/kubevirt/tests/utils.go:319 +0x7c kubevirt.io/kubevirt/tests_test.glob..func11() /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:51 +0x20 kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc4203bdc20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:109 +0x9c kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc4203bdc20, 0x4d6e25b93a7, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:63 +0x13e
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*simpleSuiteNode).Run(0xc42003b0e0, 0x1, 0x1, 0x0, 0x0, 0x1417cc0)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/suite_nodes.go:24 +0x8f
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runAfterSuite(0xc42029aa00, 0x1396900)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:136 +0xd5
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc42029aa00, 0xb)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:69 +0x99
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc420236050, 0x7f38f9bb6660, 0xc4209141e0, 0x1309f75, 0xb, 0xc420896e20, 0x2, 0x2, 0x1434820, 0xc420259d40, ...)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x27c
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x1419e20, 0xc4209141e0, 0x1309f75, 0xb, 0xc420896e00, 0x2, 0x2, 0x2)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:218 +0x258
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x1419e20, 0xc4209141e0, 0x1309f75, 0xb, 0xc4202f5b00, 0x1, 0x1, 0x1)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:206 +0xab
kubevirt.io/kubevirt/tests_test.TestTests(0xc4209141e0)
	/root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:43 +0xaa
testing.tRunner(0xc4209141e0, 0x1395ce8)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
	/gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:824 +0x2e0

goroutine 16 [chan receive, 90 minutes]:
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).registerForInterrupts(0xc42029aa00)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:220 +0xc0
created by kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:59 +0x60

goroutine 28 [select, 90 minutes, locked to thread]:
runtime.gopark(0x1397de0, 0x0, 0x130451f, 0x6, 0x18, 0x1)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/proc.go:291 +0x11a
runtime.selectgo(0xc42046f750, 0xc4200de300)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/select.go:392 +0xe50
runtime.ensureSigM.func1()
	/gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/signal_unix.go:549 +0x1f4
runtime.goexit()
	/gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/asm_amd64.s:2361 +0x1

goroutine 54 [IO wait]:
internal/poll.runtime_pollWait(0x7f38f9c45f00, 0x72, 0xc420f1f850)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc4205e1f18, 0x72, 0xffffffffffffff00, 0x141afe0, 0x1b487d0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_poll_runtime.go:85 +0x9b
internal/poll.(*pollDesc).waitRead(0xc4205e1f18, 0xc420676000, 0x8000, 0x8000)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Read(0xc4205e1f00, 0xc420676000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_unix.go:157 +0x17d
net.(*netFD).Read(0xc4205e1f00, 0xc420676000, 0x8000, 0x8000, 0x0, 0x8, 0x7ffb)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc4204de540, 0xc420676000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/net/net.go:176 +0x6a
crypto/tls.(*block).readFromUntil(0xc4200ed080, 0x7f38f9bb68e0, 0xc4204de540, 0x5, 0xc4204de540, 0x0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:493 +0x96
crypto/tls.(*Conn).readRecord(0xc420238e00, 0x1397f17, 0xc420238f20, 0x20)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:595 +0xe0
crypto/tls.(*Conn).Read(0xc420238e00, 0xc4202bd000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:1156 +0x100
bufio.(*Reader).Read(0xc42089a720, 0xc4203a0e38, 0x9, 0x9, 0xc420b845f8, 0xc420a04fa0, 0xc420f1fd10)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/bufio/bufio.go:216 +0x238
io.ReadAtLeast(0x1417980, 0xc42089a720, 0xc4203a0e38, 0x9, 0x9, 0x9, 0xc420f1fce0, 0xc420f1fce0, 0x406614)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:309 +0x86
io.ReadFull(0x1417980, 0xc42089a720, 0xc4203a0e38, 0x9, 0x9, 0xc420b845a0, 0xc420f1fd10, 0xc400002901)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:327 +0x58
kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.readFrameHeader(0xc4203a0e38, 0x9, 0x9, 0x1417980, 0xc42089a720, 0x0, 0xc400000000, 0x87d64d, 0xc420f1ffb0)
	/root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/frame.go:237 +0x7b
kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc4203a0e00, 0xc420b0f710, 0x0, 0x0, 0x0)
	/root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/frame.go:492 +0xa4
kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc420f1ffb0, 0x1396bf8, 0xc4204677b0)
	/root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:1428 +0x8e
kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc420273380)
	/root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:1354 +0x76
created by kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*Transport).newClientConn
	/root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:579 +0x651

goroutine 4635 [chan send, 46 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc420ef4930)
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

goroutine 5509 [chan send, 40 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc420e702a0)
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

goroutine 5359 [chan send, 40 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc420929f50)
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

goroutine 4144 [chan send, 44 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc420befc20)
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

goroutine 2657 [chan send, 67 minutes]:
kubevirt.io/kubevirt/tests_test.glob..func23.1.2.1.1(0x1450b00, 0xc4204ddb00, 0xc4200f64c0, 0xc420aad500, 0xc4200f6d60, 0xc4200f6d70)
	/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:81 +0x138
created by kubevirt.io/kubevirt/tests_test.glob..func23.1.2.1
	/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:73 +0x386
goroutine 4933 [chan send, 45 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc420d0c9f0)
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

goroutine 4600 [chan send, 47 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc420beec90)
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

goroutine 5609 [chan send, 39 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc4207c4d20)
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

make: *** [functest] Error 2
+ make cluster-down
./cluster/down.sh