+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2
+ [[ k8s-1.11.0-release =~ openshift-.* ]]
+ [[ k8s-1.11.0-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.11.0
+ KUBEVIRT_PROVIDER=k8s-1.11.0
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/08/06 19:03:51 Waiting for host: 192.168.66.101:22
2018/08/06 19:03:54 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/06 19:04:02 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/06 19:04:08 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: connection refused. Sleeping 5s
2018/08/06 19:04:13 Connected to tcp://192.168.66.101:22
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 1 -eq 0 ]]
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0806 19:04:14.702897 1302 feature_gate.go:230] feature gates: &{map[]}
I0806 19:04:14.811925 1302 kernel_validator.go:81] Validating kernel version
I0806 19:04:14.812309 1302 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 59.517791 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation
[bootstraptoken] using token: abcdef.1234567890123456
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:bcee97c1027b09d9d53f79cfe56a69a6228412f4a60edbb1b1adf4217272ccdd + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.extensions/kube-flannel-ds created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node/node01 untainted + kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f /tmp/local-volume.yaml storageclass.storage.k8s.io/local created configmap/local-storage-config created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-pv-binding created clusterrole.rbac.authorization.k8s.io/local-storage-provisioner-node-clusterrole created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-node-binding created role.rbac.authorization.k8s.io/local-storage-provisioner-jobs-role created rolebinding.rbac.authorization.k8s.io/local-storage-provisioner-jobs-rolebinding created serviceaccount/local-storage-admin created daemonset.extensions/local-volume-provisioner created 2018/08/06 19:05:37 Waiting for host: 192.168.66.102:22 2018/08/06 19:05:40 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/08/06 19:05:48 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/08/06 19:05:54 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: connection refused. Sleeping 5s 2018/08/06 19:05:59 Connected to tcp://192.168.66.102:22 ++ systemctl status docker ++ grep active ++ wc -l + [[ 1 -eq 0 ]] + kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true [preflight] running pre-flight checks [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}] you can solve this problem with following methods: 1. Run 'modprobe -- ' to load missing kernel modules; 2. 
 2. Provide the missing builtin kernel ipvs support
I0806 19:06:00.170330 1303 kernel_validator.go:81] Validating kernel version
I0806 19:06:00.170813 1303 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 38739968 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    1m        v1.11.0
node02    Ready     <none>    25s       v1.11.0
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    1m        v1.11.0
node02    Ready     <none>    26s       v1.11.0
+ make cluster-sync
./cluster/build.sh
Building ...
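The gate between cluster-up and cluster-sync above is a hand-rolled readiness poll: list the nodes without headers, grep for NotReady, and proceed only once the grep comes back empty. A minimal standalone bash sketch of that pattern (illustrative only: it assumes kubectl is already pointed at this cluster, and the 300s/5s budget is an assumption, not a value taken from this log):

    # Poll until no node reports NotReady; give up after an assumed timeout.
    timeout=300; waited=0
    while kubectl get nodes --no-headers | grep -q NotReady; do
        [ "$waited" -ge "$timeout" ] && { echo 'nodes never became Ready' >&2; exit 1; }
        sleep 5; waited=$((waited + 5))
    done
    echo 'Nodes are ready:'
    kubectl get nodes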
Untagged: localhost:33387/kubevirt/virt-controller:devel
Untagged: localhost:33387/kubevirt/virt-controller@sha256:438cf92f85bcb947841925477cea383b8eee79de3423f1dac2dcbbf9ae28c96e
Deleted: sha256:5b9cd98f40042d2cd8e1ad23500209c7c8ed092d9be31c3fa7bab5c54d3c416f
Untagged: localhost:33387/kubevirt/virt-launcher:devel
Untagged: localhost:33387/kubevirt/virt-launcher@sha256:fd7f9742a14f8cce79b00e6cc0786551e79c02d5ba943b02132316797da8ce92
Deleted: sha256:9af31fceba743c8c525199f0a62c99743a04c8b1a50529404feba481a5a3b5ce
Untagged: localhost:33387/kubevirt/virt-handler:devel
Untagged: localhost:33387/kubevirt/virt-handler@sha256:181c5cf56fdffcf30c9e22ffdda542d041af35633cf18a60f20aa006ca6ce0bd
Deleted: sha256:5f1235d0d6c17b3489ee1c57328bb9d8b374b3d5b80b53a72119996f8f6f5ed4
Untagged: localhost:33387/kubevirt/virt-api:devel
Untagged: localhost:33387/kubevirt/virt-api@sha256:abef94b422125262318a7e7d3a6b8e1618ef42863c5f4f5d13ac0190858f1877
Deleted: sha256:085b8204250f8a1fae9069d19bd84f27d399b1aa8d2ddbd4b59cdb3819783351
Untagged: localhost:33387/kubevirt/subresource-access-test:devel
Untagged: localhost:33387/kubevirt/subresource-access-test@sha256:fcd4c183ba42228e025a278f899aab3580dfd0a309e3108d55b3c52d96808df3
Deleted: sha256:69ab4d16cf65023cdb653ebc0cc9c79d11a3c90859b346aaa9363a2065ac846b
Untagged: localhost:33387/kubevirt/example-hook-sidecar:devel
Untagged: localhost:33387/kubevirt/example-hook-sidecar@sha256:01f4cbf9147b66d3194ad7bb9b13d6767cef52c08d7bb9545555e8b01321e6d3
Deleted: sha256:4c0d1aaea789ee0e39a0f4922ae3bde5ffb9d690dcf81bab4617fc28c65f506f
Sending build context to Docker daemon 5.632 kB
Step 1/12 : FROM fedora:28
 ---> cc510acfcd70
Step 2/12 : ENV LIBVIRT_VERSION 4.2.0
 ---> Using cache
 ---> b1088795aeb6
Step 3/12 : RUN curl --output /etc/yum.repos.d/fedora-virt-preview.repo https://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo
 ---> Using cache
 ---> 88f43b954f9f
Step 4/12 : RUN dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img protobuf-compiler && dnf -y clean all
 ---> Using cache
 ---> 06c70b43758a
Step 5/12 : ENV GIMME_GO_VERSION 1.10
 ---> Using cache
 ---> e5b3ae738662
Step 6/12 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> 3e3d43f49e45
Step 7/12 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> 73e8a3aa263a
Step 8/12 : ADD rsyncd.conf /etc/rsyncd.conf
 ---> Using cache
 ---> bc244b1c712b
Step 9/12 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/mattn/goveralls && go get -u github.com/Masterminds/glide && go get golang.org/x/tools/cmd/goimports && git clone https://github.com/mvdan/sh.git $GOPATH/src/mvdan.cc/sh && cd /go/src/mvdan.cc/sh/cmd/shfmt && git checkout v2.5.0 && go get mvdan.cc/sh/cmd/shfmt && go install && go get -u github.com/golang/mock/gomock && go get -u github.com/rmohr/mock/mockgen && go get -u github.com/rmohr/go-swagger-utils/swagger-doc && go get -u github.com/onsi/ginkgo/ginkgo && go get -u -d k8s.io/code-generator/cmd/deepcopy-gen && go get -u -d k8s.io/code-generator/cmd/defaulter-gen && go get -u -d k8s.io/code-generator/cmd/openapi-gen && cd /go/src/k8s.io/code-generator/cmd/deepcopy-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/defaulter-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/openapi-gen && git checkout release-1.9 && go install && go get -u -d github.com/golang/protobuf/protoc-gen-go && cd /go/src/github.com/golang/protobuf/protoc-gen-go && git checkout 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9 && go install
 ---> Using cache
 ---> 4cd1786b2bc8
Step 10/12 : RUN pip install j2cli
 ---> Using cache
 ---> b51a532fa53a
Step 11/12 : ADD entrypoint.sh /entrypoint.sh
 ---> Using cache
 ---> 3bc0185264f6
Step 12/12 : ENTRYPOINT /entrypoint.sh
 ---> Using cache
 ---> dcf2b21fa2ed
Successfully built dcf2b21fa2ed
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
Sending build context to Docker daemon 5.632 kB
Step 1/12 : FROM fedora:28
 ---> cc510acfcd70
Step 2/12 : ENV LIBVIRT_VERSION 4.2.0
 ---> Using cache
 ---> b1088795aeb6
Step 3/12 : RUN curl --output /etc/yum.repos.d/fedora-virt-preview.repo https://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo
 ---> Using cache
 ---> 88f43b954f9f
Step 4/12 : RUN dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img protobuf-compiler && dnf -y clean all
 ---> Using cache
 ---> 06c70b43758a
Step 5/12 : ENV GIMME_GO_VERSION 1.10
 ---> Using cache
 ---> e5b3ae738662
Step 6/12 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> 3e3d43f49e45
Step 7/12 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> 73e8a3aa263a
Step 8/12 : ADD rsyncd.conf /etc/rsyncd.conf
 ---> Using cache
 ---> bc244b1c712b
Step 9/12 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/mattn/goveralls && go get -u github.com/Masterminds/glide && go get golang.org/x/tools/cmd/goimports && git clone https://github.com/mvdan/sh.git $GOPATH/src/mvdan.cc/sh && cd /go/src/mvdan.cc/sh/cmd/shfmt && git checkout v2.5.0 && go get mvdan.cc/sh/cmd/shfmt && go install && go get -u github.com/golang/mock/gomock && go get -u github.com/rmohr/mock/mockgen && go get -u github.com/rmohr/go-swagger-utils/swagger-doc && go get -u github.com/onsi/ginkgo/ginkgo && go get -u -d k8s.io/code-generator/cmd/deepcopy-gen && go get -u -d k8s.io/code-generator/cmd/defaulter-gen && go get -u -d k8s.io/code-generator/cmd/openapi-gen && cd /go/src/k8s.io/code-generator/cmd/deepcopy-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/defaulter-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/openapi-gen && git checkout release-1.9 && go install && go get -u -d github.com/golang/protobuf/protoc-gen-go && cd /go/src/github.com/golang/protobuf/protoc-gen-go && git checkout 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9 && go install
 ---> Using cache
 ---> 4cd1786b2bc8
Step 10/12 : RUN pip install j2cli
 ---> Using cache
 ---> b51a532fa53a
Step 11/12 : ADD entrypoint.sh /entrypoint.sh
 ---> Using cache
 ---> 3bc0185264f6
Step 12/12 : ENTRYPOINT /entrypoint.sh
 ---> Using cache
 ---> dcf2b21fa2ed
Successfully built dcf2b21fa2ed
go version go1.10 linux/amd64
go version go1.10 linux/amd64
find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory
Compiling tests...
compiled tests.test
hack/build-docker.sh build
Sending build context to Docker daemon 40.39 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> bfe77d5699ed
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller
 ---> Using cache
 ---> b00c84523b53
Step 4/8 : WORKDIR /home/virt-controller
 ---> Using cache
 ---> b76b8bd8cd39
Step 5/8 : USER 1001
 ---> Using cache
 ---> b6d9ad9ed232
Step 6/8 : COPY virt-controller /usr/bin/virt-controller
 ---> d52eb1f0c92a
Removing intermediate container 9478413fbb35
Step 7/8 : ENTRYPOINT /usr/bin/virt-controller
 ---> Running in b7e146abf9d5
 ---> 6f40c653377f
Removing intermediate container b7e146abf9d5
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release0" '' "virt-controller" ''
 ---> Running in 9db866d85e6c
 ---> d9470ae6015b
Removing intermediate container 9db866d85e6c
Successfully built d9470ae6015b
Sending build context to Docker daemon 43.32 MB
Step 1/9 : FROM kubevirt/libvirt:4.2.0
 ---> 5f0bfe81a3e0
Step 2/9 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 945996802736
Step 3/9 : RUN dnf -y install socat genisoimage && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107
 ---> Using cache
 ---> 1dcd22d08d0e
Step 4/9 : COPY virt-launcher /usr/bin/virt-launcher
 ---> 0eff6b0fdbf3
Removing intermediate container a85ba5ca7db0
Step 5/9 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64
 ---> Running in 69dd5f61f65b
 ---> ca0235cfa852
Removing intermediate container 69dd5f61f65b
Step 6/9 : RUN mkdir -p /usr/share/kubevirt/virt-launcher
 ---> Running in f633d57e4c27
 ---> fa10a1a630f5
Removing intermediate container f633d57e4c27
Step 7/9 : COPY sock-connector /usr/share/kubevirt/virt-launcher/
 ---> 81364cb8e2bd
Removing intermediate container c723c1507b3f
Step 8/9 : ENTRYPOINT /usr/bin/virt-launcher
 ---> Running in 6f6b3c3965f1
 ---> 54e3037ea9e4
Removing intermediate container 6f6b3c3965f1
Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release0" '' "virt-launcher" ''
 ---> Running in 575a5362bb71
 ---> 5725c40336b8
Removing intermediate container 575a5362bb71
Successfully built 5725c40336b8
Sending build context to Docker daemon 41.69 MB
Step 1/5 : FROM fedora:28
 ---> cc510acfcd70
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> bfe77d5699ed
Step 3/5 : COPY virt-handler /usr/bin/virt-handler
 ---> 96fda3e01f90
Removing intermediate container 98dba63fd1c7
Step 4/5 : ENTRYPOINT /usr/bin/virt-handler
 ---> Running in e9fd996a553d
 ---> 0891291da1cb
Removing intermediate container e9fd996a553d
Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release0" '' "virt-handler" ''
 ---> Running in a997e6fff482
 ---> 9f50695872a1
Removing intermediate container a997e6fff482
Successfully built 9f50695872a1
Sending build context to Docker daemon 38.81 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> bfe77d5699ed
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api
 ---> Using cache
 ---> ed1ebf600ee1
Step 4/8 : WORKDIR /home/virt-api
 ---> Using cache
 ---> 0769dad023e5
Step 5/8 : USER 1001
 ---> Using cache
 ---> 0cb65afb0c2b
Step 6/8 : COPY virt-api /usr/bin/virt-api
 ---> 48e6d490865f
Removing intermediate container 99e8bb921ffe
Step 7/8 : ENTRYPOINT /usr/bin/virt-api
 ---> Running in 50ccb2e80f49
 ---> f8c9885a02a8
Removing intermediate container 50ccb2e80f49
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release0" '' "virt-api" ''
 ---> Running in 74a8be970644
 ---> ca8cec1b00a0
Removing intermediate container 74a8be970644
Successfully built ca8cec1b00a0
Sending build context to Docker daemon 4.096 kB
Step 1/7 : FROM fedora:28
 ---> cc510acfcd70
Step 2/7 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> bfe77d5699ed
Step 3/7 : ENV container docker
 ---> Using cache
 ---> 62847a2a1fa8
Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img
 ---> Using cache
 ---> 02134835a6aa
Step 5/7 : ADD entrypoint.sh /
 ---> Using cache
 ---> ec0843818da7
Step 6/7 : CMD /entrypoint.sh
 ---> Using cache
 ---> 754029bb4bd2
Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.11.0-release0" ''
 ---> Using cache
 ---> 2ce1eec77729
Successfully built 2ce1eec77729
Sending build context to Docker daemon 2.56 kB
Step 1/5 : FROM fedora:28
 ---> cc510acfcd70
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> bfe77d5699ed
Step 3/5 : ENV container docker
 ---> Using cache
 ---> 62847a2a1fa8
Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all
 ---> Using cache
 ---> 207487abe7b2
Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release0" '' "vm-killer" ''
 ---> Using cache
 ---> 51b62a0ecd10
Successfully built 51b62a0ecd10
Sending build context to Docker daemon 5.12 kB
Step 1/7 : FROM debian:sid
 ---> 68f33cf86aab
Step 2/7 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> 5734d749eb5c
Step 3/7 : ENV container docker
 ---> Using cache
 ---> f8775a77966f
Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> 1a40cf222a61
Step 5/7 : ADD entry-point.sh /
 ---> Using cache
 ---> 77b545d92fe7
Step 6/7 : CMD /entry-point.sh
 ---> Using cache
 ---> dfe20d463305
Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release0" '' "registry-disk-v1alpha" ''
 ---> Using cache
 ---> d47274c50623
Successfully built d47274c50623
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33713/kubevirt/registry-disk-v1alpha:devel
 ---> d47274c50623
Step 2/4 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> 94c21c65872a
Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img
 ---> Using cache
 ---> 48605500da55
Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.11.0-release0" ''
 ---> Using cache
 ---> 4b8200fc9235
Successfully built 4b8200fc9235
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33713/kubevirt/registry-disk-v1alpha:devel
 ---> d47274c50623
Step 2/4 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 8e698fc18d9d
Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2
 ---> Using cache
 ---> e9c68c4dc91c
Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.11.0-release0" ''
 ---> Using cache
 ---> 98232324771d
Successfully built 98232324771d
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33713/kubevirt/registry-disk-v1alpha:devel
 ---> d47274c50623
Step 2/4 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 8e698fc18d9d
Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso
 ---> Using cache
 ---> 38176d1a3214
Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.11.0-release0" ''
 ---> Using cache
 ---> 8c0a79fc816d
Successfully built 8c0a79fc816d
Sending build context to Docker daemon 35.59 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> bfe77d5699ed
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl
 ---> Using cache
 ---> 985fe391c056
Step 4/8 : WORKDIR /home/virtctl
 ---> Using cache
 ---> 3b2cae8ac543
Step 5/8 : USER 1001
 ---> Using cache
 ---> 0c06e5b4a900
Step 6/8 : COPY subresource-access-test /subresource-access-test
 ---> ce9065509261
Removing intermediate container 5cf188ba0b3b
Step 7/8 : ENTRYPOINT /subresource-access-test
 ---> Running in d89cab2e76be
 ---> 4ce58c008d58
Removing intermediate container d89cab2e76be
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release0" '' "subresource-access-test" ''
 ---> Running in 5426a2567578
 ---> 4826befab08e
Removing intermediate container 5426a2567578
Successfully built 4826befab08e
Sending build context to Docker daemon 3.072 kB
Step 1/9 : FROM fedora:28
 ---> cc510acfcd70
Step 2/9 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> bfe77d5699ed
Step 3/9 : ENV container docker
 ---> Using cache
 ---> 62847a2a1fa8
Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all
 ---> Using cache
 ---> d3456b1644b1
Step 5/9 : ENV GIMME_GO_VERSION 1.9.2
 ---> Using cache
 ---> 0ba81fddbba1
Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> 5d33abe3f819
Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> 783826523be1
Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli
 ---> Using cache
 ---> 711bc8d15952
Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release0" '' "winrmcli" ''
 ---> Using cache
 ---> 671374ffcc63
Successfully built 671374ffcc63
Sending build context to Docker daemon 36.8 MB
Step 1/5 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> e3238544ad97
Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar
 ---> aa9080e63797
Removing intermediate container f127bf11b31e
Step 4/5 : ENTRYPOINT /example-hook-sidecar
 ---> Running in e8ff60b196d2
 ---> ca0c61b16ea3
Removing intermediate container e8ff60b196d2
Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-k8s-1.11.0-release0" ''
 ---> Running in 6c45db3916ed
 ---> 120d986885eb
Removing intermediate container 6c45db3916ed
Successfully built 120d986885eb
hack/build-docker.sh push
The push refers to a repository [localhost:33713/kubevirt/virt-controller]
7defce7c0039: Preparing
aa89340cf7a8: Preparing
891e1e4ef82a: Preparing
aa89340cf7a8: Pushed
7defce7c0039: Pushed
891e1e4ef82a: Pushed
devel: digest: sha256:658f136eb45c7d66f039b407681a271b35814250a72fcb1e9545cbe3757a3d34 size: 949
The push refers to a repository [localhost:33713/kubevirt/virt-launcher]
5a41481eecee: Preparing
9b48fbe542dd: Preparing
f2ce5c804edb: Preparing
9b4fddeb54f2: Preparing
af293cb2890d: Preparing
da38cf808aa5: Preparing
b83399358a92: Preparing
186d8b3e4fd8: Preparing
fa6154170bf5: Preparing
5eefb9960a36: Preparing
891e1e4ef82a: Preparing
da38cf808aa5: Waiting
fa6154170bf5: Waiting
891e1e4ef82a: Waiting
186d8b3e4fd8: Waiting
9b48fbe542dd: Pushed
5a41481eecee: Pushed
b83399358a92: Pushed
da38cf808aa5: Pushed
186d8b3e4fd8: Pushed
fa6154170bf5: Pushed
891e1e4ef82a: Mounted from kubevirt/virt-controller
f2ce5c804edb: Pushed
af293cb2890d: Pushed
9b4fddeb54f2: Pushed
5eefb9960a36: Pushed
devel: digest: sha256:6d3c87ecfd369c90237eda9f7f1721580f5132d05034bc96d54cf3e0f2a43e6b size: 2620
The push refers to a repository [localhost:33713/kubevirt/virt-handler]
5e4b2483a98d: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-launcher
5e4b2483a98d: Pushed
devel: digest: sha256:ee10e6149e69531661aed7d9edfd3c5c0446bb4487c55beac0f903799f64dcf5 size: 741
The push refers to a repository [localhost:33713/kubevirt/virt-api]
6b5abdaf128b: Preparing
82fc744c99b4: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-handler
82fc744c99b4: Pushed
6b5abdaf128b: Pushed
devel: digest: sha256:cec6b8f5c59c3c8fc3e365843fa8411ef6f1ee2578f4081a499e31a78187a650 size: 948
The push refers to a repository [localhost:33713/kubevirt/disks-images-provider]
71ad31feb2c5: Preparing
21d4b721776e: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-api
71ad31feb2c5: Pushed
21d4b721776e: Pushed
devel: digest: sha256:cd187e542554d6c701bfb0cb08f65a76d23fa75be34176dc4188c76ec468c9f4 size: 948
The push refers to a repository [localhost:33713/kubevirt/vm-killer]
c4cfadeeaf5f: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/disks-images-provider
c4cfadeeaf5f: Pushed
devel: digest: sha256:72afae217b15b5f36f1509e4a78d76f16b21e607b318d2e701cfe8567738c9dc size: 740
The push refers to a repository [localhost:33713/kubevirt/registry-disk-v1alpha]
661cce8d8e52: Preparing
41e0baba3077: Preparing
25edbec0eaea: Preparing
661cce8d8e52: Pushed
41e0baba3077: Pushed
25edbec0eaea: Pushed
devel: digest: sha256:4beab74b750c384fed6e9272292cd4f3885cf63160b4cc8be85dcb7eb12fd910 size: 948
The push refers to a repository [localhost:33713/kubevirt/cirros-registry-disk-demo]
796d82cd42db: Preparing
661cce8d8e52: Preparing
41e0baba3077: Preparing
25edbec0eaea: Preparing
661cce8d8e52: Mounted from kubevirt/registry-disk-v1alpha
25edbec0eaea: Mounted from kubevirt/registry-disk-v1alpha
41e0baba3077: Mounted from kubevirt/registry-disk-v1alpha
796d82cd42db: Pushed
devel: digest: sha256:caa473d35dda5da1c955c07c18584bca173fc3ac1655c04509b42465d78e6565 size: 1160
The push refers to a repository [localhost:33713/kubevirt/fedora-cloud-registry-disk-demo]
3cf3799e71d4: Preparing
661cce8d8e52: Preparing
41e0baba3077: Preparing
25edbec0eaea: Preparing
41e0baba3077: Mounted from kubevirt/cirros-registry-disk-demo
25edbec0eaea: Mounted from kubevirt/cirros-registry-disk-demo
661cce8d8e52: Mounted from kubevirt/cirros-registry-disk-demo
3cf3799e71d4: Pushed
devel: digest: sha256:949d4de4d95938a99ac539d20780234a3f06cc98e3d0113d383d45e98b360c10 size: 1161
The push refers to a repository [localhost:33713/kubevirt/alpine-registry-disk-demo]
c59c11514491: Preparing
661cce8d8e52: Preparing
41e0baba3077: Preparing
25edbec0eaea: Preparing
41e0baba3077: Mounted from kubevirt/fedora-cloud-registry-disk-demo
25edbec0eaea: Mounted from kubevirt/fedora-cloud-registry-disk-demo
661cce8d8e52: Mounted from kubevirt/fedora-cloud-registry-disk-demo
c59c11514491: Pushed
devel: digest: sha256:f95f7a6825d39ae4570cd8b9e1fb8d77f390e328d9d8a0e516f222809071b61c size: 1160
The push refers to a repository [localhost:33713/kubevirt/subresource-access-test]
e23c793145cf: Preparing
25cb73590a9d: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/vm-killer
25cb73590a9d: Pushed
e23c793145cf: Pushed
devel: digest: sha256:151f255f2a52c3ffeed3f972d561b2f81e321bde4bbef43d5377279cc152e6ff size: 948
The push refers to a repository [localhost:33713/kubevirt/winrmcli]
f8083e002d0b: Preparing
53c709abc882: Preparing
9ca98a0f492b: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/subresource-access-test
f8083e002d0b: Pushed
9ca98a0f492b: Pushed
53c709abc882: Pushed
devel: digest: sha256:a1fae1967d100edf5b23924a81a158c98d867422874226220754a604b2c256af size: 1165
The push refers to a repository [localhost:33713/kubevirt/example-hook-sidecar]
a9548fb7a4ec: Preparing
39bae602f753: Preparing
a9548fb7a4ec: Pushed
39bae602f753: Pushed
devel: digest: sha256:b2850b30f509862ce024fad95d633f3052ac5f0ddf98ac40b4513241fe4cea1b size: 740
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt'
Done
./cluster/clean.sh
+ source hack/common.sh
++++ dirname 'hack/common.sh[0]'
+++ cd hack/../
+++ pwd
++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt
++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out
++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/vendor
++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/cmd
++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/tests
++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/apidocs
++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/manifests
++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/templates/manifests
++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/client-python
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ KUBEVIRT_NUM_NODES=2
++ '[' -z kubevirt-functional-tests-k8s-1.11.0-release ']'
++ provider_prefix=kubevirt-functional-tests-k8s-1.11.0-release0
++ job_prefix=kubevirt-functional-tests-k8s-1.11.0-release0
+++ kubevirt_version
+++ '[' -n '' ']'
+++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/.git ']'
++++ git describe --always --tags
+++ echo v0.7.0-193-g6e46eba
++ KUBEVIRT_VERSION=v0.7.0-193-g6e46eba
+ source cluster/k8s-1.11.0/provider.sh
++ set -e
++ image=k8s-1.11.0@sha256:6c1caf5559eb02a144bf606de37eb0194c06ace4d77ad4561459f3bde876151c
++ source cluster/ephemeral-provider-common.sh
+++ set -e
+++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a'
+ source hack/config.sh
++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace image_pull_policy
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh
+++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar'
+++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar'
+++ docker_prefix=kubevirt
+++ docker_tag=latest
+++ master_ip=192.168.200.2
+++ network_provider=flannel
+++ namespace=kube-system
+++ image_pull_policy=IfNotPresent
++ test -f hack/config-provider-k8s-1.11.0.sh
++ source hack/config-provider-k8s-1.11.0.sh
+++ master_ip=127.0.0.1
+++ docker_tag=devel
+++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig
+++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl
+++ docker_prefix=localhost:33713/kubevirt
+++ manifest_docker_prefix=registry:5000/kubevirt
++ test -f hack/config-local.sh
++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace image_pull_policy
+ echo 'Cleaning up ...'
Cleaning up ...
+ cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers
+ grep foregroundDeleteVirtualMachine
+ read p
error: the server doesn't have a resource type "vmis"
+ _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0
No resources found
+ _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0
No resources found
+ _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0
No resources found
+ namespaces=(default ${namespace})
+ for i in '${namespaces[@]}'
+ _kubectl -n default delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n default delete deployment -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete deployment -l kubevirt.io
No resources found
+ _kubectl -n default delete rs -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete rs -l kubevirt.io
No resources found
+ _kubectl -n default delete services -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete services -l kubevirt.io
No resources found
+ _kubectl -n default delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io
No resources found
+ _kubectl -n default delete secrets -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete secrets -l kubevirt.io
No resources found
+ _kubectl -n default delete pv -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete pv -l kubevirt.io
No resources found
+ _kubectl -n default delete pvc -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete pvc -l kubevirt.io
No resources found
+ _kubectl -n default delete ds -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete ds -l kubevirt.io
No resources found
+ _kubectl -n default delete customresourcedefinitions -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete customresourcedefinitions -l kubevirt.io
No resources found
+ _kubectl -n default delete pods -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete pods -l kubevirt.io
No resources found
+ _kubectl -n default delete clusterrolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete clusterrolebinding -l kubevirt.io
No resources found
+ _kubectl -n default delete rolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete rolebinding -l kubevirt.io
No resources found
+ _kubectl -n default delete roles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete roles -l kubevirt.io
No resources found
+ _kubectl -n default delete clusterroles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete clusterroles -l kubevirt.io
No resources found
+ _kubectl -n default delete serviceaccounts -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete serviceaccounts -l kubevirt.io
No resources found
++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io
++ wc -l
++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ cluster/k8s-1.11.0/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io
No resources found.
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found
+ '[' 0 -gt 0 ']'
+ for i in '${namespaces[@]}'
+ _kubectl -n kube-system delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete deployment -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete deployment -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete rs -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete rs -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete services -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete services -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete secrets -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete secrets -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pv -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete pv -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pvc -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete pvc -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete ds -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete ds -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pods -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete pods -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete rolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete rolebinding -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete roles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete roles -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete clusterroles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterroles -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete serviceaccounts -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io
No resources found
++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io
++ wc -l
++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ cluster/k8s-1.11.0/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io
No resources found.
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found
+ '[' 0 -gt 0 ']'
+ sleep 2
+ echo Done
Done
./cluster/deploy.sh
+ source hack/common.sh
++++ dirname 'hack/common.sh[0]'
+++ cd hack/../
+++ pwd
++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt
++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out
++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/vendor
++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/cmd
++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/tests
++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/apidocs
++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/manifests
++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/templates/manifests
++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/client-python
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ KUBEVIRT_NUM_NODES=2
++ '[' -z kubevirt-functional-tests-k8s-1.11.0-release ']'
++ provider_prefix=kubevirt-functional-tests-k8s-1.11.0-release0
++ job_prefix=kubevirt-functional-tests-k8s-1.11.0-release0
+++ kubevirt_version
+++ '[' -n '' ']'
+++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/.git ']'
++++ git describe --always --tags
+++ echo v0.7.0-193-g6e46eba
++ KUBEVIRT_VERSION=v0.7.0-193-g6e46eba
+ source cluster/k8s-1.11.0/provider.sh
++ set -e
++ image=k8s-1.11.0@sha256:6c1caf5559eb02a144bf606de37eb0194c06ace4d77ad4561459f3bde876151c
++ source cluster/ephemeral-provider-common.sh
+++ set -e
+++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a'
+ source hack/config.sh
++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace image_pull_policy
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh
+++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar'
+++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar'
+++ docker_prefix=kubevirt
+++ docker_tag=latest
+++ master_ip=192.168.200.2
+++ network_provider=flannel
+++ namespace=kube-system
+++ image_pull_policy=IfNotPresent
++ test -f hack/config-provider-k8s-1.11.0.sh
++ source hack/config-provider-k8s-1.11.0.sh
+++ master_ip=127.0.0.1
+++ docker_tag=devel
+++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig
+++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl
+++ docker_prefix=localhost:33713/kubevirt
+++ manifest_docker_prefix=registry:5000/kubevirt
++ test -f hack/config-local.sh
++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace image_pull_policy
+ echo 'Deploying ...'
Deploying ...
+ [[ -z k8s-1.11.0-release ]]
+ [[ k8s-1.11.0-release =~ .*-dev ]]
+ [[ k8s-1.11.0-release =~ .*-release ]]
+ for manifest in '${MANIFESTS_OUT_DIR}/release/*'
+ [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]]
+ continue
+ for manifest in '${MANIFESTS_OUT_DIR}/release/*'
+ [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]]
+ _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml
clusterrole.rbac.authorization.k8s.io/kubevirt.io:admin created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:edit created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:view created
serviceaccount/kubevirt-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver-auth-delegator created
rolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created
role.rbac.authorization.k8s.io/kubevirt-apiserver created
clusterrole.rbac.authorization.k8s.io/kubevirt-apiserver created
clusterrole.rbac.authorization.k8s.io/kubevirt-controller created
serviceaccount/kubevirt-controller created
serviceaccount/kubevirt-privileged created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller-cluster-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-privileged-cluster-admin created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:default created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt.io:default created
service/virt-api created
deployment.extensions/virt-api created
deployment.extensions/virt-controller created
daemonset.extensions/virt-handler created
customresourcedefinition.apiextensions.k8s.io/virtualmachineinstances.kubevirt.io created
customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancereplicasets.kubevirt.io created
customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancepresets.kubevirt.io created
customresourcedefinition.apiextensions.k8s.io/virtualmachines.kubevirt.io created
+ _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R
persistentvolumeclaim/disk-alpine created
persistentvolume/host-path-disk-alpine created
persistentvolumeclaim/disk-custom created
persistentvolume/host-path-disk-custom created
daemonset.extensions/disks-images-provider created
serviceaccount/kubevirt-testing created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-testing-cluster-admin created
+ [[ k8s-1.11.0 =~ os-* ]]
+ echo Done
Done
+ namespaces=(kube-system default)
+ [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]]
+ timeout=300
+ sample=30
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n kube-system --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n kube-system --no-headers
+ '[' -n 'virt-api-bcc6b587d-6jsc9 0/1 ContainerCreating 0 2s
virt-api-bcc6b587d-tsh68 0/1 ContainerCreating 0 2s
virt-controller-67dcdd8464-2tf28 0/1 ContainerCreating 0 2s
virt-controller-67dcdd8464-l7wpq 0/1 ContainerCreating 0 2s
virt-handler-cfch6 0/1 ContainerCreating 0 2s
virt-handler-pl2rj 0/1 ContainerCreating 0 2s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ cluster/kubectl.sh get pods -n kube-system --no-headers
+ grep -v Running
virt-api-bcc6b587d-6jsc9           0/1   ContainerCreating   0   3s
virt-api-bcc6b587d-tsh68           0/1   ContainerCreating   0   3s
virt-controller-67dcdd8464-2tf28   0/1   ContainerCreating   0   3s
virt-controller-67dcdd8464-l7wpq   0/1   ContainerCreating   0   3s
virt-handler-cfch6                 0/1   ContainerCreating   0   3s
virt-handler-pl2rj                 0/1   ContainerCreating   0   3s
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n 'false
false' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
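The wait whose iterations appear below uses the same shape for container readiness: every $sample seconds it prints each pod's status.containerStatuses[*].ready column and greps for false, giving up once $timeout is exceeded (the trace above sets timeout=300 and sample=30). A condensed bash sketch of that loop, assuming the same kubectl setup as the rest of this run:

    # Condensed form of the readiness loop traced above: keep polling while
    # any container's ready flag is still false; timeout/sample come from the trace.
    timeout=300; sample=30; current_time=0
    while [ -n "$(kubectl get pods -n kube-system \
            '-ocustom-columns=status:status.containerStatuses[*].ready' \
            --no-headers | grep false)" ]; do
        echo 'Waiting for KubeVirt containers to become ready ...'
        sleep "$sample"
        current_time=$((current_time + sample))
        [ "$current_time" -gt "$timeout" ] && exit 1
    done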
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ grep false
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
false
false
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-52fj5           1/1       Running   0          15m
coredns-78fcdf6894-q65v4           1/1       Running   0          15m
disks-images-provider-dmzgd        1/1       Running   0          1m
disks-images-provider-z2kx4        1/1       Running   0          1m
etcd-node01                        1/1       Running   0          14m
kube-apiserver-node01              1/1       Running   0          15m
kube-controller-manager-node01     1/1       Running   0          14m
kube-flannel-ds-99cbg              1/1       Running   0          15m
kube-flannel-ds-r28wx              1/1       Running   0          15m
kube-proxy-9ms6x                   1/1       Running   0          15m
kube-proxy-hzbm7                   1/1       Running   0          15m
kube-scheduler-node01              1/1       Running   0          15m
virt-api-bcc6b587d-6jsc9           1/1       Running   0          1m
virt-api-bcc6b587d-tsh68           1/1       Running   0          1m
virt-controller-67dcdd8464-2tf28   1/1       Running   0          1m
virt-controller-67dcdd8464-l7wpq   1/1       Running   0          1m
virt-handler-cfch6                 1/1       Running   0          1m
virt-handler-pl2rj                 1/1       Running   0          1m
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n default --no-headers
++ cluster/kubectl.sh get pods -n default --no-headers
++ grep -v Running
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
+ '[' -n '' ']'
+ kubectl get pods -n default
+ cluster/kubectl.sh get pods -n default
NAME                             READY     STATUS    RESTARTS   AGE
local-volume-provisioner-rzbx7   1/1       Running   0          15m
local-volume-provisioner-w99bn   1/1       Running   0          15m
+ kubectl version
+ cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/junit.xml'
+ [[ k8s-1.11.0-release =~ windows.* ]]
+ FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release@2/junit.xml'
+ make functest
hack/dockerized "hack/build-func-tests.sh"
Sending build context to Docker daemon 5.632 kB
Step 1/12 : FROM fedora:28
 ---> cc510acfcd70
Step 2/12 : ENV LIBVIRT_VERSION 4.2.0
 ---> Using cache
 ---> b1088795aeb6
Step 3/12 : RUN curl --output /etc/yum.repos.d/fedora-virt-preview.repo https://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo
 ---> Using cache
 ---> 88f43b954f9f
Step 4/12 : RUN dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img protobuf-compiler && dnf -y clean all
 ---> Using cache
 ---> 06c70b43758a
Step 5/12 : ENV GIMME_GO_VERSION 1.10
 ---> Using cache
 ---> e5b3ae738662
Step 6/12 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> 3e3d43f49e45
Step 7/12 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> 73e8a3aa263a
Step 8/12 : ADD rsyncd.conf /etc/rsyncd.conf
 ---> Using cache
 ---> bc244b1c712b
Step 9/12 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/mattn/goveralls && go get -u github.com/Masterminds/glide && go get golang.org/x/tools/cmd/goimports && git clone https://github.com/mvdan/sh.git $GOPATH/src/mvdan.cc/sh && cd /go/src/mvdan.cc/sh/cmd/shfmt && git checkout v2.5.0 && go get mvdan.cc/sh/cmd/shfmt && go install && go get -u github.com/golang/mock/gomock && go get -u github.com/rmohr/mock/mockgen && go get -u github.com/rmohr/go-swagger-utils/swagger-doc && go get -u github.com/onsi/ginkgo/ginkgo && go get -u -d k8s.io/code-generator/cmd/deepcopy-gen && go get -u -d k8s.io/code-generator/cmd/defaulter-gen && go get -u -d k8s.io/code-generator/cmd/openapi-gen && cd /go/src/k8s.io/code-generator/cmd/deepcopy-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/defaulter-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/openapi-gen && git checkout release-1.9 && go install && go get -u -d github.com/golang/protobuf/protoc-gen-go && cd /go/src/github.com/golang/protobuf/protoc-gen-go && git checkout 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9 && go install
 ---> Using cache
 ---> 4cd1786b2bc8
Step 10/12 : RUN pip install j2cli
 ---> Using cache
 ---> b51a532fa53a
Step 11/12 : ADD entrypoint.sh /entrypoint.sh
 ---> Using cache
 ---> 3bc0185264f6
Step 12/12 : ENTRYPOINT /entrypoint.sh
 ---> Using cache
 ---> dcf2b21fa2ed
Successfully built dcf2b21fa2ed
go version go1.10 linux/amd64
go version go1.10 linux/amd64
Compiling tests...
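Note: hack/functests.sh runs the compiled ginkgo suite (tests.test, built just below) against the cluster with the FUNC_TEST_ARGS shown earlier in the trace. A minimal manual invocation along the same lines (the binary's location in the working directory is an assumption for illustration; the KUBECONFIG path and flags are taken from this trace):

    # Run the compiled functional test suite by hand; flags mirror
    # FUNC_TEST_ARGS. Whether tests.test sits in the current directory
    # depends on the build layout and is assumed here.
    KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ./tests.test \
        --ginkgo.noColor \
        --junit-output=junit.xml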
compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1533583396 Will run 151 of 151 specs Pod name: disks-images-provider-dmzgd Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-z2kx4 Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-6jsc9 Pod phase: Running level=info timestamp=2018-08-06T19:23:25.780984Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/06 19:23:35 http: TLS handshake error from 10.244.0.1:49836: EOF 2018/08/06 19:23:45 http: TLS handshake error from 10.244.0.1:49900: EOF 2018/08/06 19:23:55 http: TLS handshake error from 10.244.0.1:49960: EOF level=info timestamp=2018-08-06T19:23:55.802839Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/06 19:24:05 http: TLS handshake error from 10.244.0.1:50020: EOF 2018/08/06 19:24:15 http: TLS handshake error from 10.244.0.1:50080: EOF 2018/08/06 19:24:25 http: TLS handshake error from 10.244.0.1:50140: EOF level=info timestamp=2018-08-06T19:24:25.772941Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/06 19:24:35 http: TLS handshake error from 10.244.0.1:50200: EOF 2018/08/06 19:24:45 http: TLS handshake error from 10.244.0.1:50264: EOF 2018/08/06 19:24:55 http: TLS handshake error from 10.244.0.1:50324: EOF level=info timestamp=2018-08-06T19:24:55.785684Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/06 19:25:05 http: TLS handshake error from 10.244.0.1:50384: EOF 2018/08/06 19:25:15 http: TLS handshake error from 10.244.0.1:50444: EOF Pod name: virt-api-bcc6b587d-tsh68 Pod phase: Running 2018/08/06 19:24:16 http: TLS handshake error from 10.244.1.1:41340: EOF level=info timestamp=2018-08-06T19:24:17.628488Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T19:24:17.971072Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 19:24:26 http: TLS handshake error from 10.244.1.1:41346: EOF level=info timestamp=2018-08-06T19:24:28.414476Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T19:24:34.008979Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T19:24:36.081396Z pos=subresource.go:75 component=virt-api msg="Websocket connection upgraded" 2018/08/06 19:24:36 http: TLS handshake error from 10.244.1.1:41356: EOF 2018/08/06 19:24:46 http: TLS handshake error from 10.244.1.1:41362: EOF level=info timestamp=2018-08-06T19:24:47.690644Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" 
proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T19:24:48.032791Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 19:24:56 http: TLS handshake error from 10.244.1.1:41368: EOF level=info timestamp=2018-08-06T19:24:58.504422Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T19:25:04.168671Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 19:25:06 http: TLS handshake error from 10.244.1.1:41374: EOF Pod name: virt-controller-67dcdd8464-2tf28 Pod phase: Running level=info timestamp=2018-08-06T19:20:21.506959Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer" level=info timestamp=2018-08-06T19:20:21.506980Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer" level=info timestamp=2018-08-06T19:20:21.506997Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiPresetInformer" level=info timestamp=2018-08-06T19:20:21.507014Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmirsInformer" level=info timestamp=2018-08-06T19:20:21.507031Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer configMapInformer" level=info timestamp=2018-08-06T19:20:21.507106Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller." level=info timestamp=2018-08-06T19:20:21.510895Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer." level=info timestamp=2018-08-06T19:20:21.514840Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." level=info timestamp=2018-08-06T19:20:21.515790Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-08-06T19:20:21.515659Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." 
level=info timestamp=2018-08-06T19:23:17.997707Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihbfpj kind= uid=2c22a9e4-99ae-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:23:18.006580Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihbfpj kind= uid=2c22a9e4-99ae-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:23:18.271586Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmihbfpj\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmihbfpj" level=info timestamp=2018-08-06T19:24:18.385680Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:24:18.386773Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-67dcdd8464-l7wpq Pod phase: Running level=info timestamp=2018-08-06T19:20:21.299031Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-cfch6 Pod phase: Running level=info timestamp=2018-08-06T19:24:35.805653Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmiffxgs kind=Domain uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-08-06T19:24:35.849959Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-08-06T19:24:35.856805Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T19:24:35.857065Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiffxgs, existing: true\n" level=info timestamp=2018-08-06T19:24:35.857125Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n" level=info timestamp=2018-08-06T19:24:35.857198Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-08-06T19:24:35.857260Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-08-06T19:24:35.857379Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="No update processing required" level=info timestamp=2018-08-06T19:24:35.895222Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-08-06T19:24:35.895566Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiffxgs, existing: true\n" level=info timestamp=2018-08-06T19:24:35.895631Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-08-06T19:24:35.895725Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-08-06T19:24:35.895773Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-08-06T19:24:35.895936Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Processing vmi update" level=info timestamp=2018-08-06T19:24:35.917844Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Synchronization loop succeeded." Pod name: virt-handler-pl2rj Pod phase: Running level=info timestamp=2018-08-06T19:20:21.258151Z pos=virt-handler.go:87 component=virt-handler hostname=node01 level=info timestamp=2018-08-06T19:20:21.267308Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-08-06T19:20:21.268090Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-08-06T19:20:21.368131Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-08-06T19:20:21.467649Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-08-06T19:20:21.476602Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-launcher-testvmiffxgs-wsr2v Pod phase: Running level=info timestamp=2018-08-06T19:24:35.439185Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-08-06T19:24:35.452415Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T19:24:35.770993Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-08-06T19:24:35.802047Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-06T19:24:35.806391Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T19:24:35.814049Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-08-06T19:24:35.837608Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Domain started." 
level=info timestamp=2018-08-06T19:24:35.844435Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T19:24:35.846028Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-06T19:24:35.851011Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T19:24:35.898382Z pos=converter.go:535 component=virt-launcher msg="The network interface type of default was changed to e1000 due to unsupported interface type by qemu slirp network" level=info timestamp=2018-08-06T19:24:35.899595Z pos=converter.go:751 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n" level=info timestamp=2018-08-06T19:24:35.899722Z pos=converter.go:752 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local" level=info timestamp=2018-08-06T19:24:35.915429Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T19:24:36.454725Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 6c1e5042-6ab2-4251-8f32-e2737ea86d0b: 177" Pod name: virt-launcher-testvmihbfpj-844fb Pod phase: Running level=info timestamp=2018-08-06T19:23:39.502210Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T19:23:39.513906Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-08-06T19:23:39.537847Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmihbfpj kind= uid=2c22a9e4-99ae-11e8-aca8-525500d15501 msg="Domain started." 
level=info timestamp=2018-08-06T19:23:39.543664Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmihbfpj kind= uid=2c22a9e4-99ae-11e8-aca8-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T19:23:39.546793Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-06T19:23:39.570638Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T19:23:39.641962Z pos=converter.go:535 component=virt-launcher msg="The network interface type of default was changed to e1000 due to unsupported interface type by qemu slirp network" level=info timestamp=2018-08-06T19:23:39.642872Z pos=converter.go:751 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n" level=info timestamp=2018-08-06T19:23:39.642964Z pos=converter.go:752 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local" level=info timestamp=2018-08-06T19:23:39.648882Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmihbfpj kind= uid=2c22a9e4-99ae-11e8-aca8-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T19:23:39.653584Z pos=converter.go:535 component=virt-launcher msg="The network interface type of default was changed to e1000 due to unsupported interface type by qemu slirp network" level=info timestamp=2018-08-06T19:23:39.654165Z pos=converter.go:751 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n" level=info timestamp=2018-08-06T19:23:39.654275Z pos=converter.go:752 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local" level=info timestamp=2018-08-06T19:23:39.659957Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmihbfpj kind= uid=2c22a9e4-99ae-11e8-aca8-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T19:23:40.170752Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 2f393421-2a40-4ddb-b2d7-739abe5104c3: 184" • Failure [119.705 seconds] Slirp /root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:39 should be able to /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 VirtualMachineInstance with slirp interface [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Expected error: : { Err: { s: "command terminated with exit code 126", }, Code: 126, } command terminated with exit code 126 not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:88 ------------------------------ level=info timestamp=2018-08-06T19:23:18.439542Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmihbfpj kind=VirtualMachineInstance uid=2c22a9e4-99ae-11e8-aca8-525500d15501 msg="Created virtual machine pod virt-launcher-testvmihbfpj-844fb" level=info timestamp=2018-08-06T19:23:39.236082Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmihbfpj kind=VirtualMachineInstance uid=2c22a9e4-99ae-11e8-aca8-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmihbfpj-844fb" level=info timestamp=2018-08-06T19:23:40.768163Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmihbfpj kind=VirtualMachineInstance uid=2c22a9e4-99ae-11e8-aca8-525500d15501 msg="VirtualMachineInstance defined." 
level=info timestamp=2018-08-06T19:23:40.838978Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmihbfpj kind=VirtualMachineInstance uid=2c22a9e4-99ae-11e8-aca8-525500d15501 msg="VirtualMachineInstance started." 2018/08/06 15:24:18 read closing down: EOF level=info timestamp=2018-08-06T19:24:18.703700Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmiffxgs kind=VirtualMachineInstance uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Created virtual machine pod virt-launcher-testvmiffxgs-wsr2v" level=info timestamp=2018-08-06T19:24:35.531537Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmiffxgs kind=VirtualMachineInstance uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmiffxgs-wsr2v" level=info timestamp=2018-08-06T19:24:37.008630Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmiffxgs kind=VirtualMachineInstance uid=502f9924-99ae-11e8-aca8-525500d15501 msg="VirtualMachineInstance defined." level=info timestamp=2018-08-06T19:24:37.033452Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmiffxgs kind=VirtualMachineInstance uid=502f9924-99ae-11e8-aca8-525500d15501 msg="VirtualMachineInstance started." 2018/08/06 15:25:16 read closing down: EOF STEP: have containerPort in the pod manifest STEP: start the virtual machine with slirp interface level=info timestamp=2018-08-06T19:25:17.157763Z pos=vmi_slirp_interface_test.go:87 component=tests msg= Pod name: disks-images-provider-dmzgd Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-z2kx4 Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-6jsc9 Pod phase: Running level=info timestamp=2018-08-06T19:23:25.780984Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/06 19:23:35 http: TLS handshake error from 10.244.0.1:49836: EOF 2018/08/06 19:23:45 http: TLS handshake error from 10.244.0.1:49900: EOF 2018/08/06 19:23:55 http: TLS handshake error from 10.244.0.1:49960: EOF level=info timestamp=2018-08-06T19:23:55.802839Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/06 19:24:05 http: TLS handshake error from 10.244.0.1:50020: EOF 2018/08/06 19:24:15 http: TLS handshake error from 10.244.0.1:50080: EOF 2018/08/06 19:24:25 http: TLS handshake error from 10.244.0.1:50140: EOF level=info timestamp=2018-08-06T19:24:25.772941Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/06 19:24:35 http: TLS handshake error from 10.244.0.1:50200: EOF 2018/08/06 19:24:45 http: TLS handshake error from 10.244.0.1:50264: EOF 2018/08/06 19:24:55 http: TLS handshake error from 10.244.0.1:50324: EOF level=info timestamp=2018-08-06T19:24:55.785684Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/06 19:25:05 http: TLS handshake error from 10.244.0.1:50384: EOF 2018/08/06 19:25:15 http: TLS handshake error from 10.244.0.1:50444: EOF Pod name: virt-api-bcc6b587d-tsh68 Pod phase: Running level=info timestamp=2018-08-06T19:24:17.628488Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET 
url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T19:24:17.971072Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 19:24:26 http: TLS handshake error from 10.244.1.1:41346: EOF level=info timestamp=2018-08-06T19:24:28.414476Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T19:24:34.008979Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T19:24:36.081396Z pos=subresource.go:75 component=virt-api msg="Websocket connection upgraded" 2018/08/06 19:24:36 http: TLS handshake error from 10.244.1.1:41356: EOF 2018/08/06 19:24:46 http: TLS handshake error from 10.244.1.1:41362: EOF level=info timestamp=2018-08-06T19:24:47.690644Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T19:24:48.032791Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 19:24:56 http: TLS handshake error from 10.244.1.1:41368: EOF level=info timestamp=2018-08-06T19:24:58.504422Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T19:25:04.168671Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 19:25:06 http: TLS handshake error from 10.244.1.1:41374: EOF 2018/08/06 19:25:16 http: TLS handshake error from 10.244.1.1:41384: EOF Pod name: virt-controller-67dcdd8464-2tf28 Pod phase: Running level=info timestamp=2018-08-06T19:20:21.506959Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer" level=info timestamp=2018-08-06T19:20:21.506980Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer" level=info timestamp=2018-08-06T19:20:21.506997Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiPresetInformer" level=info timestamp=2018-08-06T19:20:21.507014Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmirsInformer" level=info timestamp=2018-08-06T19:20:21.507031Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer configMapInformer" level=info timestamp=2018-08-06T19:20:21.507106Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller." level=info timestamp=2018-08-06T19:20:21.510895Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer." 
level=info timestamp=2018-08-06T19:20:21.514840Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." level=info timestamp=2018-08-06T19:20:21.515790Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-08-06T19:20:21.515659Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-08-06T19:23:17.997707Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihbfpj kind= uid=2c22a9e4-99ae-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:23:18.006580Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihbfpj kind= uid=2c22a9e4-99ae-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:23:18.271586Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmihbfpj\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmihbfpj" level=info timestamp=2018-08-06T19:24:18.385680Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:24:18.386773Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-67dcdd8464-l7wpq Pod phase: Running level=info timestamp=2018-08-06T19:20:21.299031Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-cfch6 Pod phase: Running level=info timestamp=2018-08-06T19:24:35.805653Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmiffxgs kind=Domain uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-08-06T19:24:35.849959Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-08-06T19:24:35.856805Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-08-06T19:24:35.857065Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiffxgs, existing: true\n" level=info timestamp=2018-08-06T19:24:35.857125Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n" level=info timestamp=2018-08-06T19:24:35.857198Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-08-06T19:24:35.857260Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-08-06T19:24:35.857379Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="No update processing required" level=info timestamp=2018-08-06T19:24:35.895222Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T19:24:35.895566Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiffxgs, existing: true\n" level=info timestamp=2018-08-06T19:24:35.895631Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-08-06T19:24:35.895725Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-08-06T19:24:35.895773Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-08-06T19:24:35.895936Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Processing vmi update" level=info timestamp=2018-08-06T19:24:35.917844Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Synchronization loop succeeded." Pod name: virt-handler-pl2rj Pod phase: Running level=info timestamp=2018-08-06T19:20:21.258151Z pos=virt-handler.go:87 component=virt-handler hostname=node01 level=info timestamp=2018-08-06T19:20:21.267308Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." 
level=info timestamp=2018-08-06T19:20:21.268090Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-08-06T19:20:21.368131Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-08-06T19:20:21.467649Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-08-06T19:20:21.476602Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-launcher-testvmiffxgs-wsr2v Pod phase: Running level=info timestamp=2018-08-06T19:24:35.439185Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-08-06T19:24:35.452415Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T19:24:35.770993Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-08-06T19:24:35.802047Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-06T19:24:35.806391Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T19:24:35.814049Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-08-06T19:24:35.837608Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Domain started." level=info timestamp=2018-08-06T19:24:35.844435Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T19:24:35.846028Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-06T19:24:35.851011Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T19:24:35.898382Z pos=converter.go:535 component=virt-launcher msg="The network interface type of default was changed to e1000 due to unsupported interface type by qemu slirp network" level=info timestamp=2018-08-06T19:24:35.899595Z pos=converter.go:751 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n" level=info timestamp=2018-08-06T19:24:35.899722Z pos=converter.go:752 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local" level=info timestamp=2018-08-06T19:24:35.915429Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmiffxgs kind= uid=502f9924-99ae-11e8-aca8-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T19:24:36.454725Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 6c1e5042-6ab2-4251-8f32-e2737ea86d0b: 177" Pod name: virt-launcher-testvmihbfpj-844fb Pod phase: Running level=info timestamp=2018-08-06T19:23:39.502210Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T19:23:39.513906Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-08-06T19:23:39.537847Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmihbfpj kind= uid=2c22a9e4-99ae-11e8-aca8-525500d15501 msg="Domain started." 
level=info timestamp=2018-08-06T19:23:39.543664Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmihbfpj kind= uid=2c22a9e4-99ae-11e8-aca8-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T19:23:39.546793Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T19:23:39.570638Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T19:23:39.641962Z pos=converter.go:535 component=virt-launcher msg="The network interface type of default was changed to e1000 due to unsupported interface type by qemu slirp network"
level=info timestamp=2018-08-06T19:23:39.642872Z pos=converter.go:751 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n"
level=info timestamp=2018-08-06T19:23:39.642964Z pos=converter.go:752 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local"
level=info timestamp=2018-08-06T19:23:39.648882Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmihbfpj kind= uid=2c22a9e4-99ae-11e8-aca8-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T19:23:39.653584Z pos=converter.go:535 component=virt-launcher msg="The network interface type of default was changed to e1000 due to unsupported interface type by qemu slirp network"
level=info timestamp=2018-08-06T19:23:39.654165Z pos=converter.go:751 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n"
level=info timestamp=2018-08-06T19:23:39.654275Z pos=converter.go:752 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local"
level=info timestamp=2018-08-06T19:23:39.659957Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmihbfpj kind= uid=2c22a9e4-99ae-11e8-aca8-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T19:23:40.170752Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 2f393421-2a40-4ddb-b2d7-739abe5104c3: 184"

• Failure [0.732 seconds]
Slirp
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:39
  should be able to
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    VirtualMachineInstance with slirp interface with custom MAC address [It]
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

    Expected error:
        : {
            Err: {
                s: "command terminated with exit code 126",
            },
            Code: 126,
        }
        command terminated with exit code 126
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:88
------------------------------
STEP: have containerPort in the pod manifest
STEP: start the virtual machine with slirp interface
level=info timestamp=2018-08-06T19:25:18.018267Z pos=vmi_slirp_interface_test.go:87 component=tests msg=
• [SLOW TEST:13.812 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
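Note: both Slirp failures above report "command terminated with exit code 126" from an exec into the launcher pod; 126 conventionally means the remote command was found but could not be executed. A manual check of the same kind against the pod from the dump above (the curl target is an assumption for illustration; the log does not show the exact command the test execs):

    # Reproduce the kind of in-pod check the Slirp test performs and
    # surface the remote exit code. kubectl exec propagates the exit
    # status of the command it runs inside the container.
    kubectl exec -n kubevirt-test-default virt-launcher-testvmihbfpj-844fb -- \
        curl --fail --silent 127.0.0.1:80
    echo "exec exit code: $?"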
• [SLOW TEST:13.681 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given an vm
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:13.802 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi preset
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:13.487 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi replica set
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:51.996 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
      with a cirros image
      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67
        should return that we are running cirros
        /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:68
------------------------------
• [SLOW TEST:64.843 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
      with a fedora image
      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:77
        should return that we are running fedora
        /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:78
------------------------------
• [SLOW TEST:41.551 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
      should be able to reconnect to console multiple times
      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:87
------------------------------
• [SLOW TEST:18.376 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
      should wait until the virtual machine is in running state and return a stream interface
      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:103
------------------------------
• [SLOW TEST:30.244 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
      should fail waiting for
the virtual machine instance to be running /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:111 ------------------------------ • [SLOW TEST:30.281 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 should fail waiting for the expecter /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:134 ------------------------------ •••• Pod name: disks-images-provider-dmzgd Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-z2kx4 Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-6jsc9 Pod phase: Running 2018/08/06 19:30:35 http: TLS handshake error from 10.244.0.1:52668: EOF 2018/08/06 19:30:45 http: TLS handshake error from 10.244.0.1:52728: EOF level=info timestamp=2018-08-06T19:30:47.375244Z pos=subresource.go:75 component=virt-api msg="Websocket connection upgraded" 2018/08/06 19:30:55 http: TLS handshake error from 10.244.0.1:52794: EOF level=info timestamp=2018-08-06T19:30:55.875480Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/06 19:31:05 http: TLS handshake error from 10.244.0.1:52854: EOF 2018/08/06 19:31:15 http: TLS handshake error from 10.244.0.1:52914: EOF level=error timestamp=2018-08-06T19:31:17.737658Z pos=subresource.go:85 component=virt-api msg= 2018/08/06 19:31:17 http: response.WriteHeader on hijacked connection level=error timestamp=2018-08-06T19:31:17.738564Z pos=subresource.go:97 component=virt-api reason="read tcp 10.244.0.9:8443->10.244.0.1:53302: use of closed network connection" msg="error ecountered reading from websocket stream" level=info timestamp=2018-08-06T19:31:17.738860Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2/namespaces/kubevirt-test-default/virtualmachineinstances/testvmicm8wg/console proto=HTTP/1.1 statusCode=200 contentLength=0 2018/08/06 19:31:25 http: TLS handshake error from 10.244.0.1:52974: EOF level=info timestamp=2018-08-06T19:31:25.792814Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/06 19:31:35 http: TLS handshake error from 10.244.0.1:53034: EOF 2018/08/06 19:31:45 http: TLS handshake error from 10.244.0.1:53094: EOF Pod name: virt-api-bcc6b587d-tsh68 Pod phase: Running level=info timestamp=2018-08-06T19:30:59.676288Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T19:31:05.885924Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 19:31:06 http: TLS handshake error from 10.244.1.1:41634: EOF 2018/08/06 19:31:16 http: TLS handshake error from 10.244.1.1:41640: EOF level=info timestamp=2018-08-06T19:31:18.477897Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T19:31:18.775944Z pos=filter.go:46 component=virt-api 
remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 19:31:26 http: TLS handshake error from 10.244.1.1:41646: EOF level=info timestamp=2018-08-06T19:31:29.779956Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T19:31:32.307961Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T19:31:32.312249Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T19:31:36.027337Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 19:31:36 http: TLS handshake error from 10.244.1.1:41652: EOF 2018/08/06 19:31:46 http: TLS handshake error from 10.244.1.1:41658: EOF level=info timestamp=2018-08-06T19:31:48.526039Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T19:31:48.835628Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-controller-67dcdd8464-2tf28 Pod phase: Running level=info timestamp=2018-08-06T19:27:05.364942Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6rmjm kind= uid=b3b83a0f-99ae-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:27:05.366237Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6rmjm kind= uid=b3b83a0f-99ae-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:27:05.474793Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi6rmjm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi6rmjm" level=info timestamp=2018-08-06T19:28:10.301006Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmifq5qh kind= uid=da6c991d-99ae-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:28:10.302171Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmifq5qh kind= uid=da6c991d-99ae-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:28:51.725290Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi584l6 kind= uid=f31dca9e-99ae-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:28:51.726198Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi584l6 kind= 
uid=f31dca9e-99ae-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:28:51.863940Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi584l6\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi584l6" level=info timestamp=2018-08-06T19:29:10.077633Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmidxbf5 kind= uid=fe104873-99ae-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:29:10.078114Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmidxbf5 kind= uid=fe104873-99ae-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:29:10.142189Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmidxbf5\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmidxbf5" level=info timestamp=2018-08-06T19:29:40.359864Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6v8ld kind= uid=1019386b-99af-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:29:40.361092Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6v8ld kind= uid=1019386b-99af-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:30:28.471619Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmicm8wg kind= uid=2cc77a89-99af-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:30:28.472800Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmicm8wg kind= uid=2cc77a89-99af-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-67dcdd8464-l7wpq Pod phase: Running level=info timestamp=2018-08-06T19:20:21.299031Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-cfch6 Pod phase: Running level=info timestamp=2018-08-06T19:30:46.189243Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmicm8wg kind=Domain uid=2cc77a89-99af-11e8-aca8-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-08-06T19:30:46.229786Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-08-06T19:30:46.236731Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmicm8wg kind= uid=2cc77a89-99af-11e8-aca8-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-08-06T19:30:46.237184Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmicm8wg, existing: true\n" level=info timestamp=2018-08-06T19:30:46.237266Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n" level=info timestamp=2018-08-06T19:30:46.237406Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-08-06T19:30:46.237477Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-08-06T19:30:46.237895Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmicm8wg kind= uid=2cc77a89-99af-11e8-aca8-525500d15501 msg="No update processing required" level=info timestamp=2018-08-06T19:30:46.279224Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmicm8wg kind= uid=2cc77a89-99af-11e8-aca8-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T19:30:46.281577Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmicm8wg, existing: true\n" level=info timestamp=2018-08-06T19:30:46.281730Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-08-06T19:30:46.281806Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-08-06T19:30:46.281855Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-08-06T19:30:46.282078Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmicm8wg kind= uid=2cc77a89-99af-11e8-aca8-525500d15501 msg="Processing vmi update" level=info timestamp=2018-08-06T19:30:46.295163Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmicm8wg kind= uid=2cc77a89-99af-11e8-aca8-525500d15501 msg="Synchronization loop succeeded." Pod name: virt-handler-pl2rj Pod phase: Running level=info timestamp=2018-08-06T19:20:21.258151Z pos=virt-handler.go:87 component=virt-handler hostname=node01 level=info timestamp=2018-08-06T19:20:21.267308Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller." 
level=info timestamp=2018-08-06T19:20:21.268090Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" level=info timestamp=2018-08-06T19:20:21.368131Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller" level=info timestamp=2018-08-06T19:20:21.467649Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started" level=info timestamp=2018-08-06T19:20:21.476602Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started" Pod name: virt-launcher-testvmicm8wg-259td Pod phase: Running level=info timestamp=2018-08-06T19:30:44.969681Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-08-06T19:30:45.733466Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-08-06T19:30:45.740696Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T19:30:45.741433Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID a446efcb-d4a6-42d5-ac3b-86fa0f75f503" level=info timestamp=2018-08-06T19:30:45.743060Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-08-06T19:30:46.167616Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-08-06T19:30:46.187395Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-06T19:30:46.189770Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T19:30:46.202735Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-08-06T19:30:46.217964Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmicm8wg kind= uid=2cc77a89-99af-11e8-aca8-525500d15501 msg="Domain started." level=info timestamp=2018-08-06T19:30:46.225831Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-06T19:30:46.226000Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmicm8wg kind= uid=2cc77a89-99af-11e8-aca8-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T19:30:46.231072Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T19:30:46.294549Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmicm8wg kind= uid=2cc77a89-99af-11e8-aca8-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T19:30:46.752935Z pos=monitor.go:222 component=virt-launcher msg="Found PID for a446efcb-d4a6-42d5-ac3b-86fa0f75f503: 178" ------------------------------ • Failure [85.380 seconds] Health Monitoring /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:37 A VirtualMachineInstance with a watchdog device /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:56 should be shut down when the watchdog expires [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:57 Timed out after 40.011s. 
Expected
    : Running
to equal
    : Failed
/root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:85
------------------------------
STEP: Starting a VirtualMachineInstance
level=info timestamp=2018-08-06T19:30:28.769038Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmicm8wg kind=VirtualMachineInstance uid=2cc77a89-99af-11e8-aca8-525500d15501 msg="Created virtual machine pod virt-launcher-testvmicm8wg-259td"
level=info timestamp=2018-08-06T19:30:45.820375Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmicm8wg kind=VirtualMachineInstance uid=2cc77a89-99af-11e8-aca8-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmicm8wg-259td"
level=info timestamp=2018-08-06T19:30:47.379093Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmicm8wg kind=VirtualMachineInstance uid=2cc77a89-99af-11e8-aca8-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-08-06T19:30:47.396964Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmicm8wg kind=VirtualMachineInstance uid=2cc77a89-99af-11e8-aca8-525500d15501 msg="VirtualMachineInstance started."
STEP: Expecting the VirtualMachineInstance console
STEP: Killing the watchdog device
STEP: Checking that the VirtualMachineInstance has Failed status
2018/08/06 15:31:53 read closing down: EOF
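The failure above (and the later specs in this run that fail with the same "Expected : Running to equal : Failed" output) comes from a polling assertion: the test re-reads the VMI every second and waits for its phase to become Failed, giving up after a fixed budget; here the watchdog expiry never moved the VMI out of Running within 40s. A minimal sketch of that polling pattern in Gomega, assuming a kubecli client named virtClient and illustrative helper name and timeouts (not the suite's actual code):

    package tests_sketch

    import (
        "time"

        . "github.com/onsi/gomega"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

        v1 "kubevirt.io/kubevirt/pkg/api/v1"
        "kubevirt.io/kubevirt/pkg/kubecli"
    )

    // expectVMIFailed polls the VMI's phase once per second; if it never
    // equals v1.Failed within 40s, Gomega reports the "Timed out ...
    // Expected : Running to equal : Failed" diff seen above.
    func expectVMIFailed(virtClient kubecli.KubevirtClient, namespace, name string) {
        Eventually(func() v1.VirtualMachineInstancePhase {
            vmi, err := virtClient.VirtualMachineInstance(namespace).Get(name, &metav1.GetOptions{})
            Expect(err).ToNot(HaveOccurred())
            return vmi.Status.Phase
        }, 40*time.Second, 1*time.Second).Should(Equal(v1.Failed))
    }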
• [SLOW TEST:152.853 seconds]
RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting and stopping the same VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90
    with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91
      should success multiple times /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92
------------------------------
• [SLOW TEST:18.778 seconds]
RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111
    with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112
      should not modify the spec on status update /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113
------------------------------
• [SLOW TEST:22.970 seconds]
RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting multiple VMIs /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129
    with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130
      should success /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131
------------------------------
• [SLOW TEST:20.432 seconds]
VNC /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46
  A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:54
    with VNC connection /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:62
      should allow accessing the VNC device /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:64
------------------------------
••••
------------------------------
• [SLOW TEST:8.072 seconds]
VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should scale /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    to five, to six and then to zero replicas /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
••
------------------------------
• [SLOW TEST:20.670 seconds]
VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should update readyReplicas once VMIs are up /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:157
------------------------------
•••
------------------------------
• [SLOW TEST:22.861 seconds]
VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should remove the finished VM /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:279
------------------------------
•
------------------------------
• [SLOW TEST:19.733 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    should start it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:80
------------------------------
• [SLOW TEST:20.454 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    should attach virt-launcher to it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:86
------------------------------
••••
------------------------------
• [SLOW TEST:41.539 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:174
      should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        Alpine as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:29.367 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:174
      should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        Cirros as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:17.344 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:205
      without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:206
        should retry starting the VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:207
------------------------------
• [SLOW TEST:18.539 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:205
      without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:206
        should log warning and proceed once the secret is there /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:237
------------------------------
• [SLOW TEST:50.492 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 when virt-launcher crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:285 should be stopped and have Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:286 ------------------------------ Pod name: disks-images-provider-dmzgd Pod phase: Running Pod name: disks-images-provider-z2kx4 Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-6jsc9 Pod phase: Running 2018/08/06 19:55:35 http: TLS handshake error from 10.244.0.1:33836: EOF 2018/08/06 19:55:45 http: TLS handshake error from 10.244.0.1:33902: EOF 2018/08/06 19:55:55 http: TLS handshake error from 10.244.0.1:33968: EOF level=info timestamp=2018-08-06T19:55:55.958868Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/06 19:56:05 http: TLS handshake error from 10.244.0.1:34034: EOF level=info timestamp=2018-08-06T19:56:11.804308Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 19:56:15 http: TLS handshake error from 10.244.0.1:34102: EOF level=info timestamp=2018-08-06T19:56:21.165238Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 19:56:25 http: TLS handshake error from 10.244.0.1:34168: EOF level=info timestamp=2018-08-06T19:56:25.939840Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T19:56:31.535141Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 19:56:35 http: TLS handshake error from 10.244.0.1:34234: EOF level=info timestamp=2018-08-06T19:56:40.936637Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T19:56:41.959198Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 19:56:45 http: TLS handshake error from 10.244.0.1:34300: EOF Pod name: virt-api-bcc6b587d-tsh68 Pod phase: Running Pod name: virt-controller-67dcdd8464-2tf28 Pod phase: Running level=info timestamp=2018-08-06T19:37:18.990663Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiw2tsj kind= uid=21799041-99b0-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:37:19.123821Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiw2tsj\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiw2tsj" level=info timestamp=2018-08-06T19:37:19.376501Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be 
fulfilled on virtualmachineinstances.kubevirt.io \"testvmiw2tsj\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiw2tsj" level=info timestamp=2018-08-06T19:38:00.498468Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi44mpv kind= uid=3a34615c-99b0-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:38:00.501771Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi44mpv kind= uid=3a34615c-99b0-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:38:00.644973Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi44mpv\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi44mpv" level=info timestamp=2018-08-06T19:38:29.926871Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirjhg2 kind= uid=4bc0bc02-99b0-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:38:29.930817Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirjhg2 kind= uid=4bc0bc02-99b0-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:38:47.203820Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmizbhsc kind= uid=560d0831-99b0-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:38:47.204216Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmizbhsc kind= uid=560d0831-99b0-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:38:47.316153Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmizbhsc\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmizbhsc" level=info timestamp=2018-08-06T19:39:05.765082Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqqchf kind= uid=611df4fb-99b0-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:39:05.766036Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqqchf kind= uid=611df4fb-99b0-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:39:56.319647Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi627mf kind= uid=7f3b1e1f-99b0-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:39:56.320231Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi627mf kind= uid=7f3b1e1f-99b0-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-67dcdd8464-l7wpq Pod phase: Running Pod name: 
virt-handler-cfch6 Pod phase: Running Pod name: virt-handler-pl2rj Pod phase: Running level=error timestamp=2018-08-06T19:38:50.921643Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmiw2tsj kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance failed." level=info timestamp=2018-08-06T19:38:50.925612Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmiw2tsj" level=info timestamp=2018-08-06T19:38:51.269890Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmiw2tsj kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-08-06T19:38:51.270550Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiw2tsj, existing: false\n" level=info timestamp=2018-08-06T19:38:51.270659Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T19:38:51.270776Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiw2tsj kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T19:38:51.272925Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiw2tsj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T19:38:51.273171Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiw2tsj, existing: false\n" level=info timestamp=2018-08-06T19:38:51.273243Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T19:38:51.273714Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiw2tsj kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T19:38:51.273969Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiw2tsj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T19:39:11.407007Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiw2tsj, existing: false\n" level=info timestamp=2018-08-06T19:39:11.407874Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T19:39:11.408250Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiw2tsj kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T19:39:11.408676Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiw2tsj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
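The reason="connection is shut down" ... "re-enqueuing VirtualMachineInstance" pairs in the virt-handler log above show the standard controller retry pattern: a failed sync does not drop the key, it goes back onto a rate-limited workqueue and is retried with backoff once virt-launcher is reachable again. A minimal sketch of that pattern (assumed structure, not virt-handler's actual code):

    package handler_sketch

    import (
        "log"

        "k8s.io/client-go/util/workqueue"
    )

    // processKey handles one key that the caller took from the queue via
    // Get(): on a sync error the key is re-enqueued with rate limiting
    // (retry with backoff); on success the rate limiter forgets the key so
    // the backoff counter resets.
    func processKey(queue workqueue.RateLimitingInterface, key string, sync func(string) error) {
        defer queue.Done(key)
        if err := sync(key); err != nil {
            log.Printf("re-enqueuing VirtualMachineInstance %s: %v", key, err)
            queue.AddRateLimited(key)
            return
        }
        queue.Forget(key)
    }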
Pod name: virt-launcher-testvmi627mf-f98ps Pod phase: Running
Pod name: vmi-killerdrgls Pod phase: Pending
• Failure [1026.808 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    when virt-handler crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:309
      should recover and continue management [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:310
Expected
    : Running
to equal
    : Failed
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:336
------------------------------
level=info timestamp=2018-08-06T19:39:56.674346Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi627mf kind=VirtualMachineInstance uid=7f3b1e1f-99b0-11e8-aca8-525500d15501 msg="Created virtual machine pod virt-launcher-testvmi627mf-f98ps"
level=info timestamp=2018-08-06T19:40:14.649167Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi627mf kind=VirtualMachineInstance uid=7f3b1e1f-99b0-11e8-aca8-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmi627mf-f98ps"
level=info timestamp=2018-08-06T19:40:16.454129Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi627mf kind=VirtualMachineInstance uid=7f3b1e1f-99b0-11e8-aca8-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-08-06T19:40:16.474397Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi627mf kind=VirtualMachineInstance uid=7f3b1e1f-99b0-11e8-aca8-525500d15501 msg="VirtualMachineInstance started."
STEP: Crashing the virt-handler
STEP: Killing the VirtualMachineInstance
level=info timestamp=2018-08-06T19:40:21.248359Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmi627mf kind=VirtualMachineInstance uid=7f3b1e1f-99b0-11e8-aca8-525500d15501 msg="Created virtual machine pod virt-launcher-testvmi627mf-f98ps"
level=info timestamp=2018-08-06T19:40:21.248544Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmi627mf kind=VirtualMachineInstance uid=7f3b1e1f-99b0-11e8-aca8-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmi627mf-f98ps"
level=info timestamp=2018-08-06T19:40:21.249204Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmi627mf kind=VirtualMachineInstance uid=7f3b1e1f-99b0-11e8-aca8-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-08-06T19:40:21.249389Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmi627mf kind=VirtualMachineInstance uid=7f3b1e1f-99b0-11e8-aca8-525500d15501 msg="VirtualMachineInstance started."
level=info timestamp=2018-08-06T19:40:22.170145Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmi627mf kind=VirtualMachineInstance uid=7f3b1e1f-99b0-11e8-aca8-525500d15501 msg="VirtualMachineInstance defined."
STEP: Checking that VirtualMachineInstance has 'Failed' phase • [SLOW TEST:78.955 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 when virt-handler is responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:340 should indicate that a node is ready for vmis /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:341 ------------------------------ Pod name: disks-images-provider-dmzgd Pod phase: Running Pod name: disks-images-provider-z2kx4 Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-2j49h Pod phase: Running 2018/08/06 19:57:57 http: TLS handshake error from 10.244.0.1:58414: EOF 2018/08/06 19:58:07 http: TLS handshake error from 10.244.0.1:58480: EOF 2018/08/06 19:58:17 http: TLS handshake error from 10.244.0.1:58546: EOF 2018/08/06 19:58:27 http: TLS handshake error from 10.244.0.1:58614: EOF 2018/08/06 19:58:37 http: TLS handshake error from 10.244.0.1:58680: EOF 2018/08/06 19:58:47 http: TLS handshake error from 10.244.0.1:58746: EOF 2018/08/06 19:58:57 http: TLS handshake error from 10.244.0.1:58812: EOF 2018/08/06 19:59:07 http: TLS handshake error from 10.244.0.1:58878: EOF 2018/08/06 19:59:17 http: TLS handshake error from 10.244.0.1:58944: EOF 2018/08/06 19:59:27 http: TLS handshake error from 10.244.0.1:59010: EOF 2018/08/06 19:59:37 http: TLS handshake error from 10.244.0.1:59076: EOF 2018/08/06 19:59:47 http: TLS handshake error from 10.244.0.1:59142: EOF 2018/08/06 19:59:57 http: TLS handshake error from 10.244.0.1:59208: EOF 2018/08/06 20:00:07 http: TLS handshake error from 10.244.0.1:59274: EOF 2018/08/06 20:00:17 http: TLS handshake error from 10.244.0.1:59340: EOF Pod name: virt-api-bcc6b587d-6jsc9 Pod phase: Running level=info timestamp=2018-08-06T19:59:25.868450Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T19:59:31.963019Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 19:59:35 http: TLS handshake error from 10.244.0.1:35436: EOF level=info timestamp=2018-08-06T19:59:41.829294Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T19:59:42.899639Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 19:59:45 http: TLS handshake error from 10.244.0.1:35502: EOF level=info timestamp=2018-08-06T19:59:52.020881Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 19:59:55 http: TLS handshake error from 10.244.0.1:35568: EOF level=info timestamp=2018-08-06T19:59:55.843787Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T20:00:02.019315Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET 
url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:00:05 http: TLS handshake error from 10.244.0.1:35634: EOF level=info timestamp=2018-08-06T20:00:11.934296Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T20:00:13.053870Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:00:15 http: TLS handshake error from 10.244.0.1:35700: EOF level=info timestamp=2018-08-06T20:00:22.108385Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-api-bcc6b587d-tsh68 Pod phase: Running Pod name: virt-controller-67dcdd8464-2tf28 Pod phase: Running level=info timestamp=2018-08-06T19:38:00.498468Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi44mpv kind= uid=3a34615c-99b0-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:38:00.501771Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi44mpv kind= uid=3a34615c-99b0-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:38:00.644973Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi44mpv\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi44mpv" level=info timestamp=2018-08-06T19:38:29.926871Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirjhg2 kind= uid=4bc0bc02-99b0-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:38:29.930817Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirjhg2 kind= uid=4bc0bc02-99b0-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:38:47.203820Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmizbhsc kind= uid=560d0831-99b0-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:38:47.204216Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmizbhsc kind= uid=560d0831-99b0-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:38:47.316153Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmizbhsc\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmizbhsc" level=info timestamp=2018-08-06T19:39:05.765082Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqqchf kind= uid=611df4fb-99b0-11e8-aca8-525500d15501 msg="Initializing 
VirtualMachineInstance" level=info timestamp=2018-08-06T19:39:05.766036Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqqchf kind= uid=611df4fb-99b0-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:39:56.319647Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi627mf kind= uid=7f3b1e1f-99b0-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:39:56.320231Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi627mf kind= uid=7f3b1e1f-99b0-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:58:22.045284Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixwbr6 kind= uid=12484237-99b3-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:58:22.054548Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixwbr6 kind= uid=12484237-99b3-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:59:21.836252Z pos=node.go:194 component=virt-controller service=http name=node01 kind= uid=b0219630-99ab-11e8-aca8-525500d15501 msg="Moving vmi testvmixwbr6 in namespace kubevirt-test-default on unresponsive node to failed state" Pod name: virt-controller-67dcdd8464-kkfsb Pod phase: Running level=info timestamp=2018-08-06T19:45:25.674661Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-67dcdd8464-l7wpq Pod phase: Running Pod name: virt-handler-cfch6 Pod phase: Running Pod name: virt-launcher-testvmi627mf-f98ps Pod phase: Running Pod name: vmi-killerdrgls Pod phase: Pending • Failure in Spec Teardown (AfterEach) [137.456 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 when virt-handler is not responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:371 the node controller should react [AfterEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:410 Timed out after 60.000s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:459 ------------------------------ level=info timestamp=2018-08-06T19:58:22.426078Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmixwbr6 kind=VirtualMachineInstance uid=12484237-99b3-11e8-aca8-525500d15501 msg="Created virtual machine pod virt-launcher-testvmixwbr6-wnck4" level=info timestamp=2018-08-06T19:58:40.173513Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmixwbr6 kind=VirtualMachineInstance uid=12484237-99b3-11e8-aca8-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmixwbr6-wnck4" level=info timestamp=2018-08-06T19:58:41.997754Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmixwbr6 kind=VirtualMachineInstance uid=12484237-99b3-11e8-aca8-525500d15501 msg="VirtualMachineInstance defined." 
level=info timestamp=2018-08-06T19:58:42.009190Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmixwbr6 kind=VirtualMachineInstance uid=12484237-99b3-11e8-aca8-525500d15501 msg="VirtualMachineInstance started." STEP: marking the node as not schedulable STEP: moving stuck vmis to failed state Pod name: disks-images-provider-dmzgd Pod phase: Running Pod name: disks-images-provider-z2kx4 Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-2j49h Pod phase: Running 2018/08/06 20:01:17 http: TLS handshake error from 10.244.0.1:59748: EOF 2018/08/06 20:01:27 http: TLS handshake error from 10.244.0.1:59814: EOF 2018/08/06 20:01:37 http: TLS handshake error from 10.244.0.1:59880: EOF 2018/08/06 20:01:47 http: TLS handshake error from 10.244.0.1:59946: EOF 2018/08/06 20:01:57 http: TLS handshake error from 10.244.0.1:60012: EOF 2018/08/06 20:02:07 http: TLS handshake error from 10.244.0.1:60078: EOF 2018/08/06 20:02:17 http: TLS handshake error from 10.244.0.1:60144: EOF 2018/08/06 20:02:27 http: TLS handshake error from 10.244.0.1:60210: EOF 2018/08/06 20:02:37 http: TLS handshake error from 10.244.0.1:60276: EOF 2018/08/06 20:02:47 http: TLS handshake error from 10.244.0.1:60342: EOF 2018/08/06 20:02:57 http: TLS handshake error from 10.244.0.1:60408: EOF 2018/08/06 20:03:07 http: TLS handshake error from 10.244.0.1:60474: EOF 2018/08/06 20:03:17 http: TLS handshake error from 10.244.0.1:60540: EOF 2018/08/06 20:03:27 http: TLS handshake error from 10.244.0.1:60606: EOF 2018/08/06 20:03:37 http: TLS handshake error from 10.244.0.1:60672: EOF Pod name: virt-api-bcc6b587d-6jsc9 Pod phase: Running level=info timestamp=2018-08-06T20:02:52.459822Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:02:55 http: TLS handshake error from 10.244.0.1:36768: EOF level=info timestamp=2018-08-06T20:02:55.863975Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T20:03:02.497717Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:03:05 http: TLS handshake error from 10.244.0.1:36834: EOF level=info timestamp=2018-08-06T20:03:12.555698Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T20:03:13.939272Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:03:15 http: TLS handshake error from 10.244.0.1:36900: EOF level=info timestamp=2018-08-06T20:03:22.556191Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:03:25 http: TLS handshake error from 10.244.0.1:36966: EOF level=info timestamp=2018-08-06T20:03:26.003027Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info 
timestamp=2018-08-06T20:03:32.543662Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:03:35 http: TLS handshake error from 10.244.0.1:37032: EOF level=info timestamp=2018-08-06T20:03:35.307876Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T20:03:35.311061Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 Pod name: virt-api-bcc6b587d-tsh68 Pod phase: Running Pod name: virt-controller-67dcdd8464-2tf28 Pod phase: Running level=info timestamp=2018-08-06T19:38:29.926871Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirjhg2 kind= uid=4bc0bc02-99b0-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:38:29.930817Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirjhg2 kind= uid=4bc0bc02-99b0-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:38:47.203820Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmizbhsc kind= uid=560d0831-99b0-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:38:47.204216Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmizbhsc kind= uid=560d0831-99b0-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:38:47.316153Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmizbhsc\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmizbhsc" level=info timestamp=2018-08-06T19:39:05.765082Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqqchf kind= uid=611df4fb-99b0-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:39:05.766036Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiqqchf kind= uid=611df4fb-99b0-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:39:56.319647Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi627mf kind= uid=7f3b1e1f-99b0-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:39:56.320231Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi627mf kind= uid=7f3b1e1f-99b0-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:58:22.045284Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixwbr6 kind= uid=12484237-99b3-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T19:58:22.054548Z pos=preset.go:171 component=virt-controller service=http 
namespace=kubevirt-test-default name=testvmixwbr6 kind= uid=12484237-99b3-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T19:59:21.836252Z pos=node.go:194 component=virt-controller service=http name=node01 kind= uid=b0219630-99ab-11e8-aca8-525500d15501 msg="Moving vmi testvmixwbr6 in namespace kubevirt-test-default on unresponsive node to failed state" level=info timestamp=2018-08-06T20:00:39.583739Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi5zvvm kind= uid=6447b5ee-99b3-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T20:00:39.584535Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi5zvvm kind= uid=6447b5ee-99b3-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:00:39.603687Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixwbr6\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmixwbr6, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 12484237-99b3-11e8-aca8-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixwbr6" Pod name: virt-controller-67dcdd8464-kkfsb Pod phase: Running level=info timestamp=2018-08-06T19:45:25.674661Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-67dcdd8464-l7wpq Pod phase: Running Pod name: virt-handler-cfch6 Pod phase: Running Pod name: virt-launcher-testvmi5zvvm-j24b7 Pod phase: Pending Pod name: virt-launcher-testvmi627mf-f98ps Pod phase: Running Pod name: vmi-killerdrgls Pod phase: Pending • Failure [195.859 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74 with node tainted /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:463 the vmi with tolerations should be scheduled [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:485 Timed out after 90.014s. 
Timed out waiting for VMI to enter Running phase
Expected
    : false
to equal
    : true
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1091
------------------------------
level=info timestamp=2018-08-06T20:00:40.220023Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi5zvvm kind=VirtualMachineInstance uid=6447b5ee-99b3-11e8-aca8-525500d15501 msg="Created virtual machine pod virt-launcher-testvmi5zvvm-j24b7"
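The failed spec above taints a node and then expects a VMI that tolerates the taint to still be scheduled; here the VMI never reached Running within the 90s budget. A minimal sketch of giving a VMI such a toleration (the taint key and values are assumptions for illustration, not the suite's actual fixture):

    package tests_sketch

    import (
        k8sv1 "k8s.io/api/core/v1"

        v1 "kubevirt.io/kubevirt/pkg/api/v1"
    )

    // withTestToleration appends a toleration matching a hypothetical
    // "kubevirt.io/test" NoSchedule taint, so the scheduler may still place
    // the VMI's virt-launcher pod on the tainted node.
    func withTestToleration(vmi *v1.VirtualMachineInstance) *v1.VirtualMachineInstance {
        vmi.Spec.Tolerations = append(vmi.Spec.Tolerations, k8sv1.Toleration{
            Key:      "kubevirt.io/test", // assumed taint key
            Operator: k8sv1.TolerationOpExists,
            Effect:   k8sv1.TaintEffectNoSchedule,
        })
        return vmi
    }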
• [SLOW TEST:5.716 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    with node tainted /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:463
      the vmi without tolerations should not be scheduled /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:506
------------------------------
• [SLOW TEST:73.304 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:535
      should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-default /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:75.189 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:535
      should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-alternative /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.279 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:592
      should enable emulation in virt-launcher [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:604
      Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:600
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.276 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:592
      should be reflected in domain XML [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:641
      Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:600
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.242 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:74
    VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:592
      should request a TUN device but not KVM [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:685
      Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:600
------------------------------
••••
------------------------------
• [SLOW TEST:72.649 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Delete a VirtualMachineInstance's Pod /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:837
    should result in the VirtualMachineInstance moving to a finalized state /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:838
------------------------------
• [SLOW TEST:52.899 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52
  Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869
    with an active pod. /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:870
      should result in pod being terminated /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:871
------------------------------
Pod name: disks-images-provider-dmzgd Pod phase: Running
Pod name: disks-images-provider-z2kx4 Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-bcc6b587d-2j49h Pod phase: Running
2018/08/06 20:07:57 http: TLS handshake error from 10.244.0.1:34170: EOF
2018/08/06 20:08:07 http: TLS handshake error from 10.244.0.1:34236: EOF
2018/08/06 20:08:17 http: TLS handshake error from 10.244.0.1:34302: EOF
2018/08/06 20:08:27 http: TLS handshake error from 10.244.0.1:34368: EOF
2018/08/06 20:08:37 http: TLS handshake error from 10.244.0.1:34434: EOF
2018/08/06 20:08:47 http: TLS handshake error from 10.244.0.1:34500: EOF
level=info timestamp=2018-08-06T20:08:56.434162Z pos=subresource.go:75 component=virt-api msg="Websocket connection upgraded"
2018/08/06 20:08:57 http: TLS handshake error from 10.244.0.1:34574: EOF
2018/08/06 20:09:07 http: TLS handshake error from 10.244.0.1:34640: EOF
2018/08/06 20:09:17 http: TLS handshake error from 10.244.0.1:34706: EOF
2018/08/06 20:09:27 http: TLS handshake error from 10.244.0.1:34772: EOF
level=error timestamp=2018-08-06T20:09:29.964897Z pos=subresource.go:85 component=virt-api msg=
2018/08/06 20:09:29 http: response.WriteHeader on hijacked connection
level=error timestamp=2018-08-06T20:09:29.969752Z pos=subresource.go:97 component=virt-api reason="read tcp 10.244.0.20:8443->10.244.0.1:39726: use of closed network connection" msg="error ecountered reading from websocket stream"
level=info timestamp=2018-08-06T20:09:29.970164Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2/namespaces/kubevirt-test-default/virtualmachineinstances/testvmim7622/console proto=HTTP/1.1 statusCode=200 contentLength=0
Pod name: virt-api-bcc6b587d-6jsc9 Pod phase: Running
2018/08/06 20:08:45 http: TLS handshake error from 10.244.0.1:39092: EOF
level=info timestamp=2018-08-06T20:08:45.737305Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-08-06T20:08:53.272613Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/08/06 20:08:55 http: TLS handshake error from 10.244.0.1:39158: EOF level=info timestamp=2018-08-06T20:08:56.071195Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T20:09:03.410810Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:09:05 http: TLS handshake error from 10.244.0.1:39232: EOF level=info timestamp=2018-08-06T20:09:13.799062Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:09:15 http: TLS handshake error from 10.244.0.1:39298: EOF level=info timestamp=2018-08-06T20:09:16.193440Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T20:09:23.342808Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:09:25 http: TLS handshake error from 10.244.0.1:39364: EOF level=info timestamp=2018-08-06T20:09:26.400221Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T20:09:33.468219Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:09:35 http: TLS handshake error from 10.244.0.1:39432: EOF Pod name: virt-api-bcc6b587d-tsh68 Pod phase: Running Pod name: virt-controller-67dcdd8464-2tf28 Pod phase: Running level=info timestamp=2018-08-06T20:06:30.824609Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmijpz8f kind= uid=35a6e03e-99b4-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T20:06:30.824816Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmijpz8f kind= uid=35a6e03e-99b4-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:06:30.936276Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmijpz8f\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmijpz8f" level=info timestamp=2018-08-06T20:06:31.053735Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmijpz8f\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmijpz8f, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 35a6e03e-99b4-11e8-aca8-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmijpz8f" level=info timestamp=2018-08-06T20:06:31.822938Z pos=preset.go:142 component=virt-controller 
service=http namespace=kubevirt-test-default name=testvmin86vk kind= uid=363f1216-99b4-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T20:06:31.823300Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmin86vk kind= uid=363f1216-99b4-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:06:31.970045Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmin86vk\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmin86vk" level=info timestamp=2018-08-06T20:06:32.013217Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmin86vk\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmin86vk" level=info timestamp=2018-08-06T20:07:44.432675Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmin86vk\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmin86vk, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 363f1216-99b4-11e8-aca8-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmin86vk" level=info timestamp=2018-08-06T20:07:44.585064Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigr8gg kind= uid=619ad3b9-99b4-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T20:07:44.585745Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigr8gg kind= uid=619ad3b9-99b4-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:07:44.994026Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmigr8gg\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmigr8gg" level=info timestamp=2018-08-06T20:07:45.417276Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmigr8gg\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmigr8gg" level=info timestamp=2018-08-06T20:08:37.477818Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T20:08:37.481831Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-67dcdd8464-kkfsb Pod phase: Running level=info timestamp=2018-08-06T19:45:25.674661Z pos=application.go:177 
component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-67dcdd8464-l7wpq Pod phase: Running Pod name: virt-handler-cfch6 Pod phase: Running Pod name: virt-handler-ksc2j Pod phase: Running level=info timestamp=2018-08-06T20:09:29.253114Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-08-06T20:09:29.253215Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-08-06T20:09:29.253787Z pos=vm.go:370 component=virt-handler namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp." level=info timestamp=2018-08-06T20:09:29.253974Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Processing shutdown." level=info timestamp=2018-08-06T20:09:29.275078Z pos=vm.go:556 component=virt-handler namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Grace period expired, killing deleted VirtualMachineInstance testvmim7622" level=error timestamp=2018-08-06T20:09:44.381790Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 reason="server error. command Launcher.Kill failed: virError(Code=38, Domain=0, Message='Failed to terminate process 177 with SIGTERM: Device or resource busy')" msg="Synchronizing the VirtualMachineInstance failed." level=info timestamp=2018-08-06T20:09:44.460581Z pos=vm.go:251 component=virt-handler reason="server error. command Launcher.Kill failed: virError(Code=38, Domain=0, Message='Failed to terminate process 177 with SIGTERM: Device or resource busy')" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmim7622" level=info timestamp=2018-08-06T20:09:44.461814Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmim7622, existing: true\n" level=info timestamp=2018-08-06T20:09:44.462195Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-08-06T20:09:44.462643Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-08-06T20:09:44.463005Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-08-06T20:09:44.463290Z pos=vm.go:344 component=virt-handler namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Shutting down due to graceful shutdown signal." level=info timestamp=2018-08-06T20:09:44.463719Z pos=vm.go:370 component=virt-handler namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp." level=info timestamp=2018-08-06T20:09:44.463803Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Processing shutdown." 
level=info timestamp=2018-08-06T20:09:44.466552Z pos=vm.go:556 component=virt-handler namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Grace period expired, killing deleted VirtualMachineInstance testvmim7622" Pod name: virt-launcher-testvmi627mf-f98ps Pod phase: Running Pod name: virt-launcher-testvmim7622-7b252 Pod phase: Running level=info timestamp=2018-08-06T20:08:56.119636Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-08-06T20:08:56.143014Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-06T20:08:56.146486Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T20:08:56.153963Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-08-06T20:08:56.166426Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Domain started." level=info timestamp=2018-08-06T20:08:56.170137Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T20:08:56.174531Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-06T20:08:56.177433Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T20:08:56.225106Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T20:08:56.476636Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 495a8ccd-6bb6-4b1b-b9ef-092a53dd6013: 177" level=info timestamp=2018-08-06T20:09:29.279408Z pos=monitor.go:266 component=virt-launcher msg="Received signal 15." level=error timestamp=2018-08-06T20:09:44.376250Z pos=manager.go:299 component=virt-launcher namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 reason="virError(Code=38, Domain=0, Message='Failed to terminate process 177 with SIGTERM: Device or resource busy')" msg="Destroying the domain state failed." level=error timestamp=2018-08-06T20:09:44.377697Z pos=server.go:90 component=virt-launcher namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 reason="virError(Code=38, Domain=0, Message='Failed to terminate process 177 with SIGTERM: Device or resource busy')" msg="Failed to kill vmi" level=info timestamp=2018-08-06T20:09:44.472093Z pos=client.go:136 component=virt-launcher msg="Libvirt event 6 with reason 2 received" level=info timestamp=2018-08-06T20:09:45.453376Z pos=monitor.go:238 component=virt-launcher msg="Grace Period expired, shutting down." Pod name: vmi-killerdrgls Pod phase: Pending • Failure [75.224 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869 with ACPI and 0 grace period seconds /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:895 should result in vmi status failed [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:896 Timed out after 5.000s. 
Expected : Running to equal : Failed /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:917 ------------------------------ STEP: Creating the VirtualMachineInstance level=info timestamp=2018-08-06T20:08:37.801977Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmim7622 kind=VirtualMachineInstance uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Created virtual machine pod virt-launcher-testvmim7622-7b252" level=info timestamp=2018-08-06T20:08:54.483852Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmim7622 kind=VirtualMachineInstance uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Pod ownership transferred to the node virt-launcher-testvmim7622-7b252" level=info timestamp=2018-08-06T20:08:56.297968Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmim7622 kind=VirtualMachineInstance uid=81224a97-99b4-11e8-aca8-525500d15501 msg="VirtualMachineInstance defined." level=info timestamp=2018-08-06T20:08:56.328387Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmim7622 kind=VirtualMachineInstance uid=81224a97-99b4-11e8-aca8-525500d15501 msg="VirtualMachineInstance started." STEP: Deleting the VirtualMachineInstance 2018/08/06 16:09:29 read closing down: EOF STEP: Verifying VirtualMachineInstance's status is Failed Pod name: disks-images-provider-dmzgd Pod phase: Running Pod name: disks-images-provider-z2kx4 Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-2j49h Pod phase: Running 2018/08/06 20:09:37 http: TLS handshake error from 10.244.0.1:34842: EOF 2018/08/06 20:09:47 http: TLS handshake error from 10.244.0.1:34914: EOF 2018/08/06 20:09:57 http: TLS handshake error from 10.244.0.1:34982: EOF 2018/08/06 20:10:07 http: TLS handshake error from 10.244.0.1:35048: EOF level=info timestamp=2018-08-06T20:10:14.924929Z pos=subresource.go:75 component=virt-api msg="Websocket connection upgraded" 2018/08/06 20:10:17 http: TLS handshake error from 10.244.0.1:35122: EOF 2018/08/06 20:10:27 http: TLS handshake error from 10.244.0.1:35188: EOF 2018/08/06 20:10:37 http: TLS handshake error from 10.244.0.1:35254: EOF level=error timestamp=2018-08-06T20:10:47.420482Z pos=subresource.go:97 component=virt-api reason="websocket: close 1006 (abnormal closure): unexpected EOF" msg="error encountered reading from websocket stream" level=error timestamp=2018-08-06T20:10:47.420829Z pos=subresource.go:106 component=virt-api reason="websocket: close 1006 (abnormal closure): unexpected EOF" msg="Error in websocket proxy" 2018/08/06 20:10:47 http: response.WriteHeader on hijacked connection level=info timestamp=2018-08-06T20:10:47.421112Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2/namespaces/kubevirt-test-default/virtualmachineinstances/testvmi5pwbb/console proto=HTTP/1.1 statusCode=500 contentLength=0 level=error timestamp=2018-08-06T20:10:47.423987Z pos=subresource.go:91 component=virt-api reason="write tcp 10.244.0.20:8443->10.244.0.1:40268: write: broken pipe" msg="error encountered reading from remote podExec stream" 2018/08/06 20:10:47 http: TLS handshake error from 10.244.0.1:35320: EOF 2018/08/06 20:10:57 http: TLS handshake error from 10.244.0.1:35386: EOF Pod name: virt-api-bcc6b587d-6jsc9 Pod phase: Running level=info timestamp=2018-08-06T20:10:23.475289Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET
url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:10:25 http: TLS handshake error from 10.244.0.1:39780: EOF level=info timestamp=2018-08-06T20:10:26.171894Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T20:10:33.593882Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T20:10:34.501503Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T20:10:34.518430Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/06 20:10:35 http: TLS handshake error from 10.244.0.1:39846: EOF level=info timestamp=2018-08-06T20:10:44.105810Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:10:45 http: TLS handshake error from 10.244.0.1:39912: EOF level=info timestamp=2018-08-06T20:10:46.633848Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T20:10:53.541840Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:10:55 http: TLS handshake error from 10.244.0.1:39978: EOF level=info timestamp=2018-08-06T20:10:56.026999Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T20:11:03.660303Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:11:05 http: TLS handshake error from 10.244.0.1:40046: EOF Pod name: virt-api-bcc6b587d-tsh68 Pod phase: Running Pod name: virt-controller-67dcdd8464-2tf28 Pod phase: Running level=info timestamp=2018-08-06T20:06:31.822938Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmin86vk kind= uid=363f1216-99b4-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T20:06:31.823300Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmin86vk kind= uid=363f1216-99b4-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:06:31.970045Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmin86vk\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmin86vk" level=info timestamp=2018-08-06T20:06:32.013217Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on 
virtualmachineinstances.kubevirt.io \"testvmin86vk\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmin86vk" level=info timestamp=2018-08-06T20:07:44.432675Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmin86vk\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmin86vk, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 363f1216-99b4-11e8-aca8-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmin86vk" level=info timestamp=2018-08-06T20:07:44.585064Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigr8gg kind= uid=619ad3b9-99b4-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T20:07:44.585745Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigr8gg kind= uid=619ad3b9-99b4-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:07:44.994026Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmigr8gg\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmigr8gg" level=info timestamp=2018-08-06T20:07:45.417276Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmigr8gg\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmigr8gg" level=info timestamp=2018-08-06T20:08:37.477818Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T20:08:37.481831Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:09:52.764840Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T20:09:52.773891Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:09:52.968943Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi5pwbb\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi5pwbb" level=info timestamp=2018-08-06T20:09:53.073877Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io 
\"testvmi5pwbb\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi5pwbb" Pod name: virt-controller-67dcdd8464-kkfsb Pod phase: Running level=info timestamp=2018-08-06T19:45:25.674661Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-67dcdd8464-l7wpq Pod phase: Running Pod name: virt-handler-cfch6 Pod phase: Running Pod name: virt-handler-ksc2j Pod phase: Running level=info timestamp=2018-08-06T20:10:47.591800Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-08-06T20:10:47.591860Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-08-06T20:10:47.592065Z pos=vm.go:344 component=virt-handler namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Shutting down due to graceful shutdown signal." level=info timestamp=2018-08-06T20:10:47.592159Z pos=vm.go:370 component=virt-handler namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp." level=info timestamp=2018-08-06T20:10:47.592281Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Processing shutdown." level=info timestamp=2018-08-06T20:10:47.618251Z pos=vm.go:547 component=virt-handler namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Signaled graceful shutdown for testvmi5pwbb" level=info timestamp=2018-08-06T20:10:47.618599Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T20:10:57.619279Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi5pwbb, existing: true\n" level=info timestamp=2018-08-06T20:10:57.619656Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-08-06T20:10:57.619773Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-08-06T20:10:57.619851Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-08-06T20:10:57.620166Z pos=vm.go:344 component=virt-handler namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Shutting down due to graceful shutdown signal." level=info timestamp=2018-08-06T20:10:57.620292Z pos=vm.go:370 component=virt-handler namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp." level=info timestamp=2018-08-06T20:10:57.620440Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Processing shutdown." 
level=info timestamp=2018-08-06T20:10:57.622204Z pos=vm.go:556 component=virt-handler namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Grace period expired, killing deleted VirtualMachineInstance testvmi5pwbb" Pod name: virt-launcher-testvmi5pwbb-6qprd Pod phase: Running level=info timestamp=2018-08-06T20:10:14.635820Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T20:10:14.639276Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-06T20:10:14.643815Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T20:10:14.700837Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T20:10:14.899457Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 214a6385-cdb6-4d60-a1fe-b41af61a5136: 183" level=info timestamp=2018-08-06T20:10:47.261314Z pos=monitor.go:266 component=virt-launcher msg="Received signal 15." level=info timestamp=2018-08-06T20:10:47.328067Z pos=manager.go:255 component=virt-launcher namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Signaled graceful shutdown for testvmi5pwbb" level=info timestamp=2018-08-06T20:10:47.542282Z pos=server.go:118 component=virt-launcher namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Signaled vmi shutdown" level=info timestamp=2018-08-06T20:10:47.546830Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 1 received" level=info timestamp=2018-08-06T20:10:47.583806Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-06T20:10:47.588212Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T20:10:47.590407Z pos=server.go:118 component=virt-launcher namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Signaled vmi shutdown" level=info timestamp=2018-08-06T20:10:47.617252Z pos=server.go:118 component=virt-launcher namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Signaled vmi shutdown" level=info timestamp=2018-08-06T20:10:57.627785Z pos=client.go:136 component=virt-launcher msg="Libvirt event 6 with reason 1 received" level=info timestamp=2018-08-06T20:11:13.874581Z pos=monitor.go:238 component=virt-launcher msg="Grace Period expired, shutting down." Pod name: virt-launcher-testvmi627mf-f98ps Pod phase: Running Pod name: vmi-killerdrgls Pod phase: Pending • Failure [87.949 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869 with ACPI and some grace period seconds /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:920 should result in vmi status succeeded [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:921 Timed out after 15.000s. 
Expected : Running to equal : Succeeded /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:942 ------------------------------ STEP: Creating the VirtualMachineInstance level=info timestamp=2018-08-06T20:09:53.237217Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi5pwbb kind=VirtualMachineInstance uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Created virtual machine pod virt-launcher-testvmi5pwbb-6qprd" level=info timestamp=2018-08-06T20:10:12.769749Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi5pwbb kind=VirtualMachineInstance uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Pod ownership transferred to the node virt-launcher-testvmi5pwbb-6qprd" level=info timestamp=2018-08-06T20:10:14.761162Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi5pwbb kind=VirtualMachineInstance uid=adffc692-99b4-11e8-aca8-525500d15501 msg="VirtualMachineInstance defined." level=info timestamp=2018-08-06T20:10:14.828684Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi5pwbb kind=VirtualMachineInstance uid=adffc692-99b4-11e8-aca8-525500d15501 msg="VirtualMachineInstance started." STEP: Deleting the VirtualMachineInstance 2018/08/06 16:10:47 read closing down: EOF STEP: Verifying VirtualMachineInstance's status is Succeeded • [SLOW TEST:59.550 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869 with grace period greater than 0 /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:945 should run graceful shutdown /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:946 ------------------------------ Pod name: disks-images-provider-dmzgd Pod phase: Running Pod name: disks-images-provider-z2kx4 Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-2j49h Pod phase: Running 2018/08/06 20:11:27 http: TLS handshake error from 10.244.0.1:35596: EOF 2018/08/06 20:11:37 http: TLS handshake error from 10.244.0.1:35662: EOF 2018/08/06 20:11:47 http: TLS handshake error from 10.244.0.1:35728: EOF 2018/08/06 20:11:57 http: TLS handshake error from 10.244.0.1:35794: EOF 2018/08/06 20:12:07 http: TLS handshake error from 10.244.0.1:35860: EOF 2018/08/06 20:12:17 http: TLS handshake error from 10.244.0.1:35926: EOF 2018/08/06 20:12:27 http: TLS handshake error from 10.244.0.1:35992: EOF 2018/08/06 20:12:37 http: TLS handshake error from 10.244.0.1:36058: EOF 2018/08/06 20:12:47 http: TLS handshake error from 10.244.0.1:36124: EOF 2018/08/06 20:12:57 http: TLS handshake error from 10.244.0.1:36190: EOF 2018/08/06 20:13:07 http: TLS handshake error from 10.244.0.1:36256: EOF 2018/08/06 20:13:17 http: TLS handshake error from 10.244.0.1:36322: EOF 2018/08/06 20:13:27 http: TLS handshake error from 10.244.0.1:36388: EOF 2018/08/06 20:13:37 http: TLS handshake error from 10.244.0.1:36454: EOF 2018/08/06 20:13:47 http: TLS handshake error from 10.244.0.1:36520: EOF Pod name: virt-api-bcc6b587d-6jsc9 Pod phase: Running level=info timestamp=2018-08-06T20:13:03.916812Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:13:05 http: TLS handshake error from 10.244.0.1:40848: EOF level=info timestamp=2018-08-06T20:13:14.747529Z pos=filter.go:46 component=virt-api
remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:13:15 http: TLS handshake error from 10.244.0.1:40914: EOF level=info timestamp=2018-08-06T20:13:17.488245Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T20:13:23.840077Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:13:25 http: TLS handshake error from 10.244.0.1:40980: EOF level=info timestamp=2018-08-06T20:13:25.968902Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T20:13:34.000371Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T20:13:34.537128Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T20:13:34.540165Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/06 20:13:35 http: TLS handshake error from 10.244.0.1:41046: EOF level=info timestamp=2018-08-06T20:13:44.858052Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:13:45 http: TLS handshake error from 10.244.0.1:41112: EOF level=info timestamp=2018-08-06T20:13:47.639378Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-api-bcc6b587d-tsh68 Pod phase: Running Pod name: virt-controller-67dcdd8464-2tf28 Pod phase: Running level=info timestamp=2018-08-06T20:07:44.585745Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigr8gg kind= uid=619ad3b9-99b4-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:07:44.994026Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmigr8gg\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmigr8gg" level=info timestamp=2018-08-06T20:07:45.417276Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmigr8gg\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmigr8gg" level=info timestamp=2018-08-06T20:08:37.477818Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Initializing 
VirtualMachineInstance" level=info timestamp=2018-08-06T20:08:37.481831Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmim7622 kind= uid=81224a97-99b4-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:09:52.764840Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T20:09:52.773891Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi5pwbb kind= uid=adffc692-99b4-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:09:52.968943Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi5pwbb\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi5pwbb" level=info timestamp=2018-08-06T20:09:53.073877Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi5pwbb\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi5pwbb" level=info timestamp=2018-08-06T20:11:20.772998Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmijv2vn kind= uid=e2770c2f-99b4-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T20:11:20.777253Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmijv2vn kind= uid=e2770c2f-99b4-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:11:21.004040Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmijv2vn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmijv2vn" level=info timestamp=2018-08-06T20:12:20.225175Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmifxwwf kind= uid=05e705a4-99b5-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T20:12:20.228813Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmifxwwf kind= uid=05e705a4-99b5-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:12:20.318075Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmifxwwf\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmifxwwf" Pod name: virt-controller-67dcdd8464-kkfsb Pod phase: Running level=info timestamp=2018-08-06T19:45:25.674661Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-67dcdd8464-l7wpq Pod phase: Running Pod name: 
virt-handler-cfch6 Pod phase: Running Pod name: virt-handler-ksc2j Pod phase: Running level=info timestamp=2018-08-06T20:12:39.841994Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmifxwwf kind= uid=05e705a4-99b5-11e8-aca8-525500d15501 msg="Processing vmi update" level=info timestamp=2018-08-06T20:12:39.860188Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmifxwwf kind= uid=05e705a4-99b5-11e8-aca8-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T20:12:43.548210Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmijv2vn kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-08-06T20:12:43.548803Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmijv2vn, existing: false\n" level=info timestamp=2018-08-06T20:12:43.548866Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T20:12:43.549083Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmijv2vn kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T20:12:43.551677Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmijv2vn kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T20:12:43.552062Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmijv2vn, existing: false\n" level=info timestamp=2018-08-06T20:12:43.552193Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T20:12:43.552761Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmijv2vn kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T20:12:43.553283Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmijv2vn kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T20:12:58.736433Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmijv2vn, existing: false\n" level=info timestamp=2018-08-06T20:12:58.736764Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T20:12:58.736993Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmijv2vn kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T20:12:58.737411Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmijv2vn kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
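The virt-handler entries above trace its reconcile loop: each pass logs the expected VMI state ("Processing vmi ..., existing: ...") against the actual libvirt domain ("Domain: existing: ..."), then either drives the shutdown forward or, once the domain is gone, runs the local ephemeral data cleanup. As a minimal sketch, assuming the pod, namespace and VMI names from this particular run, the same loop can be followed by hand:

# Tail the handler's view of a single VMI (virt-handler runs in the
# KubeVirt deployment namespace, kube-system in this setup):
kubectl logs -n kube-system virt-handler-ksc2j -f | grep testvmijv2vn
# In parallel, watch the phase the controller publishes for that VMI:
kubectl get vmi testvmijv2vn -n kubevirt-test-default -o jsonpath='{.status.phase}'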
Pod name: virt-launcher-testvmi627mf-f98ps Pod phase: Running Pod name: virt-launcher-testvmifxwwf-chzgz Pod phase: Running level=info timestamp=2018-08-06T20:12:37.902772Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-08-06T20:12:39.011227Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-08-06T20:12:39.017416Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID d5069268-79b6-49a0-981f-b1915077bedf" level=info timestamp=2018-08-06T20:12:39.017673Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-08-06T20:12:39.025962Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T20:12:39.679299Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-08-06T20:12:39.711240Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-06T20:12:39.731190Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T20:12:39.731535Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-08-06T20:12:39.757672Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmifxwwf kind= uid=05e705a4-99b5-11e8-aca8-525500d15501 msg="Domain started." level=info timestamp=2018-08-06T20:12:39.767841Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmifxwwf kind= uid=05e705a4-99b5-11e8-aca8-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T20:12:39.774529Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-06T20:12:39.817654Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T20:12:39.857482Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmifxwwf kind= uid=05e705a4-99b5-11e8-aca8-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T20:12:40.036684Z pos=monitor.go:222 component=virt-launcher msg="Found PID for d5069268-79b6-49a0-981f-b1915077bedf: 177" Pod name: vmi-killerdrgls Pod phase: Pending Pod name: vmi-killerk655v Pod phase: Succeeded • Failure [108.279 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Killed VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:997 should be in Failed phase [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:998 Expected : Running to equal : Failed /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:1021 ------------------------------ STEP: Starting a VirtualMachineInstance level=info timestamp=2018-08-06T20:12:20.414040Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmifxwwf kind=VirtualMachineInstance uid=05e705a4-99b5-11e8-aca8-525500d15501 msg="Created virtual machine pod virt-launcher-testvmifxwwf-chzgz" level=info timestamp=2018-08-06T20:12:37.730953Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmifxwwf kind=VirtualMachineInstance uid=05e705a4-99b5-11e8-aca8-525500d15501 msg="Pod ownership transferred to the node virt-launcher-testvmifxwwf-chzgz" level=info timestamp=2018-08-06T20:12:39.910224Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmifxwwf
kind=VirtualMachineInstance uid=05e705a4-99b5-11e8-aca8-525500d15501 msg="VirtualMachineInstance defined." level=info timestamp=2018-08-06T20:12:39.962032Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmifxwwf kind=VirtualMachineInstance uid=05e705a4-99b5-11e8-aca8-525500d15501 msg="VirtualMachineInstance started." STEP: Killing the VirtualMachineInstance level=info timestamp=2018-08-06T20:12:50.118827Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmifxwwf kind=VirtualMachineInstance uid=05e705a4-99b5-11e8-aca8-525500d15501 msg="Created virtual machine pod virt-launcher-testvmifxwwf-chzgz" level=info timestamp=2018-08-06T20:12:50.118969Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmifxwwf kind=VirtualMachineInstance uid=05e705a4-99b5-11e8-aca8-525500d15501 msg="Pod ownership transferred to the node virt-launcher-testvmifxwwf-chzgz" level=info timestamp=2018-08-06T20:12:50.119851Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmifxwwf kind=VirtualMachineInstance uid=05e705a4-99b5-11e8-aca8-525500d15501 msg="VirtualMachineInstance defined." level=info timestamp=2018-08-06T20:12:50.120173Z pos=utils.go:257 component=tests namespace=kubevirt-test-default name=testvmifxwwf kind=VirtualMachineInstance uid=05e705a4-99b5-11e8-aca8-525500d15501 msg="VirtualMachineInstance started." STEP: Checking that the VirtualMachineInstance has 'Failed' phase • [SLOW TEST:87.258 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:52 Killed VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:997 should be left alone by virt-handler /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:1025 ------------------------------ volumedisk0 compute • [SLOW TEST:43.388 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with 3 CPU cores /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:56 should report 3 cpu cores under guest OS /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:62 ------------------------------ • ------------------------------ • [SLOW TEST:20.428 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:164 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-2Mi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ S [SKIPPING] [0.238 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:164 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-1Gi [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 No node with hugepages hugepages-1Gi capacity /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:216
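The hugepages-1Gi entry is skipped simply because no node advertises hugepages-1Gi capacity: a hugepages-backed VMI is only schedulable onto a node whose kubelet reports preallocated hugepages of the requested page size. A minimal sketch for checking this ahead of a run (reserving pages happens on the host itself, and 1Gi pages typically have to be allocated at boot):

# Hugepage resources each node reports to the scheduler:
kubectl describe nodes | grep -i hugepages
# Reserving 2Mi pages on a node (run as root on the host); the kubelet
# may need a restart before the new capacity shows up:
echo 64 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages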
------------------------------ • ------------------------------ • [SLOW TEST:100.836 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:294 when CPU model defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:340 should report defined CPU model /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:341 ------------------------------ • [SLOW TEST:116.901 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:294 when CPU model equals to passthrough /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:368 should report exactly the same model as node CPU /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:369 ------------------------------ • [SLOW TEST:119.569 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:294 when CPU model not defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:392 should report CPU model from libvirt capabilities /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:393 ------------------------------ • [SLOW TEST:61.831 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 New VirtualMachineInstance with all supported drives /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:413 should have all the device nodes /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:436 ------------------------------ • [SLOW TEST:38.981 seconds] LeaderElection /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:43 Start a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:53 when the controller pod is not running /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:54 should success /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:55 ------------------------------ •••••••••••Service cluster-ip-vmi successfully exposed for virtualmachineinstance testvmiphs85 ------------------------------ • [SLOW TEST:54.250 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:68 Should expose a Cluster IP service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:71 ------------------------------ Service cluster-ip-target-vmi successfully exposed for virtualmachineinstance testvmiphs85 •Service node-port-vmi successfully exposed for virtualmachineinstance testvmiphs85 Pod name: disks-images-provider-dmzgd Pod phase: Running Pod name: disks-images-provider-z2kx4 Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-2j49h Pod phase: Running 2018/08/06 20:23:47 http: TLS handshake error from 10.244.0.1:40560: EOF 2018/08/06 20:23:57 http: TLS handshake error from 10.244.0.1:40626: EOF 2018/08/06 20:24:07 http: TLS handshake error from 10.244.0.1:40704: EOF 2018/08/06 20:24:17 http: TLS handshake error from 10.244.0.1:40776: EOF 2018/08/06 20:24:27 http: TLS handshake error from 10.244.0.1:40850: EOF 2018/08/06 20:24:37 http: TLS handshake 
error from 10.244.0.1:40916: EOF 2018/08/06 20:24:47 http: TLS handshake error from 10.244.0.1:40982: EOF 2018/08/06 20:24:57 http: TLS handshake error from 10.244.0.1:41048: EOF 2018/08/06 20:25:07 http: TLS handshake error from 10.244.0.1:41118: EOF 2018/08/06 20:25:17 http: TLS handshake error from 10.244.0.1:41186: EOF 2018/08/06 20:25:27 http: TLS handshake error from 10.244.0.1:41252: EOF 2018/08/06 20:25:37 http: TLS handshake error from 10.244.0.1:41318: EOF 2018/08/06 20:25:47 http: TLS handshake error from 10.244.0.1:41384: EOF 2018/08/06 20:25:57 http: TLS handshake error from 10.244.0.1:41450: EOF 2018/08/06 20:26:07 http: TLS handshake error from 10.244.0.1:41518: EOF Pod name: virt-api-bcc6b587d-6jsc9 Pod phase: Running level=info timestamp=2018-08-06T20:25:25.745965Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T20:25:25.994284Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T20:25:26.053107Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T20:25:34.807846Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-08-06T20:25:34.815603Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/06 20:25:35 http: TLS handshake error from 10.244.0.1:45912: EOF level=info timestamp=2018-08-06T20:25:36.282908Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:25:45 http: TLS handshake error from 10.244.0.1:45978: EOF level=info timestamp=2018-08-06T20:25:48.748109Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T20:25:51.651124Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/08/06 20:25:55 http: TLS handshake error from 10.244.0.1:46044: EOF level=info timestamp=2018-08-06T20:25:55.743747Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-08-06T20:25:56.105917Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/08/06 20:26:05 http: TLS handshake error from 10.244.0.1:46110: EOF level=info timestamp=2018-08-06T20:26:06.337281Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-api-bcc6b587d-tsh68 Pod phase: Running Pod name: virt-controller-67dcdd8464-kkfsb Pod phase: Running level=info timestamp=2018-08-06T20:24:03.120811Z pos=preset.go:171 
component=virt-controller service=http namespace=kubevirt-test-default name=testvmiltjtr kind= uid=a8dee9a3-99b6-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:24:03.225732Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiltjtr\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiltjtr" level=info timestamp=2018-08-06T20:24:03.668739Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiltjtr\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmiltjtr, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: a8dee9a3-99b6-11e8-aca8-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiltjtr" level=info timestamp=2018-08-06T20:24:04.184800Z pos=preset.go:167 component=virt-controller service=http namespace=kubevirt-test-default name=testvmilf5pc kind= uid=a97c8e23-99b6-11e8-aca8-525500d15501 msg="VirtualMachineInstance is excluded from VirtualMachinePresets" level=info timestamp=2018-08-06T20:24:04.185589Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmilf5pc kind= uid=a97c8e23-99b6-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:24:04.482500Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmilf5pc\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmilf5pc, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: a97c8e23-99b6-11e8-aca8-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmilf5pc" level=info timestamp=2018-08-06T20:24:05.389052Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmil22hm kind= uid=aa347bac-99b6-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T20:24:05.389990Z pos=preset.go:255 component=virt-controller service=http namespace=kubevirt-test-default name=testvmil22hm kind= uid=aa347bac-99b6-11e8-aca8-525500d15501 msg="VirtualMachineInstancePreset test-conflict-2v5tc matches VirtualMachineInstance" level=info timestamp=2018-08-06T20:24:05.390186Z pos=preset.go:255 component=virt-controller service=http namespace=kubevirt-test-default name=testvmil22hm kind= uid=aa347bac-99b6-11e8-aca8-525500d15501 msg="VirtualMachineInstancePreset test-memory-6q4tb matches VirtualMachineInstance" level=error timestamp=2018-08-06T20:24:05.391494Z pos=preset.go:415 component=virt-controller service=http namespace=kubevirt-test-default name=testvmil22hm kind= uid=aa347bac-99b6-11e8-aca8-525500d15501 msg="VirtualMachinePresets cannot be applied due to conflicts: presets 'test-memory-6q4tb' and 'test-conflict-2v5tc' conflict: spec.resources.requests[memory]: {{128 6} {} 128M DecimalSI} != {{256 6} {} 256M DecimalSI}" level=warning timestamp=2018-08-06T20:24:05.391611Z pos=preset.go:157 component=virt-controller service=http namespace=kubevirt-test-default 
name=testvmil22hm kind= uid=aa347bac-99b6-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as failed" level=info timestamp=2018-08-06T20:24:05.391700Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmil22hm kind= uid=aa347bac-99b6-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:24:05.625300Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiphs85 kind= uid=aa55540f-99b6-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-08-06T20:24:05.626885Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiphs85 kind= uid=aa55540f-99b6-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-08-06T20:24:06.580856Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiphs85\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiphs85" Pod name: virt-controller-67dcdd8464-l7wpq Pod phase: Running Pod name: virt-controller-67dcdd8464-xkdqv Pod phase: Running level=info timestamp=2018-08-06T20:23:24.368893Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-cfch6 Pod phase: Running Pod name: virt-handler-ksc2j Pod phase: Running level=info timestamp=2018-08-06T20:24:43.547753Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi2mdhb kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T20:24:43.547972Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi2mdhb, existing: false\n" level=info timestamp=2018-08-06T20:24:43.548041Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T20:24:43.548180Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi2mdhb kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T20:24:43.548391Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi2mdhb kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T20:24:51.247734Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi2mdhb, existing: false\n" level=info timestamp=2018-08-06T20:24:51.247936Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T20:24:51.248150Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi2mdhb kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T20:24:51.248306Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi2mdhb kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-08-06T20:26:16.816839Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmiphs85, existing: true\n" level=info timestamp=2018-08-06T20:26:16.817452Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-08-06T20:26:16.817637Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-08-06T20:26:16.817906Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-08-06T20:26:16.818391Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmiphs85 kind=VirtualMachineInstance uid=aa55540f-99b6-11e8-aca8-525500d15501 msg="Processing vmi update" level=info timestamp=2018-08-06T20:26:16.853190Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmiphs85 kind=VirtualMachineInstance uid=aa55540f-99b6-11e8-aca8-525500d15501 msg="Synchronization loop succeeded." Pod name: netcat89wth Pod phase: Failed ++ head -n 1 +++ nc 192.168.66.102 30017 -i 1 -w 1 Ncat: Connection timed out. + x= + echo '' failed + '[' '' = 'Hello World!' ']' + echo failed + exit 1 Pod name: netcatb2jjl Pod phase: Succeeded ++ head -n 1 +++ nc 192.168.66.101 30017 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcatjrpsr Pod phase: Succeeded ++ head -n 1 +++ nc 10.111.146.160 27017 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: virt-launcher-testvmi627mf-f98ps Pod phase: Running Pod name: virt-launcher-testvmiphs85-bkrbx Pod phase: Running level=info timestamp=2018-08-06T20:24:25.289786Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-08-06T20:24:25.307696Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T20:24:25.315578Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 3fc9b220-7bcd-400e-baf1-930ab5ee6e85" level=info timestamp=2018-08-06T20:24:25.316894Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-08-06T20:24:26.133840Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-08-06T20:24:26.157273Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-06T20:24:26.163557Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmiphs85 kind= uid=aa55540f-99b6-11e8-aca8-525500d15501 msg="Domain started." 
level=info timestamp=2018-08-06T20:24:26.165670Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T20:24:26.165873Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-08-06T20:24:26.170946Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmiphs85 kind= uid=aa55540f-99b6-11e8-aca8-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T20:24:26.197059Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T20:24:26.203288Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T20:24:26.235195Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmiphs85 kind= uid=aa55540f-99b6-11e8-aca8-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T20:24:26.324464Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 3fc9b220-7bcd-400e-baf1-930ab5ee6e85: 179"
level=info timestamp=2018-08-06T20:26:16.850882Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmiphs85 kind=VirtualMachineInstance uid=aa55540f-99b6-11e8-aca8-525500d15501 msg="Synced vmi"
Pod name: vmi-killerdrgls
Pod phase: Pending
------------------------------
• Failure [82.386 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on a VM
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61
    Expose NodePort service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:124
      Should expose a NodePort service on a VMI and connect to it [It]
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:129

      Timed out after 60.000s.
      Expected
          : Failed
      to equal
          : Succeeded

      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:160
------------------------------
STEP: Exposing the service via virtctl command
STEP: Getting back the service
STEP: Getting the node IP from all nodes
STEP: Starting a pod which tries to reach the VMI via NodePort
STEP: Waiting for the pod to report a successful connection attempt
STEP: Starting a pod which tries to reach the VMI via NodePort
STEP: Waiting for the pod to report a successful connection attempt
Service cluster-ip-udp-vmi successfully exposed for virtualmachineinstance testvmisp56d
• [SLOW TEST:55.212 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose UDP service on a VMI
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:166
    Expose ClusterIP UDP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:173
      Should expose a ClusterIP service on a VMI and connect to it
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:177
------------------------------
Service node-port-udp-vmi successfully exposed for virtualmachineinstance testvmisp56d
Pod name: disks-images-provider-dmzgd
Pod phase: Running
Pod name: disks-images-provider-z2kx4
Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-bcc6b587d-2j49h
Pod phase: Running
2018/08/06 20:26:07 http: TLS handshake error from 10.244.0.1:41518: EOF
2018/08/06 20:26:17 http: TLS handshake error from 10.244.0.1:41592: EOF
2018/08/06 20:26:27 http: TLS handshake error from 10.244.0.1:41660: EOF
2018/08/06 20:26:37 http: TLS handshake error from 10.244.0.1:41726: EOF
level=info timestamp=2018-08-06T20:26:43.757969Z pos=subresource.go:75 component=virt-api msg="Websocket connection upgraded"
2018/08/06 20:26:47 http: TLS handshake error from 10.244.0.1:41800: EOF
2018/08/06 20:26:57 http: TLS handshake error from 10.244.0.1:41866: EOF
2018/08/06 20:27:07 http: TLS handshake error from 10.244.0.1:41932: EOF
2018/08/06 20:27:17 http: TLS handshake error from 10.244.0.1:41998: EOF
2018/08/06 20:27:27 http: TLS handshake error from 10.244.0.1:42064: EOF
2018/08/06 20:27:37 http: TLS handshake error from 10.244.0.1:42130: EOF
2018/08/06 20:27:47 http: TLS handshake error from 10.244.0.1:42196: EOF
2018/08/06 20:27:57 http: TLS handshake error from 10.244.0.1:42262: EOF
2018/08/06 20:28:07 http: TLS handshake error from 10.244.0.1:42328: EOF
2018/08/06 20:28:17 http: TLS handshake error from 10.244.0.1:42394: EOF
Pod name: virt-api-bcc6b587d-6jsc9
Pod phase: Running
level=info timestamp=2018-08-06T20:27:36.554015Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/08/06 20:27:45 http: TLS handshake error from 10.244.0.1:46790: EOF
level=info timestamp=2018-08-06T20:27:49.187141Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-08-06T20:27:52.477610Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/08/06 20:27:55 http: TLS handshake error from 10.244.0.1:46856: EOF
level=info timestamp=2018-08-06T20:27:56.324408Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-08-06T20:27:56.401842Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/08/06 20:28:05 http: TLS handshake error from 10.244.0.1:46922: EOF
level=info timestamp=2018-08-06T20:28:06.628762Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/08/06 20:28:15 http: TLS handshake error from 10.244.0.1:46988: EOF
level=info timestamp=2018-08-06T20:28:19.292673Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-08-06T20:28:22.638244Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/08/06 20:28:25 http: TLS handshake error from 10.244.0.1:47056: EOF
level=info timestamp=2018-08-06T20:28:26.298693Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-08-06T20:28:26.462152Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
Pod name: virt-api-bcc6b587d-tsh68
Pod phase: Running
Pod name: virt-controller-67dcdd8464-kkfsb
Pod phase: Running
level=info timestamp=2018-08-06T20:24:03.668739Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiltjtr\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmiltjtr, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: a8dee9a3-99b6-11e8-aca8-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiltjtr"
level=info timestamp=2018-08-06T20:24:04.184800Z pos=preset.go:167 component=virt-controller service=http namespace=kubevirt-test-default name=testvmilf5pc kind= uid=a97c8e23-99b6-11e8-aca8-525500d15501 msg="VirtualMachineInstance is excluded from VirtualMachinePresets"
level=info timestamp=2018-08-06T20:24:04.185589Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmilf5pc kind= uid=a97c8e23-99b6-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-08-06T20:24:04.482500Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmilf5pc\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmilf5pc, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: a97c8e23-99b6-11e8-aca8-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmilf5pc"
level=info timestamp=2018-08-06T20:24:05.389052Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmil22hm kind= uid=aa347bac-99b6-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-08-06T20:24:05.389990Z pos=preset.go:255 component=virt-controller service=http namespace=kubevirt-test-default name=testvmil22hm kind= uid=aa347bac-99b6-11e8-aca8-525500d15501 msg="VirtualMachineInstancePreset test-conflict-2v5tc matches VirtualMachineInstance"
level=info timestamp=2018-08-06T20:24:05.390186Z pos=preset.go:255 component=virt-controller service=http namespace=kubevirt-test-default name=testvmil22hm kind= uid=aa347bac-99b6-11e8-aca8-525500d15501 msg="VirtualMachineInstancePreset test-memory-6q4tb matches VirtualMachineInstance"
level=error timestamp=2018-08-06T20:24:05.391494Z pos=preset.go:415 component=virt-controller service=http namespace=kubevirt-test-default name=testvmil22hm kind= uid=aa347bac-99b6-11e8-aca8-525500d15501 msg="VirtualMachinePresets cannot be applied due to conflicts: presets 'test-memory-6q4tb' and 'test-conflict-2v5tc' conflict: spec.resources.requests[memory]: {{128 6} {} 128M DecimalSI} != {{256 6} {} 256M DecimalSI}"
level=warning timestamp=2018-08-06T20:24:05.391611Z pos=preset.go:157 component=virt-controller service=http namespace=kubevirt-test-default name=testvmil22hm kind= uid=aa347bac-99b6-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as failed"
level=info timestamp=2018-08-06T20:24:05.391700Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmil22hm kind= uid=aa347bac-99b6-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-08-06T20:24:05.625300Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiphs85 kind= uid=aa55540f-99b6-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-08-06T20:24:05.626885Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiphs85 kind= uid=aa55540f-99b6-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-08-06T20:24:06.580856Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiphs85\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiphs85"
level=info timestamp=2018-08-06T20:26:23.359470Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmisp56d kind= uid=fc7271d1-99b6-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-08-06T20:26:23.364158Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmisp56d kind= uid=fc7271d1-99b6-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized"
Pod name: virt-controller-67dcdd8464-l7wpq
Pod phase: Running
Pod name: virt-controller-67dcdd8464-xkdqv
Pod phase: Running
level=info timestamp=2018-08-06T20:23:24.368893Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
Pod name: virt-handler-cfch6
Pod phase: Running
Pod name: virt-handler-ksc2j
Pod phase: Running
level=info timestamp=2018-08-06T20:26:43.353990Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmisp56d kind=Domain uid=fc7271d1-99b6-11e8-aca8-525500d15501 msg="Domain is in state Running reason Unknown"
level=info timestamp=2018-08-06T20:26:43.417699Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmisp56d kind= uid=fc7271d1-99b6-11e8-aca8-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-08-06T20:26:43.417948Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmisp56d, existing: true\n"
level=info timestamp=2018-08-06T20:26:43.418640Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n"
level=info timestamp=2018-08-06T20:26:43.418805Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-08-06T20:26:43.418885Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-08-06T20:26:43.419059Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmisp56d kind= uid=fc7271d1-99b6-11e8-aca8-525500d15501 msg="No update processing required"
level=info timestamp=2018-08-06T20:26:43.421313Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-08-06T20:26:43.473391Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmisp56d kind= uid=fc7271d1-99b6-11e8-aca8-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-08-06T20:26:43.473538Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmisp56d, existing: true\n"
level=info timestamp=2018-08-06T20:26:43.473596Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n"
level=info timestamp=2018-08-06T20:26:43.473626Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-08-06T20:26:43.473646Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-08-06T20:26:43.473986Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmisp56d kind= uid=fc7271d1-99b6-11e8-aca8-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-08-06T20:26:43.485734Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmisp56d kind= uid=fc7271d1-99b6-11e8-aca8-525500d15501 msg="Synchronization loop succeeded."
Pod name: netcat6k26d
Pod phase: Running
++ head -n 1
+++ nc -ul 31016
+++ nc -up 31016 192.168.66.102 31017 -i 1 -w 1
+++ echo
Pod name: netcat89wth
Pod phase: Failed
++ head -n 1
+++ nc 192.168.66.102 30017 -i 1 -w 1
Ncat: Connection timed out.
+ x=
+ echo ''
failed
+ '[' '' = 'Hello World!' ']'
+ echo failed
+ exit 1
Pod name: netcatb2jjl
Pod phase: Succeeded
++ head -n 1
+++ nc 192.168.66.101 30017 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Pod name: netcatjrpsr
Pod phase: Succeeded
++ head -n 1
+++ nc 10.111.146.160 27017 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Pod name: netcatrb5l9
Pod phase: Succeeded
++ head -n 1
+++ nc -ul 28016
+++ nc -up 28016 10.110.57.247 28017 -i 1 -w 1
+++ echo
+ x='Hello UDP World!'
+ echo 'Hello UDP World!'
Hello UDP World!
+ '[' 'Hello UDP World!' = 'Hello UDP World!' ']'
+ echo succeeded
+ exit 0
succeeded
Pod name: netcatwxs6b
Pod phase: Succeeded
++ head -n 1
+++ echo
+++ nc -ul 29016
+++ nc -up 29016 10.104.128.72 29017 -i 1 -w 1
Hello UDP World!
succeeded
+ x='Hello UDP World!'
+ echo 'Hello UDP World!'
+ '[' 'Hello UDP World!' = 'Hello UDP World!' ']'
+ echo succeeded
+ exit 0
Pod name: netcatx8l7x
Pod phase: Succeeded
++ head -n 1
+++ nc -ul 31016
+++ echo
+++ nc -up 31016 192.168.66.101 31017 -i 1 -w 1
+ x='Hello UDP World!'
Hello UDP World!
+ echo 'Hello UDP World!'
+ '[' 'Hello UDP World!' = 'Hello UDP World!' ']'
+ echo succeeded
+ exit 0
succeeded
Pod name: virt-launcher-testvmi627mf-f98ps
Pod phase: Running
Pod name: virt-launcher-testvmiphs85-bkrbx
Pod phase: Running
level=info timestamp=2018-08-06T20:24:25.289786Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-08-06T20:24:25.307696Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T20:24:25.315578Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 3fc9b220-7bcd-400e-baf1-930ab5ee6e85"
level=info timestamp=2018-08-06T20:24:25.316894Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-08-06T20:24:26.133840Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-08-06T20:24:26.157273Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T20:24:26.163557Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmiphs85 kind= uid=aa55540f-99b6-11e8-aca8-525500d15501 msg="Domain started."
level=info timestamp=2018-08-06T20:24:26.165670Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T20:24:26.165873Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-08-06T20:24:26.170946Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmiphs85 kind= uid=aa55540f-99b6-11e8-aca8-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T20:24:26.197059Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T20:24:26.203288Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T20:24:26.235195Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmiphs85 kind= uid=aa55540f-99b6-11e8-aca8-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T20:24:26.324464Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 3fc9b220-7bcd-400e-baf1-930ab5ee6e85: 179"
level=info timestamp=2018-08-06T20:26:16.850882Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmiphs85 kind=VirtualMachineInstance uid=aa55540f-99b6-11e8-aca8-525500d15501 msg="Synced vmi"
Pod name: virt-launcher-testvmisp56d-bxw4p
Pod phase: Running
level=info timestamp=2018-08-06T20:26:42.256072Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-08-06T20:26:43.048374Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-08-06T20:26:43.053987Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 64208904-31be-47f0-9025-dff7db41bbcd"
level=info timestamp=2018-08-06T20:26:43.055397Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-08-06T20:26:43.061903Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T20:26:43.318201Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-08-06T20:26:43.348717Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T20:26:43.356755Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T20:26:43.369857Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-08-06T20:26:43.396566Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmisp56d kind= uid=fc7271d1-99b6-11e8-aca8-525500d15501 msg="Domain started."
level=info timestamp=2018-08-06T20:26:43.405532Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmisp56d kind= uid=fc7271d1-99b6-11e8-aca8-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T20:26:43.415229Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T20:26:43.422822Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T20:26:43.480390Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmisp56d kind= uid=fc7271d1-99b6-11e8-aca8-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T20:26:44.088914Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 64208904-31be-47f0-9025-dff7db41bbcd: 181"
Pod name: vmi-killerdrgls
Pod phase: Pending
• Failure [83.554 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose UDP service on a VMI
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:166
    Expose NodePort UDP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:205
      Should expose a NodePort service on a VMI and connect to it [It]
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:210

      Timed out after 60.000s.
      Expected
          : Running
      to equal
          : Succeeded

      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:247
------------------------------
STEP: Exposing the service via virtctl command
STEP: Getting back the cluster IP given for the service
STEP: Starting a pod which tries to reach the VMI via ClusterIP
STEP: Getting the node IP from all nodes
STEP: Starting a pod which tries to reach the VMI via NodePort
STEP: Waiting for the pod to report a successful connection attempt
STEP: Starting a pod which tries to reach the VMI via NodePort
STEP: Waiting for the pod to report a successful connection attempt
Service cluster-ip-vmirs successfully exposed for vmirs replicasetgjbzh
• [SLOW TEST:70.011 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on a VMI replica set
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:253
    Expose ClusterIP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:286
      Should create a ClusterIP service on VMRS and connect to it
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:290
------------------------------
Service cluster-ip-vm successfully exposed for virtualmachine testvmir88zg
VM testvmir88zg was scheduled to start
• [SLOW TEST:57.552 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on a VM
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:318
    Expose ClusterIP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:362
      Connect to ClusterIP service that was set when VM was offline
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:363
------------------------------
• [SLOW TEST:42.144 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with Disk PVC
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:38.066 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with CDRom PVC
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:229.265 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started and stopped multiple times
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with Disk PVC
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:239.787 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started and stopped multiple times
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with CDRom PVC
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:49.224 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With an emptyDisk defined
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:113
      should create a writeable emptyDisk with the right capacity
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:115
------------------------------
• [SLOW TEST:52.944 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With an emptyDisk defined and a specified serial number
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:163
      should create a writeable emptyDisk with the specified serial number
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:165
------------------------------
• [SLOW TEST:39.499 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With ephemeral alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205
      should be successfully started
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:207
------------------------------
• [SLOW TEST:136.886 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With ephemeral alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205
      should not persist data
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:218
------------------------------
Pod name: disks-images-provider-dmzgd
Pod phase: Running
Pod name: disks-images-provider-z2kx4
Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-bcc6b587d-2j49h
Pod phase: Running
2018/08/06 20:46:37 http: TLS handshake error from 10.244.0.1:49766: EOF
2018/08/06 20:46:47 http: TLS handshake error from 10.244.0.1:49832: EOF
2018/08/06 20:46:57 http: TLS handshake error from 10.244.0.1:49898: EOF
2018/08/06 20:47:07 http: TLS handshake error from 10.244.0.1:49964: EOF
2018/08/06 20:47:17 http: TLS handshake error from 10.244.0.1:50030: EOF
2018/08/06 20:47:27 http: TLS handshake error from 10.244.0.1:50096: EOF
2018/08/06 20:47:37 http: TLS handshake error from 10.244.0.1:50162: EOF
2018/08/06 20:47:47 http: TLS handshake error from 10.244.0.1:50228: EOF
2018/08/06 20:47:57 http: TLS handshake error from 10.244.0.1:50294: EOF
2018/08/06 20:48:07 http: TLS handshake error from 10.244.0.1:50360: EOF
2018/08/06 20:48:17 http: TLS handshake error from 10.244.0.1:50426: EOF
2018/08/06 20:48:27 http: TLS handshake error from 10.244.0.1:50492: EOF
2018/08/06 20:48:37 http: TLS handshake error from 10.244.0.1:50558: EOF
2018/08/06 20:48:47 http: TLS handshake error from 10.244.0.1:50624: EOF
2018/08/06 20:48:57 http: TLS handshake error from 10.244.0.1:50690: EOF
Pod name: virt-api-bcc6b587d-6jsc9
Pod phase: Running
level=info timestamp=2018-08-06T20:48:24.656118Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/08/06 20:48:25 http: TLS handshake error from 10.244.0.1:55086: EOF
level=info timestamp=2018-08-06T20:48:26.276944Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-08-06T20:48:28.745272Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-08-06T20:48:29.094924Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-08-06T20:48:34.139527Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-08-06T20:48:34.143058Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
2018/08/06 20:48:35 http: TLS handshake error from 10.244.0.1:55152: EOF
level=info timestamp=2018-08-06T20:48:39.717696Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/08/06 20:48:45 http: TLS handshake error from 10.244.0.1:55218: EOF
level=info timestamp=2018-08-06T20:48:54.756181Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/08/06 20:48:55 http: TLS handshake error from 10.244.0.1:55284: EOF
level=info timestamp=2018-08-06T20:48:56.288911Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-08-06T20:48:58.901653Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-08-06T20:48:59.171996Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
Pod name: virt-api-bcc6b587d-tsh68
Pod phase: Running
Pod name: virt-controller-67dcdd8464-kkfsb
Pod phase: Running
level=info timestamp=2018-08-06T20:41:41.471590Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi9wn2d kind= uid=1fabf889-99b9-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-08-06T20:41:41.630412Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi9wn2d\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi9wn2d"
level=info timestamp=2018-08-06T20:42:20.908917Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv2fv9 kind= uid=3731103b-99b9-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-08-06T20:42:20.913487Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv2fv9 kind= uid=3731103b-99b9-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-08-06T20:42:21.066613Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiv2fv9\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiv2fv9"
level=info timestamp=2018-08-06T20:43:59.248204Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv2fv9 kind= uid=71cdafc1-99b9-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-08-06T20:43:59.252157Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv2fv9 kind= uid=71cdafc1-99b9-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-08-06T20:43:59.616964Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiv2fv9\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiv2fv9"
level=info timestamp=2018-08-06T20:44:37.988650Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6n6hj kind= uid=88e3f93c-99b9-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-08-06T20:44:37.991563Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6n6hj kind= uid=88e3f93c-99b9-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-08-06T20:45:44.037278Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6n6hj kind= uid=b043e909-99b9-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-08-06T20:45:44.040973Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6n6hj kind= uid=b043e909-99b9-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-08-06T20:46:59.036626Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6n6hj kind= uid=dcf7078a-99b9-11e8-aca8-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-08-06T20:46:59.040290Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6n6hj kind= uid=dcf7078a-99b9-11e8-aca8-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-08-06T20:46:59.231667Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi6n6hj\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi6n6hj"
Pod name: virt-controller-67dcdd8464-l7wpq
Pod phase: Running
Pod name: virt-controller-67dcdd8464-xkdqv
Pod phase: Running
level=info timestamp=2018-08-06T20:23:24.368893Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
Pod name: virt-handler-cfch6
Pod phase: Running
Pod name: virt-handler-ksc2j
Pod phase: Running
level=info timestamp=2018-08-06T20:46:58.628256Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi6n6hj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-08-06T20:47:15.557192Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi6n6hj, existing: false\n"
level=info timestamp=2018-08-06T20:47:15.557775Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-08-06T20:47:15.558132Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi6n6hj kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-08-06T20:47:15.560180Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi6n6hj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-08-06T20:47:15.598698Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi6n6hj, existing: true\n"
level=info timestamp=2018-08-06T20:47:15.598905Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n"
level=info timestamp=2018-08-06T20:47:15.598997Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-08-06T20:47:15.599266Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmi6n6hj kind= uid=dcf7078a-99b9-11e8-aca8-525500d15501 msg="No update processing required"
level=info timestamp=2018-08-06T20:47:15.633298Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi6n6hj kind= uid=dcf7078a-99b9-11e8-aca8-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-08-06T20:47:15.633628Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi6n6hj, existing: true\n"
level=info timestamp=2018-08-06T20:47:15.633685Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Failed\n"
level=info timestamp=2018-08-06T20:47:15.633749Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-08-06T20:47:15.633958Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmi6n6hj kind= uid=dcf7078a-99b9-11e8-aca8-525500d15501 msg="No update processing required"
level=info timestamp=2018-08-06T20:47:15.634195Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi6n6hj kind= uid=dcf7078a-99b9-11e8-aca8-525500d15501 msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvmi6n6hj-bgd58
Pod phase: Running
level=info timestamp=2018-08-06T20:47:04.445479Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-08-06T20:47:04.445728Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-08-06T20:47:04.450403Z pos=libvirt.go:261 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
level=info timestamp=2018-08-06T20:47:14.735673Z pos=libvirt.go:276 component=virt-launcher msg="Connected to libvirt daemon"
level=info timestamp=2018-08-06T20:47:14.846846Z pos=virt-launcher.go:146 component=virt-launcher msg="Watchdog file created at /var/run/kubevirt/watchdog-files/kubevirt-test-default_testvmi6n6hj"
level=info timestamp=2018-08-06T20:47:14.848880Z pos=client.go:152 component=virt-launcher msg="Registered libvirt event notify callback"
level=info timestamp=2018-08-06T20:47:14.849614Z pos=virt-launcher.go:63 component=virt-launcher msg="Marked as ready"
• Failure [274.085 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With VirtualMachineInstance with two PVCs
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:266
      should start vmi multiple times [It]
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:278

      Timed out after 120.062s.
      Timed out waiting for VMI to enter Running phase
      Expected
          : false
      to equal
          : true

      /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1078
------------------------------
STEP: Starting and stopping the VirtualMachineInstance number of times
STEP: Starting a VirtualMachineInstance
STEP: Waiting until the VirtualMachineInstance will start
level=info timestamp=2018-08-06T20:44:38.277042Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi6n6hj kind=VirtualMachineInstance uid=88e3f93c-99b9-11e8-aca8-525500d15501 msg="Created virtual machine pod virt-launcher-testvmi6n6hj-gd26q"
level=info timestamp=2018-08-06T20:44:57.293399Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi6n6hj kind=VirtualMachineInstance uid=88e3f93c-99b9-11e8-aca8-525500d15501 msg="Pod ownership transferred to the node virt-launcher-testvmi6n6hj-gd26q"
level=info timestamp=2018-08-06T20:44:59.402907Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi6n6hj kind=VirtualMachineInstance uid=88e3f93c-99b9-11e8-aca8-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-08-06T20:44:59.414274Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi6n6hj kind=VirtualMachineInstance uid=88e3f93c-99b9-11e8-aca8-525500d15501 msg="VirtualMachineInstance started."
STEP: Starting a VirtualMachineInstance
STEP: Waiting until the VirtualMachineInstance will start
level=info timestamp=2018-08-06T20:45:44.151605Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi6n6hj kind=VirtualMachineInstance uid=88e3f93c-99b9-11e8-aca8-525500d15501 msg="Created virtual machine pod virt-launcher-testvmi6n6hj-gd26q"
level=info timestamp=2018-08-06T20:45:44.151830Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi6n6hj kind=VirtualMachineInstance uid=88e3f93c-99b9-11e8-aca8-525500d15501 msg="Pod ownership transferred to the node virt-launcher-testvmi6n6hj-gd26q"
level=info timestamp=2018-08-06T20:45:44.152545Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi6n6hj kind=VirtualMachineInstance uid=88e3f93c-99b9-11e8-aca8-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-08-06T20:45:44.152974Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi6n6hj kind=VirtualMachineInstance uid=88e3f93c-99b9-11e8-aca8-525500d15501 msg="VirtualMachineInstance started."
STEP: Starting a VirtualMachineInstance
STEP: Waiting until the VirtualMachineInstance will start
level=info timestamp=2018-08-06T20:46:59.201763Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi6n6hj kind=VirtualMachineInstance uid=88e3f93c-99b9-11e8-aca8-525500d15501 msg="Created virtual machine pod virt-launcher-testvmi6n6hj-gd26q"
level=info timestamp=2018-08-06T20:46:59.201977Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi6n6hj kind=VirtualMachineInstance uid=88e3f93c-99b9-11e8-aca8-525500d15501 msg="Pod ownership transferred to the node virt-launcher-testvmi6n6hj-gd26q"
level=info timestamp=2018-08-06T20:46:59.202635Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi6n6hj kind=VirtualMachineInstance uid=88e3f93c-99b9-11e8-aca8-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-08-06T20:46:59.203060Z pos=utils.go:246 component=tests namespace=kubevirt-test-default name=testvmi6n6hj kind=VirtualMachineInstance uid=88e3f93c-99b9-11e8-aca8-525500d15501 msg="VirtualMachineInstance started."
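The Storage failure above is the standard Gomega polling pattern timing out: tests/utils.go waits for the VMI phase with an Eventually assertion, and after 120s the unmet Equal(true) produces the "Expected : false to equal : true" output. A minimal sketch of that pattern, with a hypothetical getVMIPhase helper standing in for the real client call (not the suite's actual code):

    package tests

    import (
        "time"

        . "github.com/onsi/gomega"
    )

    // waitForVMIRunning polls the VMI phase once a second for up to 120s.
    // If the phase never becomes "Running", Gomega fails with exactly the
    // shape seen above: "Timed out after 120.0xxs. ... Expected false to equal true".
    func waitForVMIRunning(getVMIPhase func() string) {
        Eventually(func() bool {
            return getVMIPhase() == "Running" // hypothetical helper, stands in for the k8s client lookup
        }, 120*time.Second, 1*time.Second).Should(Equal(true))
    }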
• [SLOW TEST:21.198 seconds]
HookSidecars
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40
  VMI definition
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58
    with SM BIOS hook sidecar
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59
      should successfully start with hook sidecar annotation
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:60
------------------------------
• [SLOW TEST:21.231 seconds]
HookSidecars
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40
  VMI definition
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58
    with SM BIOS hook sidecar
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59
      should call Collect and OnDefineDomain on the hook sidecar
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:67
------------------------------
• [SLOW TEST:21.395 seconds]
HookSidecars
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40
  VMI definition
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58
    with SM BIOS hook sidecar
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59
      should update domain XML with SM BIOS properties
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:83
------------------------------
••
------------------------------
• [SLOW TEST:20.507 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should update VirtualMachine once VMIs are up
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:195
------------------------------
••
------------------------------
• [SLOW TEST:63.368 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should recreate VirtualMachineInstance if it gets deleted
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:245
------------------------------

panic: test timed out after 1h30m0s

goroutine 8006 [running]:
testing.(*M).startAlarm.func1()
    /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1240 +0xfc
created by time.goFunc
    /gimme/.gimme/versions/go1.10.linux.amd64/src/time/sleep.go:172 +0x44

goroutine 1 [chan receive, 90 minutes]:
testing.(*T).Run(0xc42041fe00, 0x139f775, 0x9, 0x1431cc8, 0x4801e6)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:825 +0x301
testing.runTests.func1(0xc42041fd10)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1063 +0x64
testing.tRunner(0xc42041fd10, 0xc420693df8)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:777 +0xd0
testing.runTests(0xc42088b460, 0x1d33a50, 0x1, 0x1, 0x412009)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1061 +0x2c4
testing.(*M).Run(0xc420a42580, 0x0)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:978 +0x171
main.main()
    _testmain.go:44 +0x151

goroutine 20 [chan receive]:
kubevirt.io/kubevirt/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x1d5f280)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/golang/glog/glog.go:879 +0x8b
created by kubevirt.io/kubevirt/vendor/github.com/golang/glog.init.0
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/golang/glog/glog.go:410 +0x203

goroutine 21 [syscall, 90 minutes]:
os/signal.signal_recv(0x0)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/sigqueue.go:139 +0xa6
os/signal.loop()
    /gimme/.gimme/versions/go1.10.linux.amd64/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.0
    /gimme/.gimme/versions/go1.10.linux.amd64/src/os/signal/signal_unix.go:28 +0x41

goroutine 7 [select]:
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/asyncassertion.(*AsyncAssertion).match(0xc421020c40, 0x14c5a60, 0x1d7d938, 0x412801, 0x0, 0x0, 0x0, 0x1d7d938)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:139 +0x2e6
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/asyncassertion.(*AsyncAssertion).Should(0xc421020c40, 0x14c5a60, 0x1d7d938, 0x0, 0x0, 0x0, 0xc421020c40)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:48 +0x62
kubevirt.io/kubevirt/tests_test.glob..func13.3.8()
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:313 +0x8e9
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc420129f80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0x9c
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc420129f80, 0x3, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x13e
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc4205ed9e0, 0x14b7ce0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x7f
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc420530e10, 0x0, 0x14b7ce0, 0xc4200fd4c0)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:203 +0x648
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc420530e10, 0x14b7ce0, 0xc4200fd4c0)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xff
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc420145e00, 0xc420530e10, 0x0)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10d
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc420145e00, 0x1)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x329
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc420145e00, 0xb)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x11b
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc4200faaf0, 0x7f60ffb0e8c0, 0xc42041fe00, 0x13a1d58, 0xb, 0xc42088b4a0, 0x2, 0x2, 0x14d45e0, 0xc4200fd4c0, ...)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x27c
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x14b8d40, 0xc42041fe00, 0x13a1d58, 0xb, 0xc42088b480, 0x2, 0x2, 0x2)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:221 +0x258
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x14b8d40, 0xc42041fe00, 0x13a1d58, 0xb, 0xc4203cde40, 0x1, 0x1, 0x1)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:209 +0xab
kubevirt.io/kubevirt/tests_test.TestTests(0xc42041fe00)
    /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:43 +0xaa
testing.tRunner(0xc42041fe00, 0x1431cc8)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
    /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:824 +0x2e0

goroutine 8 [chan receive, 90 minutes]:
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).registerForInterrupts(0xc420145e00, 0xc4200e1c80)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:223 +0xd1
created by kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:60 +0x88

goroutine 9 [select, 90 minutes, locked to thread]:
runtime.gopark(0x1433ea0, 0x0, 0x139c297, 0x6, 0x18, 0x1)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/proc.go:291 +0x11a
runtime.selectgo(0xc42047df50, 0xc4200e1d40)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/select.go:392 +0xe50
runtime.ensureSigM.func1()
    /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/signal_unix.go:549 +0x1f4
runtime.goexit()
    /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/asm_amd64.s:2361 +0x1

goroutine 51 [IO wait]:
internal/poll.runtime_pollWait(0x7f60ffb98f00, 0x72, 0xc420d47850)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc4204dce18, 0x72, 0xffffffffffffff00, 0x14b9f00, 0x1c4a7d0)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_poll_runtime.go:85 +0x9b
internal/poll.(*pollDesc).waitRead(0xc4204dce18, 0xc420d06000, 0x8000, 0x8000)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Read(0xc4204dce00, 0xc420d06000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_unix.go:157 +0x17d
net.(*netFD).Read(0xc4204dce00, 0xc420d06000, 0x8000, 0x8000, 0x0, 0x8, 0x7ffb)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc4200f8850, 0xc420d06000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/net/net.go:176 +0x6a
crypto/tls.(*block).readFromUntil(0xc4207da810, 0x7f60ffbaf608, 0xc4200f8850, 0x5, 0xc4200f8850, 0x0)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:493 +0x96
crypto/tls.(*Conn).readRecord(0xc42070a000, 0x1433f17, 0xc42070a120, 0x20)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:595 +0xe0
crypto/tls.(*Conn).Read(0xc42070a000, 0xc4207d5000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:1156 +0x100
bufio.(*Reader).Read(0xc4206a3f80, 0xc4207442d8, 0x9, 0x9, 0xc42074d1f8, 0xc4209e6e80, 0xc420d47d10)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/bufio/bufio.go:216 +0x238
io.ReadAtLeast(0x14b6ae0, 0xc4206a3f80, 0xc4207442d8, 0x9, 0x9, 0x9, 0xc420d47ce0, 0xc420d47ce0, 0x406614)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:309 +0x86
io.ReadFull(0x14b6ae0, 0xc4206a3f80, 0xc4207442d8, 0x9, 0x9, 0xc42074d1a0, 0xc420d47d10, 0xc400003101)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:327 +0x58
kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.readFrameHeader(0xc4207442d8, 0x9, 0x9, 0x14b6ae0, 0xc4206a3f80, 0x0, 0xc400000000, 0x7efa6d, 0xc420d47fb0)
    /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/frame.go:237 +0x7b
kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc4207442a0, 0xc4207da210, 0x0, 0x0, 0x0)
    /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/frame.go:492 +0xa4
kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc420d47fb0, 0x1432c20, 0xc420477fb0)
    /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:1428 +0x8e
kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc4207d8820)
    /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:1354 +0x76
created by kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*Transport).newClientConn
    /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:579 +0x651

goroutine 5919 [chan send, 16 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc4206c9440)
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

goroutine 836 [chan receive, 84 minutes]:
kubevirt.io/kubevirt/pkg/kubecli.(*asyncWSRoundTripper).WebsocketCallback(0xc4205dc000, 0xc420144140, 0xc420a28090, 0x0, 0x0, 0x18, 0xc420d19ec8)
    /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:163 +0x32b
kubevirt.io/kubevirt/pkg/kubecli.(*asyncWSRoundTripper).WebsocketCallback-fm(0xc420144140, 0xc420a28090, 0x0, 0x0, 0xc420144140, 0xc420a28090)
    /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:313 +0x52
kubevirt.io/kubevirt/pkg/kubecli.(*WebsocketRoundTripper).RoundTrip(0xc4205dc840, 0xc420140500, 0x0, 0x0, 0x0)
    /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:142 +0xab
kubevirt.io/kubevirt/pkg/kubecli.(*vmis).asyncSubresourceHelper.func1(0x14b6fc0, 0xc4205dc840, 0xc420140500, 0xc4205ca900)
    /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:328 +0x56
created by kubevirt.io/kubevirt/pkg/kubecli.(*vmis).asyncSubresourceHelper
    /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:326 +0x33a

goroutine 1402 [chan send, 80 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc4208c6c00)
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

goroutine 1706 [chan send, 78 minutes]:
kubevirt.io/kubevirt/tests_test.glob..func23.1.2.1.1(0x14f22e0, 0xc420503ec0, 0xc4204bc068, 0xc420723a40, 0xc42000f608, 0xc42000f628)
    /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:81 +0x138
created by kubevirt.io/kubevirt/tests_test.glob..func23.1.2.1
    /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:73 +0x386

goroutine 5437 [chan send, 20 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc4205e6cc0)
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

goroutine 6547 [chan send, 10 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc4206c8330)
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

goroutine 6158 [chan send, 15 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc4206f70b0)
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

goroutine 6896 [chan send, 7 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc42075d7d0)
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

goroutine 6757 [chan send, 8 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc42075d980)
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

make: *** [functest] Error 2
+ make cluster-down
./cluster/down.sh
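The closing panic is not a test assertion at all: go test arms a watchdog from its -timeout flag (evidently 90m for this run), and when the suite outlives it the testing package panics with "test timed out after 1h30m0s" and dumps every goroutine, which make then surfaced as "*** [functest] Error 2". A minimal, hypothetical reproduction, not part of this suite:

    // Run with: go test -timeout 90m
    // Any test still running when the -timeout deadline expires makes the
    // testing package panic with "test timed out after 1h30m0s" and print a
    // stack dump of all goroutines, exactly as seen above.
    package demo

    import (
        "testing"
        "time"
    )

    func TestOutlivesDeadline(t *testing.T) {
        time.Sleep(2 * time.Hour) // never returns before the 90m deadline
    }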