+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release
+ [[ k8s-1.11.0-release =~ openshift-.* ]]
+ [[ k8s-1.11.0-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.11.0
+ KUBEVIRT_PROVIDER=k8s-1.11.0
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/31 11:03:44 Waiting for host: 192.168.66.101:22
2018/07/31 11:03:47 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/31 11:03:55 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/31 11:04:00 Connected to tcp://192.168.66.101:22
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 1 -eq 0 ]]
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0731 11:04:00.344580 1257 feature_gate.go:230] feature gates: &{map[]}
I0731 11:04:00.407534 1257 kernel_validator.go:81] Validating kernel version
I0731 11:04:00.407794 1257 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
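The "Waiting for host ... Problem with dial ... Sleeping 5s" lines above come from a dial-and-retry loop in the provider's up logic. A minimal sketch of that behaviour, with a hypothetical `wait_for_host` helper and a stubbed `probe` standing in for the real TCP dial (the actual script sleeps 5s between attempts):

```shell
# Hedged sketch, not the actual up.sh source: retry a probe command,
# logging between failures, until it succeeds or attempts run out.
wait_for_host() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@"; then
      echo "Connected"
      return 0
    fi
    echo "Problem with dial: no route to host. Sleeping 5s"
    sleep 0   # the real loop sleeps 5s; shortened here for the sketch
  done
  return 1
}

# Simulated probe that fails twice, then succeeds (mirrors the log above):
n=0
probe() { n=$((n + 1)); [ "$n" -ge 3 ]; }
wait_for_host 5 probe
```

The two failed dials followed by "Connected to tcp://192.168.66.101:22" in the log fit this shape: a bounded retry with a fixed back-off rather than an immediate hard failure.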
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 59.013395 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation
[bootstraptoken] using token: abcdef.1234567890123456
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:41377ae9085bd5a7f6a3782cee4c5b8a9d7119e172588d4f052e033c3ed6ffab

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node/node01 untainted
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f /tmp/local-volume.yaml
storageclass.storage.k8s.io/local created
configmap/local-storage-config created
clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-pv-binding created
clusterrole.rbac.authorization.k8s.io/local-storage-provisioner-node-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-node-binding created
role.rbac.authorization.k8s.io/local-storage-provisioner-jobs-role created
rolebinding.rbac.authorization.k8s.io/local-storage-provisioner-jobs-rolebinding created
serviceaccount/local-storage-admin created
daemonset.extensions/local-volume-provisioner created
2018/07/31 11:05:15 Waiting for host: 192.168.66.102:22
2018/07/31 11:05:18 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/31 11:05:26 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/31 11:05:31 Connected to tcp://192.168.66.102:22
++ systemctl status docker
++ wc -l
++ grep active
+ [[ 0 -eq 0 ]]
+ sleep 2
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 1 -eq 0 ]]
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
I0731 11:05:35.094973 1266 kernel_validator.go:81] Validating kernel version
I0731 11:05:35.095462 1266 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 38739968 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
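Before running `kubeadm join`, the trace above probes docker twice with `systemctl status docker | grep active | wc -l`, sleeping 2s and re-checking when the count is 0. A minimal sketch of that gate, with a hypothetical `fake_status` standing in for `systemctl status docker` so the logic is self-contained:

```shell
# Hedged sketch of the docker-active probe seen in the trace above.
# fake_status is a stand-in for `systemctl status docker`.
fake_status() {
  printf '%s\n' \
    '   Loaded: loaded (/usr/lib/systemd/system/docker.service)' \
    '   Active: active (running) since Tue 2018-07-31 11:05:33 UTC'
}

count=$(fake_status | grep active | wc -l)
if [[ $count -eq 0 ]]; then
  sleep 2   # the script sleeps 2s and probes again, as the log shows
fi
echo "$count"
```

Counting matched lines rather than checking `grep`'s exit status is why the trace shows `[[ 0 -eq 0 ]]` on the first (failed) probe and `[[ 1 -eq 0 ]]` once the daemon is active.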
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 56s v1.11.0
node02 Ready 22s v1.11.0
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME     STATUS   ROLES    AGE   VERSION
node01   Ready    master   57s   v1.11.0
node02   Ready             23s   v1.11.0
+ make cluster-sync
./cluster/build.sh
Building ...
sha256:dcf2b21fa2ed11dcf9dbba21b1cca0ee3fad521a0e9aee61c06d0b0b66a4b200
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:dcf2b21fa2ed11dcf9dbba21b1cca0ee3fad521a0e9aee61c06d0b0b66a4b200
go version go1.10 linux/amd64
go version go1.10 linux/amd64
find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory
Compiling tests...
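The readiness gate in the trace above is a simple text filter: the script disables `set -e`, greps the headerless `kubectl get nodes` output for `NotReady`, and only proceeds when the match is empty. A minimal sketch of that check against sample output shaped like the log's:

```shell
# Hedged sketch of the NotReady gate above; `nodes` mirrors the
# `kubectl get nodes --no-headers` output shown in the trace.
nodes=$'node01 Ready master 56s v1.11.0\nnode02 Ready 22s v1.11.0'
not_ready=$(printf '%s\n' "$nodes" | grep NotReady || true)
if [ -n "$not_ready" ]; then
  echo 'Nodes are not ready'
else
  echo 'Nodes are ready:'
fi
```

Note that `grep NotReady` cannot accidentally match the `Ready` column, since grep matches the full literal `NotReady`; the `|| true` keeps the pipeline from failing once `set -e` is restored.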
compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 40.39 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> bfe77d5699ed Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> b00c84523b53 Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> b76b8bd8cd39 Step 5/8 : USER 1001 ---> Using cache ---> b6d9ad9ed232 Step 6/8 : COPY virt-controller /usr/bin/virt-controller ---> 7275484f38b1 Removing intermediate container 4b8adbee5d2f Step 7/8 : ENTRYPOINT /usr/bin/virt-controller ---> Running in 0508a538ac40 ---> 30aee009019f Removing intermediate container 0508a538ac40 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release1" '' "virt-controller" '' ---> Running in 2453c8bfc841 ---> 962d87fc841e Removing intermediate container 2453c8bfc841 Successfully built 962d87fc841e Sending build context to Docker daemon 43.32 MB Step 1/9 : FROM kubevirt/libvirt:4.2.0 ---> 5f0bfe81a3e0 Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 945996802736 Step 3/9 : RUN dnf -y install socat genisoimage && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> 1dcd22d08d0e Step 4/9 : COPY virt-launcher /usr/bin/virt-launcher ---> 7e1fb608ac17 Removing intermediate container bd0866c10adf Step 5/9 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64 ---> Running in f5a9a57234ed  ---> a1a88f2432db Removing intermediate container f5a9a57234ed Step 6/9 : RUN mkdir -p /usr/share/kubevirt/virt-launcher ---> Running in 988e053452c4  ---> 139af589bcf9 Removing intermediate container 988e053452c4 Step 7/9 : COPY sock-connector /usr/share/kubevirt/virt-launcher/ ---> 3a71a807811b Removing intermediate container 0297b7bbca38 Step 8/9 : ENTRYPOINT /usr/bin/virt-launcher ---> Running in 896e4d58d3c4 ---> 03b9f22281e2 Removing intermediate container 896e4d58d3c4 Step 
9/9 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release1" '' "virt-launcher" '' ---> Running in 1917d57a0a4d ---> b499484fbc9c Removing intermediate container 1917d57a0a4d Successfully built b499484fbc9c Sending build context to Docker daemon 41.69 MB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> bfe77d5699ed Step 3/5 : COPY virt-handler /usr/bin/virt-handler ---> aac354a8d955 Removing intermediate container a4e44dd6fb1a Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in 183e135831e8 ---> cebb9826298a Removing intermediate container 183e135831e8 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release1" '' "virt-handler" '' ---> Running in a8a3d627c93b ---> 76d5185ac121 Removing intermediate container a8a3d627c93b Successfully built 76d5185ac121 Sending build context to Docker daemon 38.81 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> bfe77d5699ed Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> ed1ebf600ee1 Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> 0769dad023e5 Step 5/8 : USER 1001 ---> Using cache ---> 0cb65afb0c2b Step 6/8 : COPY virt-api /usr/bin/virt-api ---> 2765127af501 Removing intermediate container b923ca27a1af Step 7/8 : ENTRYPOINT /usr/bin/virt-api ---> Running in 6649d77e5dc8 ---> d838cc78e192 Removing intermediate container 6649d77e5dc8 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release1" '' "virt-api" '' ---> Running in 1f9a27498849 ---> f0e840c16336 Removing intermediate container 1f9a27498849 Successfully built f0e840c16336 Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:28 ---> cc510acfcd70 Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> bfe77d5699ed Step 3/7 : ENV container docker ---> Using cache ---> 62847a2a1fa8 Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M 
/images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> 02134835a6aa Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> ec0843818da7 Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> 754029bb4bd2 Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.11.0-release1" '' ---> Using cache ---> 414a14b84634 Successfully built 414a14b84634 Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> bfe77d5699ed Step 3/5 : ENV container docker ---> Using cache ---> 62847a2a1fa8 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> 207487abe7b2 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release1" '' "vm-killer" '' ---> Using cache ---> 99323d8b7eef Successfully built 99323d8b7eef Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> 68f33cf86aab Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> 5734d749eb5c Step 3/7 : ENV container docker ---> Using cache ---> f8775a77966f Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> 1a40cf222a61 Step 5/7 : ADD entry-point.sh / ---> Using cache ---> 77b545d92fe7 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> dfe20d463305 Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release1" '' "registry-disk-v1alpha" '' ---> Using cache ---> 1e0e2ffb13f8 Successfully built 1e0e2ffb13f8 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33238/kubevirt/registry-disk-v1alpha:devel ---> 1e0e2ffb13f8 Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> a1fb4d5306d3 Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> 
a054124d5ac2 Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.11.0-release1" '' ---> Using cache ---> cb18c92749c2 Successfully built cb18c92749c2 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33238/kubevirt/registry-disk-v1alpha:devel ---> 1e0e2ffb13f8 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 1a4d595e9286 Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> 8c609e00a270 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.11.0-release1" '' ---> Using cache ---> 0066b03b4151 Successfully built 0066b03b4151 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33238/kubevirt/registry-disk-v1alpha:devel ---> 1e0e2ffb13f8 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 1a4d595e9286 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> c1855d1ef188 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.11.0-release1" '' ---> Using cache ---> 8e409f932102 Successfully built 8e409f932102 Sending build context to Docker daemon 35.59 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> bfe77d5699ed Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> 985fe391c056 Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 3b2cae8ac543 Step 5/8 : USER 1001 ---> Using cache ---> 0c06e5b4a900 Step 6/8 : COPY subresource-access-test /subresource-access-test ---> 6014bae34bae Removing intermediate container db4d67a8feae Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in 1a56cc164f2b ---> 52477504d58f Removing intermediate container 1a56cc164f2b Step 8/8 : LABEL 
"kubevirt-functional-tests-k8s-1.11.0-release1" '' "subresource-access-test" '' ---> Running in b1bb4d321ac2 ---> a511dd34e409 Removing intermediate container b1bb4d321ac2 Successfully built a511dd34e409 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:28 ---> cc510acfcd70 Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> bfe77d5699ed Step 3/9 : ENV container docker ---> Using cache ---> 62847a2a1fa8 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> d3456b1644b1 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> 0ba81fddbba1 Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 5d33abe3f819 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 783826523be1 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 711bc8d15952 Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release1" '' "winrmcli" '' ---> Using cache ---> a9cfd34768a8 Successfully built a9cfd34768a8 Sending build context to Docker daemon 36.8 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> e3238544ad97 Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar ---> b4d903bb4e60 Removing intermediate container 6de528fc4901 Step 4/5 : ENTRYPOINT /example-hook-sidecar ---> Running in b696a5d6af8c ---> 258ef4f7d506 Removing intermediate container b696a5d6af8c Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-k8s-1.11.0-release1" '' ---> Running in 1a349f965554 ---> fe8348e2f87b Removing intermediate container 1a349f965554 Successfully built fe8348e2f87b hack/build-docker.sh push The push refers to a repository [localhost:33238/kubevirt/virt-controller] 0fa73472ca18: Preparing aa89340cf7a8: Preparing 891e1e4ef82a: Preparing 
aa89340cf7a8: Pushed 0fa73472ca18: Pushed 891e1e4ef82a: Pushed devel: digest: sha256:6ab3ac7c5eceecbdbe2686a74526e2489127ae2a4906e43c1808860776e36bb9 size: 949 The push refers to a repository [localhost:33238/kubevirt/virt-launcher] fda1a44e83a9: Preparing 1624e8d6a81f: Preparing 7228113bb14a: Preparing 0dcbb2a0f1e2: Preparing af293cb2890d: Preparing da38cf808aa5: Preparing af293cb2890d: Waiting b83399358a92: Preparing 186d8b3e4fd8: Preparing fa6154170bf5: Preparing 5eefb9960a36: Preparing 891e1e4ef82a: Preparing 186d8b3e4fd8: Waiting 5eefb9960a36: Waiting fa6154170bf5: Waiting 891e1e4ef82a: Waiting b83399358a92: Waiting 1624e8d6a81f: Pushed fda1a44e83a9: Pushed da38cf808aa5: Pushed b83399358a92: Pushed 186d8b3e4fd8: Pushed fa6154170bf5: Pushed 7228113bb14a: Pushed 891e1e4ef82a: Mounted from kubevirt/virt-controller af293cb2890d: Pushed 0dcbb2a0f1e2: Pushed 5eefb9960a36: Pushed devel: digest: sha256:c58b66b5faf685b1e26bc90dc1505be47312535885a0e73f873bb31f98688481 size: 2620 The push refers to a repository [localhost:33238/kubevirt/virt-handler] 714003ba07e8: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-launcher 714003ba07e8: Pushed devel: digest: sha256:4c4cca4d2a415076389a6b9a14a9226c0c11a01f2c3d17a830d1d7470080722f size: 741 The push refers to a repository [localhost:33238/kubevirt/virt-api] 64939c893591: Preparing 82fc744c99b4: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-handler 82fc744c99b4: Pushed 64939c893591: Pushed devel: digest: sha256:c9993cbe1f835eac60e131042f6e103f7456c9a59111e11c05132c9a6d4be090 size: 948 The push refers to a repository [localhost:33238/kubevirt/disks-images-provider] 71ad31feb2c5: Preparing 21d4b721776e: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-api 71ad31feb2c5: Pushed 21d4b721776e: Pushed devel: digest: sha256:3fff917ba23c7fba74592c8555e61bec3316357dab43d2b2ccabed0831640425 size: 948 The push refers to a repository 
[localhost:33238/kubevirt/vm-killer] c4cfadeeaf5f: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/disks-images-provider c4cfadeeaf5f: Pushed devel: digest: sha256:281ea635078560864698461cb4abe89b57172ca6865a684af0e4b2cf816188d7 size: 740 The push refers to a repository [localhost:33238/kubevirt/registry-disk-v1alpha] 661cce8d8e52: Preparing 41e0baba3077: Preparing 25edbec0eaea: Preparing 661cce8d8e52: Pushed 41e0baba3077: Pushed 25edbec0eaea: Pushed devel: digest: sha256:d199ed910bfef3798dcde2be16dbff1f4565a65ea5563954189f53ccf995809b size: 948 The push refers to a repository [localhost:33238/kubevirt/cirros-registry-disk-demo] fdc94b968141: Preparing 661cce8d8e52: Preparing 41e0baba3077: Preparing 25edbec0eaea: Preparing 25edbec0eaea: Mounted from kubevirt/registry-disk-v1alpha 661cce8d8e52: Mounted from kubevirt/registry-disk-v1alpha 41e0baba3077: Mounted from kubevirt/registry-disk-v1alpha fdc94b968141: Pushed devel: digest: sha256:c396cc68579764fef69cde5e6feeee0e0afc49f10a249787b69f27bd4c8c3ea1 size: 1160 The push refers to a repository [localhost:33238/kubevirt/fedora-cloud-registry-disk-demo] 7b61f60559fb: Preparing 661cce8d8e52: Preparing 41e0baba3077: Preparing 25edbec0eaea: Preparing 661cce8d8e52: Mounted from kubevirt/cirros-registry-disk-demo 41e0baba3077: Mounted from kubevirt/cirros-registry-disk-demo 25edbec0eaea: Mounted from kubevirt/cirros-registry-disk-demo 7b61f60559fb: Pushed devel: digest: sha256:69596c3541167be5bbc98cf71438bc41db3bfeab3e8563b51965156974297ea4 size: 1161 The push refers to a repository [localhost:33238/kubevirt/alpine-registry-disk-demo] e8692a1785d2: Preparing 661cce8d8e52: Preparing 41e0baba3077: Preparing 25edbec0eaea: Preparing 25edbec0eaea: Mounted from kubevirt/fedora-cloud-registry-disk-demo 41e0baba3077: Mounted from kubevirt/fedora-cloud-registry-disk-demo 661cce8d8e52: Mounted from kubevirt/fedora-cloud-registry-disk-demo e8692a1785d2: Pushed devel: digest: 
sha256:3315c881270d83497e14319de2b1f3f4da6c22870e4538257299d5dbc2ac2911 size: 1160 The push refers to a repository [localhost:33238/kubevirt/subresource-access-test] c37cc7ebbbd8: Preparing 25cb73590a9d: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/vm-killer 25cb73590a9d: Pushed c37cc7ebbbd8: Pushed devel: digest: sha256:e6d0cf3175774c1c4eeded8c27d192487ef70e2f5baff1521e529ac5b3890b73 size: 948 The push refers to a repository [localhost:33238/kubevirt/winrmcli] f8083e002d0b: Preparing 53c709abc882: Preparing 9ca98a0f492b: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/subresource-access-test f8083e002d0b: Pushed 9ca98a0f492b: Pushed 53c709abc882: Pushed devel: digest: sha256:613f2394c989f4dc508911acfc5f152f239d338144b132829eb82a48de710ed7 size: 1165 The push refers to a repository [localhost:33238/kubevirt/example-hook-sidecar] 374b872bd53f: Preparing 39bae602f753: Preparing 374b872bd53f: Pushed 39bae602f753: Pushed devel: digest: sha256:56fa455ac69553fe8eda585a8d024da05cbca133625150d9784951512ee18148 size: 740 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/tests ++ 
APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.11.0-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.11.0-release1 ++ job_prefix=kubevirt-functional-tests-k8s-1.11.0-release1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-169-gd2c5ab3 ++ KUBEVIRT_VERSION=v0.7.0-169-gd2c5ab3 + source cluster/k8s-1.11.0/provider.sh ++ set -e ++ image=k8s-1.11.0@sha256:6c1caf5559eb02a144bf606de37eb0194c06ace4d77ad4561459f3bde876151c ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider 
images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.11.0.sh ++ source hack/config-provider-k8s-1.11.0.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl +++ docker_prefix=localhost:33238/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... 
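The cleanup trace that follows is one pattern repeated many times: `clean.sh` builds `namespaces=(default ${namespace})` and, for each namespace, deletes every resource type carrying the `kubevirt.io` label. A minimal sketch of that nested loop, echoing the commands instead of running kubectl (the resource list here is abbreviated from the trace):

```shell
# Hedged sketch of clean.sh's deletion loop; echoes commands rather than
# invoking kubectl, and lists only a subset of the resource types seen below.
namespaces=(default kube-system)
resources=(apiservices deployment rs services secrets pv pvc ds pods)
for ns in "${namespaces[@]}"; do
  for r in "${resources[@]}"; do
    echo "kubectl -n $ns delete $r -l kubevirt.io"
  done
done
```

Label-based deletion is what makes every one of the following commands safe to run on a clean cluster: each prints "No resources found" instead of erroring when nothing carries the label.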
+ cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers
+ grep foregroundDeleteVirtualMachine
+ read p
error: the server doesn't have a resource type "vmis"
+ _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0
No resources found
+ _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0
No resources found
+ _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0
No resources found
+ namespaces=(default ${namespace})
+ for i in '${namespaces[@]}'
+ _kubectl -n default delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n default delete deployment -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete deployment -l kubevirt.io
No resources found
+ _kubectl -n default delete rs -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete rs -l kubevirt.io
No resources found
+ _kubectl -n default delete services -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete services -l kubevirt.io
No resources found
+ _kubectl -n default delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io
No resources found
+ _kubectl -n default delete secrets -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete secrets -l kubevirt.io
No resources found
+ _kubectl -n default delete pv -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete pv -l kubevirt.io
No resources found
+ _kubectl -n default delete pvc -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete pvc -l kubevirt.io
No resources found
+ _kubectl -n default delete ds -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete ds -l kubevirt.io
No resources found
+ _kubectl -n default delete customresourcedefinitions -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete customresourcedefinitions -l kubevirt.io
No resources found
+ _kubectl -n default delete pods -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete pods -l kubevirt.io
No resources found
+ _kubectl -n default delete clusterrolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete clusterrolebinding -l kubevirt.io
No resources found
+ _kubectl -n default delete rolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete rolebinding -l kubevirt.io
No resources found
+ _kubectl -n default delete roles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete roles -l kubevirt.io
No resources found
+ _kubectl -n default delete clusterroles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete clusterroles -l kubevirt.io
No resources found
+ _kubectl -n default delete serviceaccounts -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete serviceaccounts -l kubevirt.io
No resources found
++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io
++ wc -l
++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ cluster/k8s-1.11.0/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io
No resources found.
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found
+ '[' 0 -gt 0 ']'
+ for i in '${namespaces[@]}'
+ _kubectl -n kube-system delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete deployment -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete deployment -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete rs -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete rs -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete services -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete services -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete secrets -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete secrets -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pv -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete pv -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pvc -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete pvc -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete ds -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete ds -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pods -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete pods -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete rolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete rolebinding -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete roles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete roles -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete clusterroles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterroles -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete serviceaccounts -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io
No resources found
++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io
++ wc -l
++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ cluster/k8s-1.11.0/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io
No resources found.
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found
+ '[' 0 -gt 0 ']'
+ sleep 2
+ echo Done
Done
./cluster/deploy.sh
+ source hack/common.sh
++++ dirname 'hack/common.sh[0]'
+++ cd hack/../
+++ pwd
++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt
++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out
++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/vendor
++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/cmd
++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/tests
++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/apidocs
++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/manifests
++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests
++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/client-python
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ KUBEVIRT_NUM_NODES=2
++ '[' -z kubevirt-functional-tests-k8s-1.11.0-release ']'
++ provider_prefix=kubevirt-functional-tests-k8s-1.11.0-release1
++ job_prefix=kubevirt-functional-tests-k8s-1.11.0-release1
+++ kubevirt_version
+++ '[' -n '' ']'
+++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/.git ']'
++++ git describe --always --tags
+++ echo v0.7.0-169-gd2c5ab3
++ KUBEVIRT_VERSION=v0.7.0-169-gd2c5ab3
+ source cluster/k8s-1.11.0/provider.sh
++ set -e
++ image=k8s-1.11.0@sha256:6c1caf5559eb02a144bf606de37eb0194c06ace4d77ad4561459f3bde876151c
++ source cluster/ephemeral-provider-common.sh
+++ set -e
+++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a'
+ source hack/config.sh
++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh
+++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar'
+++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar'
+++ docker_prefix=kubevirt
+++ docker_tag=latest
+++ master_ip=192.168.200.2
+++ network_provider=flannel
+++ namespace=kube-system
++ test -f hack/config-provider-k8s-1.11.0.sh
++ source hack/config-provider-k8s-1.11.0.sh
+++ master_ip=127.0.0.1
+++ docker_tag=devel
+++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig
+++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl
+++ docker_prefix=localhost:33238/kubevirt
+++ manifest_docker_prefix=registry:5000/kubevirt
++ test -f hack/config-local.sh
++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace
+ echo 'Deploying ...'
Deploying ...
+ [[ -z k8s-1.11.0-release ]]
+ [[ k8s-1.11.0-release =~ .*-dev ]]
+ [[ k8s-1.11.0-release =~ .*-release ]]
+ for manifest in '${MANIFESTS_OUT_DIR}/release/*'
+ [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]]
+ continue
+ for manifest in '${MANIFESTS_OUT_DIR}/release/*'
+ [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]]
+ _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml
clusterrole.rbac.authorization.k8s.io/kubevirt.io:admin created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:edit created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:view created
serviceaccount/kubevirt-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver-auth-delegator created
rolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created
role.rbac.authorization.k8s.io/kubevirt-apiserver created
clusterrole.rbac.authorization.k8s.io/kubevirt-apiserver created
clusterrole.rbac.authorization.k8s.io/kubevirt-controller created
serviceaccount/kubevirt-controller created
serviceaccount/kubevirt-privileged created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller-cluster-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-privileged-cluster-admin created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:default created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt.io:default created
service/virt-api created
deployment.extensions/virt-api created
deployment.extensions/virt-controller created
daemonset.extensions/virt-handler created
customresourcedefinition.apiextensions.k8s.io/virtualmachineinstances.kubevirt.io created
customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancereplicasets.kubevirt.io created
customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancepresets.kubevirt.io created
customresourcedefinition.apiextensions.k8s.io/virtualmachines.kubevirt.io created
+ _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R
persistentvolumeclaim/disk-alpine created
persistentvolume/host-path-disk-alpine created
persistentvolumeclaim/disk-custom created
persistentvolume/host-path-disk-custom created
daemonset.extensions/disks-images-provider created
serviceaccount/kubevirt-testing created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-testing-cluster-admin created
+ [[ k8s-1.11.0 =~ os-* ]]
+ echo Done
Done
+ namespaces=(kube-system default)
+ [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]]
+ timeout=300
+ sample=30
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n 'virt-api-bcc6b587d-5tm27 0/1 ContainerCreating 0 3s
virt-api-bcc6b587d-6thbp 0/1 ContainerCreating 0 3s
virt-controller-67dcdd8464-8wfdk 0/1 ContainerCreating 0 3s
virt-controller-67dcdd8464-bkbrt 0/1 ContainerCreating 0 3s
virt-handler-fsclq 0/1 ContainerCreating 0 3s
virt-handler-t8hq9 0/1 ContainerCreating 0 3s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ cluster/kubectl.sh get pods -n kube-system --no-headers
+ grep -v Running
disks-images-provider-p9txx 0/1 Pending 0 1s
disks-images-provider-rt8kh 0/1 Pending 0 2s
virt-api-bcc6b587d-5tm27 0/1 ContainerCreating 0 5s
virt-api-bcc6b587d-6thbp 0/1 ContainerCreating 0 5s
virt-controller-67dcdd8464-8wfdk 0/1 ContainerCreating 0 5s
virt-controller-67dcdd8464-bkbrt 0/1 ContainerCreating 0 5s
virt-handler-fsclq 0/1 ContainerCreating 0 5s
virt-handler-t8hq9 0/1 ContainerCreating 0 5s
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n kube-system --no-headers
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n false ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ grep false
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
false
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
+ '[' -n '' ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-b42q8 1/1 Running 0 15m
coredns-78fcdf6894-dgl2g 1/1 Running 0 15m
disks-images-provider-p9txx 1/1 Running 0 1m
disks-images-provider-rt8kh 1/1 Running 0 1m
etcd-node01 1/1 Running 0 14m
kube-apiserver-node01 1/1 Running 0 14m
kube-controller-manager-node01 1/1 Running 0 14m
kube-flannel-ds-4g9st 1/1 Running 0 14m
kube-flannel-ds-vrfkf 1/1 Running 0 15m
kube-proxy-l8j4l 1/1 Running 0 14m
kube-proxy-xd96f 1/1 Running 0 15m
kube-scheduler-node01 1/1 Running 0 14m
virt-api-bcc6b587d-5tm27 1/1 Running 0 1m
virt-api-bcc6b587d-6thbp 1/1 Running 0 1m
virt-controller-67dcdd8464-8wfdk 1/1 Running 0 1m
virt-controller-67dcdd8464-bkbrt 1/1 Running 0 1m
virt-handler-fsclq 1/1 Running 0 1m
virt-handler-t8hq9 1/1 Running 0 1m
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n default --no-headers
++ cluster/kubectl.sh get pods -n default --no-headers
++ grep -v Running
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
+ '[' -n '' ']'
+ kubectl get pods -n default
+ cluster/kubectl.sh get pods -n default
NAME READY STATUS RESTARTS AGE
local-volume-provisioner-6tnwx 1/1 Running 0 14m
local-volume-provisioner-n6cxf 1/1 Running 0 15m
+ kubectl version
+ cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/junit.xml'
+ [[ k8s-1.11.0-release =~ windows.* ]]
+ FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/junit.xml'
+ make functest
hack/dockerized "hack/build-func-tests.sh"
sha256:dcf2b21fa2ed11dcf9dbba21b1cca0ee3fad521a0e9aee61c06d0b0b66a4b200
go version go1.10 linux/amd64
go version go1.10 linux/amd64
Compiling tests...
compiled tests.test
hack/functests.sh
Running Suite: Tests Suite
==========================
Random Seed: 1533036060
Will run 151 of 151 specs
• [SLOW TEST:50.639 seconds]
HookSidecars
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40
  VMI definition
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58
    with SM BIOS hook sidecar
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59
      should successfully start with hook sidecar annotation
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:60
------------------------------
• [SLOW TEST:21.884 seconds]
HookSidecars
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40
  VMI definition
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58
    with SM BIOS hook sidecar
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59
      should call Collect and OnDefineDomain on the hook sidecar
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:67
------------------------------
• [SLOW TEST:20.730 seconds]
HookSidecars
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40
  VMI definition
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58
    with SM BIOS hook sidecar
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59
      should update domain XML with SM BIOS properties
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:83
------------------------------
••
------------------------------
• [SLOW TEST:18.465 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should update VirtualMachine once VMIs are up
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:195
------------------------------
••
------------------------------
• [SLOW TEST:73.000 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should recreate VirtualMachineInstance if it gets deleted
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:245
------------------------------
• [SLOW TEST:111.028 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should recreate VirtualMachineInstance if the VirtualMachineInstance's pod gets deleted
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:265
------------------------------
• [SLOW TEST:69.546 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should stop VirtualMachineInstance if running set to false
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:325
------------------------------
• [SLOW TEST:419.477 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should start and stop VirtualMachineInstance multiple times
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:333
------------------------------
• [SLOW TEST:93.923 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should not update the VirtualMachineInstance spec if Running
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:346
------------------------------
Pod name: disks-images-provider-p9txx Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-rt8kh Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-bcc6b587d-5tm27 Pod phase: Running
level=info timestamp=2018-07-31T11:39:36.441926Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-31T11:39:42.966025Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/31 11:39:44 http: TLS handshake error from 10.244.1.1:55360: EOF
level=info timestamp=2018-07-31T11:39:49.223793Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/31 11:39:54 http: TLS handshake error from 10.244.1.1:55366: EOF
level=info timestamp=2018-07-31T11:40:02.970291Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/31 11:40:04 http: TLS handshake error from 10.244.1.1:55372: EOF
level=info timestamp=2018-07-31T11:40:05.401221Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-31T11:40:06.457611Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-31T11:40:13.056227Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-31T11:40:13.250327Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-31T11:40:13.259394Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
2018/07/31 11:40:14 http: TLS handshake error from 10.244.1.1:55378: EOF
level=info timestamp=2018-07-31T11:40:19.582603Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/31 11:40:24 http: TLS handshake error from 10.244.1.1:55384: EOF
Pod name: virt-api-bcc6b587d-6thbp Pod phase: Running
2018/07/31 11:38:01 http: TLS handshake error from 10.244.0.1:47648: EOF
2018/07/31 11:38:11 http: TLS handshake error from 10.244.0.1:47708: EOF
2018/07/31 11:38:21 http: TLS handshake error from 10.244.0.1:47768: EOF
2018/07/31 11:38:31 http: TLS handshake error from 10.244.0.1:47828: EOF
2018/07/31 11:38:41 http: TLS handshake error from 10.244.0.1:47888: EOF
2018/07/31 11:38:51 http: TLS handshake error from 10.244.0.1:47948: EOF
2018/07/31 11:39:01 http: TLS handshake error from 10.244.0.1:48008: EOF
2018/07/31 11:39:11 http: TLS handshake error from 10.244.0.1:48068: EOF
2018/07/31 11:39:21 http: TLS handshake error from 10.244.0.1:48128: EOF
2018/07/31 11:39:31 http: TLS handshake error from 10.244.0.1:48188: EOF
2018/07/31 11:39:41 http: TLS handshake error from 10.244.0.1:48248: EOF
2018/07/31 11:39:51 http: TLS handshake error from 10.244.0.1:48308: EOF
2018/07/31 11:40:01 http: TLS handshake error from 10.244.0.1:48368: EOF
2018/07/31 11:40:11 http: TLS handshake error from 10.244.0.1:48428: EOF
2018/07/31 11:40:21 http: TLS handshake error from 10.244.0.1:48488: EOF
Pod name: virt-controller-67dcdd8464-8wfdk Pod phase: Running
level=info timestamp=2018-07-31T11:35:42.971649Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvminsc8b kind= uid=dba0d1bd-94b5-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:35:42.972570Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvminsc8b kind= uid=dba0d1bd-94b5-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:35:43.006058Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvminsc8b kind= uid=db9a35f7-94b5-11e8-b450-525500d15501 msg="Started processing VM"
level=info timestamp=2018-07-31T11:35:43.006167Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvminsc8b kind= uid=db9a35f7-94b5-11e8-b450-525500d15501 msg="Creating or the VirtualMachineInstance: true"
level=info timestamp=2018-07-31T11:35:43.062016Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvminsc8b kind= uid=db9a35f7-94b5-11e8-b450-525500d15501 msg="Started processing VM"
level=info timestamp=2018-07-31T11:35:43.062125Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvminsc8b kind= uid=db9a35f7-94b5-11e8-b450-525500d15501 msg="Creating or the VirtualMachineInstance: true"
level=info timestamp=2018-07-31T11:35:43.093544Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvminsc8b kind= uid=db9a35f7-94b5-11e8-b450-525500d15501 msg="Started processing VM"
level=info timestamp=2018-07-31T11:35:43.093625Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvminsc8b kind= uid=db9a35f7-94b5-11e8-b450-525500d15501 msg="Creating or the VirtualMachineInstance: true"
level=info timestamp=2018-07-31T11:35:43.103974Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminsc8b\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminsc8b"
level=info timestamp=2018-07-31T11:35:58.201542Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvminsc8b kind= uid=db9a35f7-94b5-11e8-b450-525500d15501 msg="Started processing VM"
level=info timestamp=2018-07-31T11:35:58.201838Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvminsc8b kind= uid=db9a35f7-94b5-11e8-b450-525500d15501 msg="Creating or the VirtualMachineInstance: true"
level=info timestamp=2018-07-31T11:36:00.193816Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvminsc8b kind= uid=db9a35f7-94b5-11e8-b450-525500d15501 msg="Started processing VM"
level=info timestamp=2018-07-31T11:36:00.194388Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvminsc8b kind= uid=db9a35f7-94b5-11e8-b450-525500d15501 msg="Creating or the VirtualMachineInstance: true"
level=info timestamp=2018-07-31T11:36:00.234959Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvminsc8b kind= uid=db9a35f7-94b5-11e8-b450-525500d15501 msg="Started processing VM"
level=info timestamp=2018-07-31T11:36:00.235176Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvminsc8b kind= uid=db9a35f7-94b5-11e8-b450-525500d15501 msg="Creating or the VirtualMachineInstance: true"
Pod name: virt-controller-67dcdd8464-bkbrt Pod phase: Running
level=info timestamp=2018-07-31T11:19:35.373508Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
Pod name: virt-handler-fsclq Pod phase: Running
level=info timestamp=2018-07-31T11:19:44.085351Z pos=virt-handler.go:87 component=virt-handler hostname=node01
level=info timestamp=2018-07-31T11:19:44.176443Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller."
level=info timestamp=2018-07-31T11:19:44.177441Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains"
level=info timestamp=2018-07-31T11:19:44.355928Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller"
level=info timestamp=2018-07-31T11:19:44.377035Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started"
level=info timestamp=2018-07-31T11:19:44.400736Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started"
Pod name: virt-handler-t8hq9 Pod phase: Running
level=error timestamp=2018-07-31T11:36:33.692546Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmi4bmcd kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance failed."
level=info timestamp=2018-07-31T11:36:33.692623Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmi4bmcd"
level=info timestamp=2018-07-31T11:36:38.841543Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmi4bmcd kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-31T11:36:38.841885Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4bmcd, existing: false\n"
level=info timestamp=2018-07-31T11:36:38.841961Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-07-31T11:36:38.842086Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi4bmcd kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-31T11:36:38.842612Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi4bmcd kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:36:38.843381Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4bmcd, existing: false\n"
level=info timestamp=2018-07-31T11:36:38.843478Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-07-31T11:36:38.843595Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi4bmcd kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-31T11:36:38.843758Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi4bmcd kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:36:54.173943Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4bmcd, existing: false\n"
level=info timestamp=2018-07-31T11:36:54.174965Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-07-31T11:36:54.175306Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi4bmcd kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-31T11:36:54.175644Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi4bmcd kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvminsc8b-8sxp9
Pod phase: Running
level=info timestamp=2018-07-31T11:35:59.646610Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:35:59.660877Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 9783a8f4-bfe1-5ee7-93c0-9a964f1c34ac"
level=info timestamp=2018-07-31T11:35:59.662421Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:35:59.662938Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:36:00.397040Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:36:00.460948Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:36:00.463115Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:36:00.477990Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:36:00.511217Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvminsc8b kind= uid=dba0d1bd-94b5-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:36:00.518679Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvminsc8b kind= uid=dba0d1bd-94b5-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:36:00.521318Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:36:00.531984Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:36:00.618359Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvminsc8b kind= uid=dba0d1bd-94b5-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:36:00.659199Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvminsc8b kind= uid=dba0d1bd-94b5-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:36:00.669918Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 9783a8f4-bfe1-5ee7-93c0-9a964f1c34ac: 164"
2018/07/31 07:40:31 read closing down: EOF
• Failure [287.297 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
should survive guest shutdown, multiple times [It]
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:387
Timed out after 240.000s.
No new VirtualMachineInstance instance showed up
Expected
  : false
to be true
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:429
------------------------------
STEP: Creating new VMI, not running
STEP: Starting the VirtualMachineInstance
STEP: VMI has the running condition
STEP: Getting the running VirtualMachineInstance
STEP: Obtaining the serial console
STEP: Guest shutdown
STEP: waiting for the controller to replace the shut-down vmi with a new instance
VM testvmi58dr7 was scheduled to start
• [SLOW TEST:19.574 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
Using virtctl interface
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435
should start a VirtualMachineInstance once
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:436
------------------------------
VM testvmi522vx was scheduled to stop
• [SLOW TEST:64.396 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
Using virtctl interface
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435
should stop a VirtualMachineInstance once
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:467
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.014 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
should succeed to start a vmi [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1384
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.012 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
should succeed to stop a running vmi [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1384
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.017 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
with winrm connection [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150
should have correct UUID
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1384
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.012 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
with winrm connection [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150
should have pod IP
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1384
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.010 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
with kubectl command [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
should succeed to start a vmi
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1384
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.011 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
with kubectl command [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
should succeed to stop a vmi
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1384
------------------------------
2018/07/31 07:42:42 read closing down: EOF
2018/07/31 07:43:30 read closing down: EOF
Pod name: disks-images-provider-p9txx
Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-rt8kh
Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-bcc6b587d-5tm27
Pod phase: Running
level=info timestamp=2018-07-31T11:42:43.552251Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/31 11:42:44 http: TLS handshake error from 10.244.1.1:55474: EOF
level=info timestamp=2018-07-31T11:42:50.346986Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/31 11:42:54 http: TLS handshake error from 10.244.1.1:55480: EOF
level=info timestamp=2018-07-31T11:42:59.239029Z pos=subresource.go:75 component=virt-api msg="Websocket connection upgraded"
level=info timestamp=2018-07-31T11:43:03.412911Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/31 11:43:04 http: TLS handshake error from 10.244.1.1:55490: EOF
level=info timestamp=2018-07-31T11:43:05.819235Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-31T11:43:06.507297Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-31T11:43:13.279281Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-31T11:43:13.349196Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-31T11:43:13.659432Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/31 11:43:14 http: TLS handshake error from 10.244.1.1:55496: EOF
level=info timestamp=2018-07-31T11:43:20.513692Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/31 11:43:24 http: TLS handshake error from 10.244.1.1:55502: EOF
Pod name: virt-api-bcc6b587d-6thbp
Pod phase: Running
2018/07/31 11:41:01 http: TLS handshake error from 10.244.0.1:48730: EOF
2018/07/31 11:41:11 http: TLS handshake error from 10.244.0.1:48790: EOF
2018/07/31 11:41:21 http: TLS handshake error from 10.244.0.1:48850: EOF
2018/07/31 11:41:31 http: TLS handshake error from 10.244.0.1:48910: EOF
2018/07/31 11:41:41 http: TLS handshake error from 10.244.0.1:48970: EOF
2018/07/31 11:41:51 http: TLS handshake error from 10.244.0.1:49030: EOF
2018/07/31 11:42:01 http: TLS handshake error from 10.244.0.1:49090: EOF
2018/07/31 11:42:11 http: TLS handshake error from 10.244.0.1:49150: EOF
2018/07/31 11:42:21 http: TLS handshake error from 10.244.0.1:49214: EOF
2018/07/31 11:42:31 http: TLS handshake error from 10.244.0.1:49274: EOF
2018/07/31 11:42:41 http: TLS handshake error from 10.244.0.1:49334: EOF
2018/07/31 11:42:51 http: TLS handshake error from 10.244.0.1:49394: EOF
2018/07/31 11:43:01 http: TLS handshake error from 10.244.0.1:49458: EOF
2018/07/31 11:43:11 http: TLS handshake error from 10.244.0.1:49518: EOF
2018/07/31 11:43:21 http: TLS handshake error from 10.244.0.1:49578: EOF
Pod name: virt-controller-67dcdd8464-8wfdk
Pod phase: Running
level=info timestamp=2018-07-31T11:41:53.534019Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi522vx kind= uid=929d111e-94b6-11e8-b450-525500d15501 msg="Started processing VM"
level=info timestamp=2018-07-31T11:41:53.534563Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi522vx kind= uid=929d111e-94b6-11e8-b450-525500d15501 msg="Creating or the VirtualMachineInstance: false"
level=info timestamp=2018-07-31T11:41:53.578364Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi522vx kind= uid=929d111e-94b6-11e8-b450-525500d15501 msg="Started processing VM"
level=info timestamp=2018-07-31T11:41:53.579174Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi522vx kind= uid=929d111e-94b6-11e8-b450-525500d15501 msg="Creating or the VirtualMachineInstance: false"
level=info timestamp=2018-07-31T11:41:53.591741Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi522vx kind= uid=929d111e-94b6-11e8-b450-525500d15501 msg="Started processing VM"
level=info timestamp=2018-07-31T11:41:53.592061Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi522vx kind= uid=929d111e-94b6-11e8-b450-525500d15501 msg="Creating or the VirtualMachineInstance: false"
level=info timestamp=2018-07-31T11:41:53.592218Z pos=vm.go:262 component=virt-controller service=http msg="vmi is nil"
level=info timestamp=2018-07-31T11:41:53.612800Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi522vx kind= uid=929d111e-94b6-11e8-b450-525500d15501 msg="Started processing VM"
level=info timestamp=2018-07-31T11:41:53.613048Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi522vx kind= uid=929d111e-94b6-11e8-b450-525500d15501 msg="Creating or the VirtualMachineInstance: false"
level=info timestamp=2018-07-31T11:41:53.613130Z pos=vm.go:262 component=virt-controller service=http msg="vmi is nil"
level=info timestamp=2018-07-31T11:41:54.105458Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8llgw kind= uid=b8d0dbab-94b6-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:41:54.105763Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8llgw kind= uid=b8d0dbab-94b6-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:42:41.581664Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:42:41.583072Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:42:41.855765Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmisjg2f\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmisjg2f"
Pod name: virt-controller-67dcdd8464-bkbrt
Pod phase: Running
level=info timestamp=2018-07-31T11:19:35.373508Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
Pod name: virt-handler-fsclq
Pod phase: Running
level=info timestamp=2018-07-31T11:19:44.085351Z pos=virt-handler.go:87 component=virt-handler hostname=node01
level=info timestamp=2018-07-31T11:19:44.176443Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller."
level=info timestamp=2018-07-31T11:19:44.177441Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains"
level=info timestamp=2018-07-31T11:19:44.355928Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller"
level=info timestamp=2018-07-31T11:19:44.377035Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started"
level=info timestamp=2018-07-31T11:19:44.400736Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started"
Pod name: virt-handler-t8hq9
Pod phase: Running
level=info timestamp=2018-07-31T11:42:58.877792Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmisjg2f kind=Domain uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Domain is in state Running reason Unknown"
level=info timestamp=2018-07-31T11:42:58.922729Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-31T11:42:58.924630Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:42:58.924718Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmisjg2f, existing: true\n"
level=info timestamp=2018-07-31T11:42:58.924740Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n"
level=info timestamp=2018-07-31T11:42:58.924766Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:42:58.924789Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:42:58.924871Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="No update processing required"
level=info timestamp=2018-07-31T11:42:59.047508Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:42:59.047784Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmisjg2f, existing: true\n"
level=info timestamp=2018-07-31T11:42:59.047904Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n"
level=info timestamp=2018-07-31T11:42:59.047989Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:42:59.048038Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:42:59.048337Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-31T11:42:59.071523Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvmi8llgw-ff85d
Pod phase: Running
level=info timestamp=2018-07-31T11:42:11.389412Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:42:11.405429Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:42:11.999693Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:42:12.057993Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:42:12.064156Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:42:12.080220Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmi8llgw kind= uid=b8d0dbab-94b6-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:42:12.084964Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi8llgw kind= uid=b8d0dbab-94b6-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:42:12.095218Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:42:12.123466Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:42:12.127927Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:42:12.149377Z pos=converter.go:535 component=virt-launcher msg="The network interface type of default was changed to e1000 due to unsupported interface type by qemu slirp network"
level=info timestamp=2018-07-31T11:42:12.150056Z pos=converter.go:751 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n"
level=info timestamp=2018-07-31T11:42:12.150135Z pos=converter.go:752 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local"
level=info timestamp=2018-07-31T11:42:12.161396Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi8llgw kind= uid=b8d0dbab-94b6-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:42:12.406698Z pos=monitor.go:222 component=virt-launcher msg="Found PID for ddc63a01-e0ee-455a-a76b-f6e7b632a362: 156"
Pod name: virt-launcher-testvmisjg2f-z4sj5
Pod phase: Running
level=info timestamp=2018-07-31T11:42:58.688693Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 2bbf33ca-9ace-45ac-a7d7-076b65990799"
level=info timestamp=2018-07-31T11:42:58.689225Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:42:58.841458Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:42:58.873300Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:42:58.878741Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:42:58.897228Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:42:58.917056Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:42:58.919467Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:42:58.920627Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:42:58.924289Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:42:59.051796Z pos=converter.go:535 component=virt-launcher msg="The network interface type of default was changed to e1000 due to unsupported interface type by qemu slirp network"
level=info timestamp=2018-07-31T11:42:59.052672Z pos=converter.go:751 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n"
level=info timestamp=2018-07-31T11:42:59.052760Z pos=converter.go:752 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local"
level=info timestamp=2018-07-31T11:42:59.070703Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:42:59.702312Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 2bbf33ca-9ace-45ac-a7d7-076b65990799: 159"
• Failure [96.528 seconds]
Slirp
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:39
should be able to
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
VirtualMachineInstance with slirp interface [It]
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
Expected error:
  : {
      Err: {
          s: "command terminated with exit code 126",
      },
      Code: 126,
  }
  command terminated with exit code 126
not to have occurred
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:88
------------------------------
level=info timestamp=2018-07-31T11:41:55.478758Z pos=utils.go:256 component=tests namespace=kubevirt-test-default name=testvmi8llgw kind=VirtualMachineInstance uid=b8d0dbab-94b6-11e8-b450-525500d15501 msg="Created virtual machine pod virt-launcher-testvmi8llgw-ff85d"
level=info timestamp=2018-07-31T11:42:10.850404Z pos=utils.go:256 component=tests namespace=kubevirt-test-default name=testvmi8llgw kind=VirtualMachineInstance uid=b8d0dbab-94b6-11e8-b450-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmi8llgw-ff85d"
level=info timestamp=2018-07-31T11:42:12.861410Z pos=utils.go:256 component=tests namespace=kubevirt-test-default name=testvmi8llgw kind=VirtualMachineInstance uid=b8d0dbab-94b6-11e8-b450-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-07-31T11:42:12.885852Z pos=utils.go:256 component=tests namespace=kubevirt-test-default name=testvmi8llgw kind=VirtualMachineInstance uid=b8d0dbab-94b6-11e8-b450-525500d15501 msg="VirtualMachineInstance started."
level=info timestamp=2018-07-31T11:42:42.889625Z pos=utils.go:256 component=tests namespace=kubevirt-test-default name=testvmisjg2f kind=VirtualMachineInstance uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Created virtual machine pod virt-launcher-testvmisjg2f-z4sj5"
level=info timestamp=2018-07-31T11:42:58.202882Z pos=utils.go:256 component=tests namespace=kubevirt-test-default name=testvmisjg2f kind=VirtualMachineInstance uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmisjg2f-z4sj5"
level=info timestamp=2018-07-31T11:42:59.729216Z pos=utils.go:256 component=tests namespace=kubevirt-test-default name=testvmisjg2f kind=VirtualMachineInstance uid=d51d7494-94b6-11e8-b450-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-07-31T11:42:59.772441Z pos=utils.go:256 component=tests namespace=kubevirt-test-default name=testvmisjg2f kind=VirtualMachineInstance uid=d51d7494-94b6-11e8-b450-525500d15501 msg="VirtualMachineInstance started."
STEP: have containerPort in the pod manifest
STEP: start the virtual machine with slirp interface
level=info timestamp=2018-07-31T11:43:31.177978Z pos=vmi_slirp_interface_test.go:87 component=tests msg=
Pod name: disks-images-provider-p9txx
Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-rt8kh
Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-bcc6b587d-5tm27
Pod phase: Running
level=info timestamp=2018-07-31T11:42:43.552251Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/31 11:42:44 http: TLS handshake error from 10.244.1.1:55474: EOF
level=info timestamp=2018-07-31T11:42:50.346986Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/31 11:42:54 http: TLS handshake error from 10.244.1.1:55480: EOF
level=info timestamp=2018-07-31T11:42:59.239029Z pos=subresource.go:75 component=virt-api msg="Websocket connection upgraded"
level=info timestamp=2018-07-31T11:43:03.412911Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/31 11:43:04 http: TLS handshake error from 10.244.1.1:55490: EOF
level=info timestamp=2018-07-31T11:43:05.819235Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-31T11:43:06.507297Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-31T11:43:13.279281Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-31T11:43:13.349196Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-07-31T11:43:13.659432Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/31 11:43:14 http: TLS handshake error from 10.244.1.1:55496: EOF
level=info timestamp=2018-07-31T11:43:20.513692Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/07/31 11:43:24 http: TLS handshake error from 10.244.1.1:55502: EOF
Pod name: virt-api-bcc6b587d-6thbp
Pod phase: Running
2018/07/31 11:41:01 http: TLS handshake error from 10.244.0.1:48730: EOF
2018/07/31 11:41:11 http: TLS handshake error from 10.244.0.1:48790: EOF
2018/07/31 11:41:21 http: TLS handshake error from 10.244.0.1:48850: EOF
2018/07/31 11:41:31 http: TLS handshake error from 10.244.0.1:48910: EOF
2018/07/31 11:41:41 http: TLS handshake error from 10.244.0.1:48970: EOF
2018/07/31 11:41:51 http: TLS handshake error from 10.244.0.1:49030: EOF
2018/07/31 11:42:01 http: TLS handshake error from 10.244.0.1:49090: EOF
2018/07/31 11:42:11 http: TLS handshake error from 10.244.0.1:49150: EOF
2018/07/31 11:42:21 http: TLS handshake error from 10.244.0.1:49214: EOF
2018/07/31 11:42:31 http: TLS handshake error from 10.244.0.1:49274: EOF
2018/07/31 11:42:41 http: TLS handshake error from 10.244.0.1:49334: EOF
2018/07/31 11:42:51 http: TLS handshake error from 10.244.0.1:49394: EOF
2018/07/31 11:43:01 http: TLS handshake error from 10.244.0.1:49458: EOF
2018/07/31 11:43:11 http: TLS handshake error from 10.244.0.1:49518: EOF
2018/07/31 11:43:21 http: TLS handshake error from 10.244.0.1:49578: EOF
Pod name: virt-controller-67dcdd8464-8wfdk
Pod phase: Running
level=info timestamp=2018-07-31T11:41:53.534019Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi522vx kind= uid=929d111e-94b6-11e8-b450-525500d15501 msg="Started processing VM"
level=info timestamp=2018-07-31T11:41:53.534563Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi522vx kind= uid=929d111e-94b6-11e8-b450-525500d15501 msg="Creating or the VirtualMachineInstance: false"
level=info timestamp=2018-07-31T11:41:53.578364Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi522vx kind= uid=929d111e-94b6-11e8-b450-525500d15501 msg="Started processing VM"
level=info timestamp=2018-07-31T11:41:53.579174Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi522vx kind= uid=929d111e-94b6-11e8-b450-525500d15501 msg="Creating or the VirtualMachineInstance: false"
level=info timestamp=2018-07-31T11:41:53.591741Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi522vx kind= uid=929d111e-94b6-11e8-b450-525500d15501 msg="Started processing VM"
level=info timestamp=2018-07-31T11:41:53.592061Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi522vx kind= uid=929d111e-94b6-11e8-b450-525500d15501 msg="Creating or the VirtualMachineInstance: false"
level=info timestamp=2018-07-31T11:41:53.592218Z pos=vm.go:262 component=virt-controller service=http msg="vmi is nil"
level=info timestamp=2018-07-31T11:41:53.612800Z pos=vm.go:135 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi522vx kind= uid=929d111e-94b6-11e8-b450-525500d15501 msg="Started processing VM"
level=info timestamp=2018-07-31T11:41:53.613048Z pos=vm.go:186 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi522vx kind= uid=929d111e-94b6-11e8-b450-525500d15501 msg="Creating or the VirtualMachineInstance: false"
level=info timestamp=2018-07-31T11:41:53.613130Z pos=vm.go:262 component=virt-controller service=http msg="vmi is nil"
level=info timestamp=2018-07-31T11:41:54.105458Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8llgw kind= uid=b8d0dbab-94b6-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:41:54.105763Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8llgw kind= uid=b8d0dbab-94b6-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:42:41.581664Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:42:41.583072Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:42:41.855765Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmisjg2f\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmisjg2f"
Pod name: virt-controller-67dcdd8464-bkbrt
Pod phase: Running
level=info timestamp=2018-07-31T11:19:35.373508Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
Pod name: virt-handler-fsclq
Pod phase: Running
level=info timestamp=2018-07-31T11:19:44.085351Z pos=virt-handler.go:87 component=virt-handler hostname=node01
level=info timestamp=2018-07-31T11:19:44.176443Z pos=vm.go:210 component=virt-handler msg="Starting virt-handler controller."
level=info timestamp=2018-07-31T11:19:44.177441Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains"
level=info timestamp=2018-07-31T11:19:44.355928Z pos=device_controller.go:133 component=virt-handler msg="Starting device plugin controller"
level=info timestamp=2018-07-31T11:19:44.377035Z pos=device_controller.go:127 component=virt-handler msg="tun device plugin started"
level=info timestamp=2018-07-31T11:19:44.400736Z pos=device_controller.go:127 component=virt-handler msg="kvm device plugin started"
Pod name: virt-handler-t8hq9
Pod phase: Running
level=info timestamp=2018-07-31T11:42:58.877792Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmisjg2f kind=Domain uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Domain is in state Running reason Unknown"
level=info timestamp=2018-07-31T11:42:58.922729Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-31T11:42:58.924630Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:42:58.924718Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmisjg2f, existing: true\n"
level=info timestamp=2018-07-31T11:42:58.924740Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n"
level=info timestamp=2018-07-31T11:42:58.924766Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:42:58.924789Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:42:58.924871Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="No update processing required"
level=info timestamp=2018-07-31T11:42:59.047508Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:42:59.047784Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmisjg2f, existing: true\n"
level=info timestamp=2018-07-31T11:42:59.047904Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n"
level=info timestamp=2018-07-31T11:42:59.047989Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:42:59.048038Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:42:59.048337Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-31T11:42:59.071523Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvmi8llgw-ff85d Pod phase: Running level=info timestamp=2018-07-31T11:42:11.389412Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-31T11:42:11.405429Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-31T11:42:11.999693Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-31T11:42:12.057993Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-31T11:42:12.064156Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-31T11:42:12.080220Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmi8llgw kind= uid=b8d0dbab-94b6-11e8-b450-525500d15501 msg="Domain started." level=info timestamp=2018-07-31T11:42:12.084964Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi8llgw kind= uid=b8d0dbab-94b6-11e8-b450-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-31T11:42:12.095218Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-31T11:42:12.123466Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-31T11:42:12.127927Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-31T11:42:12.149377Z pos=converter.go:535 component=virt-launcher msg="The network interface type of default was changed to e1000 due to unsupported interface type by qemu slirp network" level=info timestamp=2018-07-31T11:42:12.150056Z pos=converter.go:751 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n" level=info timestamp=2018-07-31T11:42:12.150135Z pos=converter.go:752 component=virt-launcher msg="Found search domains in /etc/resolv.conf: 
kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local" level=info timestamp=2018-07-31T11:42:12.161396Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi8llgw kind= uid=b8d0dbab-94b6-11e8-b450-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-31T11:42:12.406698Z pos=monitor.go:222 component=virt-launcher msg="Found PID for ddc63a01-e0ee-455a-a76b-f6e7b632a362: 156" Pod name: virt-launcher-testvmisjg2f-z4sj5 Pod phase: Running level=info timestamp=2018-07-31T11:42:58.688693Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 2bbf33ca-9ace-45ac-a7d7-076b65990799" level=info timestamp=2018-07-31T11:42:58.689225Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-31T11:42:58.841458Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-31T11:42:58.873300Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-31T11:42:58.878741Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-31T11:42:58.897228Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-31T11:42:58.917056Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-31T11:42:58.919467Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:42:58.920627Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:42:58.924289Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:42:59.051796Z pos=converter.go:535 component=virt-launcher msg="The network interface type of default was changed to e1000 due to unsupported interface type by qemu slirp network"
level=info timestamp=2018-07-31T11:42:59.052672Z pos=converter.go:751 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n"
level=info timestamp=2018-07-31T11:42:59.052760Z pos=converter.go:752 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local"
level=info timestamp=2018-07-31T11:42:59.070703Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmisjg2f kind= uid=d51d7494-94b6-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:42:59.702312Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 2bbf33ca-9ace-45ac-a7d7-076b65990799: 159"

• Failure [0.884 seconds]
Slirp
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:39
  should be able to
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    VirtualMachineInstance with slirp interface with custom MAC address [It]
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

    Expected error:
        : {
            Err: {
                s: "command terminated with exit code 126",
            },
            Code: 126,
        }
        command terminated with exit code 126
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:88
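The Slirp failure above reduces to "command terminated with exit code 126". By POSIX shell convention, 126 means the command was found but could not be executed (for example, the file lacks execute permission), as opposed to 127, which means the command was not found. A minimal standalone reproduction of that exit-code convention, independent of the test cluster:

```shell
# Create a file without execute permission and try to run it as a command.
tmpfile=$(mktemp)
echo 'echo hello' > "$tmpfile"
chmod -x "$tmpfile"

sh -c "$tmpfile" 2>/dev/null     # found, but not executable
echo "exit code: $?"             # 126

sh -c /no/such/command 2>/dev/null  # not found at all
echo "exit code: $?"                # 127

rm -f "$tmpfile"
```

In a Kubernetes context, `kubectl exec` propagates this code when the command it was asked to run inside the container cannot be executed.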
------------------------------
STEP: have containerPort in the pod manifest
STEP: start the virtual machine with slirp interface
level=info timestamp=2018-07-31T11:43:32.061570Z pos=vmi_slirp_interface_test.go:87 component=tests msg=
2018/07/31 07:44:17 read closing down: EOF
Service cluster-ip-vmi successfully exposed for virtualmachineinstance testvmidp9lk
• [SLOW TEST:49.505 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on a VM
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61
    Expose ClusterIP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:68
      Should expose a Cluster IP service on a VMI and connect to it
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:71
------------------------------
Service cluster-ip-target-vmi successfully exposed for virtualmachineinstance testvmidp9lk
•Service node-port-vmi successfully exposed for virtualmachineinstance testvmidp9lk
------------------------------
• [SLOW TEST:8.357 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on a VM
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61
    Expose NodePort service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:124
      Should expose a NodePort service on a VMI and connect to it
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:129
------------------------------
2018/07/31 07:45:13 read closing down: EOF
Service cluster-ip-udp-vmi successfully exposed for virtualmachineinstance testvmigxz5x
• [SLOW TEST:48.384 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose UDP service on a VMI
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:166
    Expose ClusterIP UDP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:173
      Should expose a ClusterIP service on a VMI and connect to it
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:177
------------------------------
Service node-port-udp-vmi successfully exposed for virtualmachineinstance testvmigxz5x
• [SLOW TEST:9.463 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose UDP service on a VMI
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:166
    Expose NodePort UDP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:205
      Should expose a NodePort service on a VMI and connect to it
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:210
------------------------------
2018/07/31 07:46:13 read closing down: EOF
2018/07/31 07:46:24 read closing down: EOF
Service cluster-ip-vmirs successfully exposed for vmirs replicasetlt7ds
• [SLOW TEST:59.961 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on a VMI replica set
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:253
    Expose ClusterIP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:286
      Should create a ClusterIP service on VMRS and connect to it
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:290
------------------------------
Service cluster-ip-vm successfully exposed for virtualmachine testvmi6gmgz
VM testvmi6gmgz was scheduled to start
2018/07/31 07:47:13 read closing down: EOF
• [SLOW TEST:49.438 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on an VM
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:318
    Expose ClusterIP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:362
      Connect to ClusterIP services that was set when VM was offline
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:363
------------------------------
•2018/07/31 07:48:45 read closing down: EOF
2018/07/31 07:48:55 read closing down: EOF
2018/07/31 07:49:06 read closing down: EOF
2018/07/31 07:49:17 read closing down: EOF
2018/07/31 07:49:18 read closing down: EOF
2018/07/31 07:49:20 read closing down: EOF
2018/07/31 07:49:20 read closing down: EOF
------------------------------
2018/07/31 07:49:20 read closing down: EOF
• [SLOW TEST:120.584 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  should be able to reach
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    the Inbound VirtualMachineInstance
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
2018/07/31 07:49:22 read closing down: EOF
2018/07/31 07:49:23 read closing down: EOF
•2018/07/31 07:49:23 read closing down: EOF
2018/07/31 07:49:25 read closing down: EOF
•2018/07/31 07:49:25 read closing down: EOF
2018/07/31 07:49:25 read closing down: EOF
2018/07/31 07:49:27 read closing down: EOF
•2018/07/31 07:49:28 read closing down: EOF
2018/07/31 07:49:28 read closing down: EOF
•
------------------------------
• [SLOW TEST:5.254 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  should be reachable via the propagated IP from a Pod
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    on a different node from Pod
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
••
------------------------------
• [SLOW TEST:5.351 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  with a service matching the vmi exposed
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:272
    should be able to reach the vmi based on labels specified on the vmi
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:292
------------------------------
• [SLOW TEST:5.575 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  with a service matching the vmi exposed
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:272
    should fail to reach the vmi if an invalid servicename is used
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:303
------------------------------
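The service-reachability specs above are verified by small netcat probe pods; their shell traces appear later in this log (`+++ nc 10.244.1.39 1500 -i 1 -w 1`, `+ x='Hello World!'`, `+ echo succeeded`, and so on). A sketch of that check logic, reconstructed from the traces — the function name `check_echo` and the parameterized probe command are illustrative, not the suite's actual script:

```shell
# check_echo PROBE_CMD...: run the probe command, compare the first line of
# its output to the expected banner, then report succeeded/failed via both
# stdout and the exit status (matching the netcat pods' Succeeded/Failed phases).
check_echo() {
    x=$("$@" | head -n 1)
    echo "$x"
    if [ "$x" = "Hello World!" ]; then
        echo succeeded
        return 0
    else
        echo failed
        return 1
    fi
}

# In the test pods the probe is something like: nc 10.244.1.39 1500 -i 1 -w 1
# Here a local stand-in is substituted so the sketch runs without a cluster:
check_echo echo 'Hello World!'
```

A probe against a nonexistent service (as in the `wrongservice.kubevirt-test-default` trace) produces empty output, so the comparison fails and the pod exits 1, which Kubernetes records as phase Failed.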
•2018/07/31 07:50:30 read closing down: EOF 2018/07/31 07:50:31 read closing down: EOF ------------------------------ • [SLOW TEST:34.547 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom interface model /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:368 should expose the right device type to the guest /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:369 ------------------------------ 2018/07/31 07:50:31 read closing down: EOF •2018/07/31 07:50:32 read closing down: EOF 2018/07/31 07:51:06 read closing down: EOF 2018/07/31 07:51:07 read closing down: EOF ------------------------------ • [SLOW TEST:35.115 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:402 should configure custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:403 ------------------------------ 2018/07/31 07:54:25 read closing down: EOF Pod name: disks-images-provider-p9txx Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-rt8kh Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-5tm27 Pod phase: Running 2018/07/31 11:52:08 http: TLS handshake error from 10.244.1.1:36764: EOF 2018/07/31 11:52:18 http: TLS handshake error from 10.244.1.1:36766: EOF 2018/07/31 11:52:28 http: TLS handshake error from 10.244.1.1:36772: EOF 2018/07/31 11:52:38 http: TLS handshake error from 10.244.1.1:36778: EOF 2018/07/31 11:52:48 http: TLS handshake error from 10.244.1.1:36784: EOF 2018/07/31 11:52:58 http: TLS handshake error from 10.244.1.1:36790: EOF 2018/07/31 11:53:08 http: TLS handshake error from 10.244.1.1:36796: EOF 2018/07/31 11:53:18 http: TLS handshake error from 10.244.1.1:36802: EOF 2018/07/31 11:53:28 http: TLS handshake error from 10.244.1.1:36808: EOF 
2018/07/31 11:53:38 http: TLS handshake error from 10.244.1.1:36814: EOF 2018/07/31 11:53:48 http: TLS handshake error from 10.244.1.1:36820: EOF 2018/07/31 11:53:58 http: TLS handshake error from 10.244.1.1:36826: EOF 2018/07/31 11:54:08 http: TLS handshake error from 10.244.1.1:36832: EOF 2018/07/31 11:54:18 http: TLS handshake error from 10.244.1.1:36838: EOF Pod name: virt-api-bcc6b587d-6thbp Pod phase: Running 2018/07/31 11:52:01 http: TLS handshake error from 10.244.0.1:52882: EOF 2018/07/31 11:52:11 http: TLS handshake error from 10.244.0.1:52942: EOF 2018/07/31 11:52:21 http: TLS handshake error from 10.244.0.1:53002: EOF 2018/07/31 11:52:31 http: TLS handshake error from 10.244.0.1:53062: EOF 2018/07/31 11:52:41 http: TLS handshake error from 10.244.0.1:53122: EOF 2018/07/31 11:52:51 http: TLS handshake error from 10.244.0.1:53182: EOF 2018/07/31 11:53:01 http: TLS handshake error from 10.244.0.1:53242: EOF 2018/07/31 11:53:11 http: TLS handshake error from 10.244.0.1:53302: EOF 2018/07/31 11:53:21 http: TLS handshake error from 10.244.0.1:53362: EOF 2018/07/31 11:53:31 http: TLS handshake error from 10.244.0.1:53422: EOF 2018/07/31 11:53:41 http: TLS handshake error from 10.244.0.1:53482: EOF 2018/07/31 11:53:51 http: TLS handshake error from 10.244.0.1:53542: EOF 2018/07/31 11:54:01 http: TLS handshake error from 10.244.0.1:53602: EOF 2018/07/31 11:54:11 http: TLS handshake error from 10.244.0.1:53662: EOF 2018/07/31 11:54:21 http: TLS handshake error from 10.244.0.1:53722: EOF Pod name: virt-controller-67dcdd8464-8wfdk Pod phase: Running level=info timestamp=2018-07-31T11:47:19.566690Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4586l kind= uid=7ad045f6-94b7-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-31T11:47:19.568082Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4586l kind= 
uid=7ad045f6-94b7-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-31T11:47:19.617778Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi72gzd kind= uid=7ad5ef4b-94b7-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-31T11:47:19.618128Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi72gzd kind= uid=7ad5ef4b-94b7-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-31T11:47:19.635383Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmitvkpx\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmitvkpx" level=info timestamp=2018-07-31T11:47:20.463716Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmidd47v\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmidd47v" level=info timestamp=2018-07-31T11:49:55.480922Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-31T11:49:55.482653Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-31T11:49:55.603504Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on 
virtualmachineinstances.kubevirt.io \"testvmign52t\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmign52t" level=info timestamp=2018-07-31T11:50:31.220061Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-31T11:50:31.220566Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-31T11:51:06.410206Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixs668 kind= uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-31T11:51:06.414117Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixs668 kind= uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-31T11:51:06.637360Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixs668\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixs668" level=info timestamp=2018-07-31T11:51:06.654677Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixs668\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixs668" Pod name: virt-controller-67dcdd8464-bkbrt 
Pod phase: Running level=info timestamp=2018-07-31T11:52:03.227958Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-fsclq Pod phase: Running level=info timestamp=2018-07-31T11:50:48.992111Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=Domain uid=ed111071-94b7-11e8-b450-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-31T11:50:49.017437Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-31T11:50:49.018943Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmikqjnv, existing: true\n" level=info timestamp=2018-07-31T11:50:49.019180Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n" level=info timestamp=2018-07-31T11:50:49.019223Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-31T11:50:49.019247Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-31T11:50:49.019310Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="No update processing required" level=info timestamp=2018-07-31T11:50:49.018294Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-31T11:50:49.048464Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-31T11:50:49.048556Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmikqjnv, existing: true\n" level=info timestamp=2018-07-31T11:50:49.048585Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n" level=info timestamp=2018-07-31T11:50:49.048612Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n" level=info timestamp=2018-07-31T11:50:49.048629Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n" level=info timestamp=2018-07-31T11:50:49.048702Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-31T11:50:49.054159Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Synchronization loop succeeded." Pod name: virt-handler-t8hq9 Pod phase: Running level=info timestamp=2018-07-31T11:52:05.262907Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmixs668 kind=VirtualMachineInstance uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-31T11:52:05.263077Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmixs668, existing: true\n" level=info timestamp=2018-07-31T11:52:05.263122Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Failed\n" level=info timestamp=2018-07-31T11:52:05.263180Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-31T11:52:05.263358Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmixs668 kind= uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="No update processing required" level=info timestamp=2018-07-31T11:52:05.263490Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmixs668 kind= uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-31T11:52:05.260195Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Failed\n" level=info timestamp=2018-07-31T11:52:05.265608Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-31T11:52:05.265822Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmitvkpx kind= uid=7acbd9a3-94b7-11e8-b450-525500d15501 msg="No update processing required" level=info timestamp=2018-07-31T11:52:05.266121Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmitvkpx kind= uid=7acbd9a3-94b7-11e8-b450-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-31T11:52:05.267650Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmign52t, existing: true\n" level=info timestamp=2018-07-31T11:52:05.267766Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Failed\n" level=info timestamp=2018-07-31T11:52:05.267831Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-07-31T11:52:05.267946Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="No update processing required" level=info timestamp=2018-07-31T11:52:05.268198Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Synchronization loop succeeded." Pod name: netcatc7gnv Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.39 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcatgstpw Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.39 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcatk9hjt Pod phase: Succeeded ++ head -n 1 +++ nc my-subdomain.myvmi.kubevirt-test-default 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcatlprg9 Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.39 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcatsb6tf Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.39 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' 
']' + echo succeeded + exit 0 Pod name: netcatscptn Pod phase: Failed ++ head -n 1 +++ nc wrongservice.kubevirt-test-default 1500 -i 1 -w 1 Ncat: Could not resolve hostname "wrongservice.kubevirt-test-default": Name or service not known. QUITTING. + x= + echo '' + '[' '' = 'Hello World!' ']' failed + echo failed + exit 1 Pod name: netcatw5chf Pod phase: Succeeded ++ head -n 1 +++ nc myservice.kubevirt-test-default 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: virt-launcher-testvmi4586l-hvgjb Pod phase: Failed level=info timestamp=2018-07-31T11:47:47.721442Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-07-31T11:47:48.764775Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-31T11:47:48.776318Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-31T11:47:48.793854Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 77fef69a-00e7-4069-9268-639c85a76022" level=info timestamp=2018-07-31T11:47:48.794310Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-31T11:47:49.580993Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-31T11:47:49.652361Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-31T11:47:49.653323Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmi4586l kind= uid=7ad045f6-94b7-11e8-b450-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-31T11:47:49.660425Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:49.660541Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:47:49.665374Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi4586l kind= uid=7ad045f6-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:49.682489Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:49.690281Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:49.704717Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi4586l kind= uid=7ad045f6-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:49.826672Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 77fef69a-00e7-4069-9268-639c85a76022: 176"

Pod name: virt-launcher-testvmi72gzd-svj9m Pod phase: Failed
level=info timestamp=2018-07-31T11:47:49.615097Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-31T11:47:51.280348Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:47:51.660252Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:51.687362Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID f447efba-fe55-4f2f-b26c-5811112a5680"
level=info timestamp=2018-07-31T11:47:51.687954Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:47:52.474352Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:47:52.546414Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmi72gzd kind= uid=7ad5ef4b-94b7-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:47:52.547393Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:52.585612Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi72gzd kind= uid=7ad5ef4b-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:52.595294Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:52.595528Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:47:52.663006Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:52.731209Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:52.732037Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi72gzd kind= uid=7ad5ef4b-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:52.743747Z pos=monitor.go:222 component=virt-launcher msg="Found PID for f447efba-fe55-4f2f-b26c-5811112a5680: 184"

Pod name: virt-launcher-testvmidd47v-wckn5 Pod phase: Failed
level=info timestamp=2018-07-31T11:47:46.548045Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-31T11:47:47.990581Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:47:48.017694Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:48.028790Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 1b54d7e0-a160-4fb7-bbe0-80cfda2c008c"
level=info timestamp=2018-07-31T11:47:48.029322Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:47:48.394830Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:47:48.462138Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:48.469182Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:48.469314Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:47:48.493830Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmidd47v kind= uid=7acd9f60-94b7-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:47:48.497060Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmidd47v kind= uid=7acd9f60-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:48.504027Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:48.511082Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:48.569223Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmidd47v kind= uid=7acd9f60-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:49.044284Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 1b54d7e0-a160-4fb7-bbe0-80cfda2c008c: 180"

Pod name: virt-launcher-testvmign52t-bds62 Pod phase: Failed
level=info timestamp=2018-07-31T11:50:10.713251Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-31T11:50:11.477056Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:50:11.489077Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 542a685b-e09e-425b-ac70-07d29da7b7ec"
level=info timestamp=2018-07-31T11:50:11.490040Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:50:11.495212Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:11.657973Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:50:11.675287Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:50:11.678583Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:11.681689Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:50:11.691352Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:50:11.694277Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:50:11.695367Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:50:11.698184Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:11.778002Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:50:12.498039Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 542a685b-e09e-425b-ac70-07d29da7b7ec: 160"

Pod name: virt-launcher-testvmikqjnv-vd4k6 Pod phase: Running
level=info timestamp=2018-07-31T11:50:47.763929Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-31T11:50:48.711816Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID ca4123e2-2dcc-421a-816d-a0697b02ba75"
level=info timestamp=2018-07-31T11:50:48.712339Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:50:48.713071Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:50:48.731990Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:48.970349Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:50:48.986415Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:50:48.992572Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:48.998317Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:50:49.008110Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:50:49.010367Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:50:49.015329Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:50:49.021181Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:49.053707Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:50:49.720984Z pos=monitor.go:222 component=virt-launcher msg="Found PID for ca4123e2-2dcc-421a-816d-a0697b02ba75: 160"

Pod name: virt-launcher-testvmitvkpx-vjt2m Pod phase: Failed
level=info timestamp=2018-07-31T11:47:45.200752Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-31T11:47:45.902694Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:47:45.918793Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 0419eb6e-3c84-408e-9f29-4c3085314eae"
level=info timestamp=2018-07-31T11:47:45.919151Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:45.919155Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:47:46.932668Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 0419eb6e-3c84-408e-9f29-4c3085314eae: 176"
level=info timestamp=2018-07-31T11:47:46.979209Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:47:47.098475Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmitvkpx kind= uid=7acbd9a3-94b7-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:47:47.099314Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:47.120081Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:47.120325Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:47:47.122135Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmitvkpx kind= uid=7acbd9a3-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:47.169074Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:47.179872Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:47.307318Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmitvkpx kind= uid=7acbd9a3-94b7-11e8-b450-525500d15501 msg="Synced vmi"

Pod name: virt-launcher-testvmixs668-fmdmv Pod phase: Failed
level=info timestamp=2018-07-31T11:51:12.095124Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-31T11:51:12.095536Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-31T11:51:12.101291Z pos=libvirt.go:261 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
level=info timestamp=2018-07-31T11:51:22.113962Z pos=libvirt.go:276 component=virt-launcher msg="Connected to libvirt daemon"
level=info timestamp=2018-07-31T11:51:22.244053Z pos=virt-launcher.go:146 component=virt-launcher msg="Watchdog file created at /var/run/kubevirt/watchdog-files/kubevirt-test-default_testvmixs668"
level=info timestamp=2018-07-31T11:51:22.246457Z pos=client.go:152 component=virt-launcher msg="Registered libvirt event notify callback"
level=info timestamp=2018-07-31T11:51:22.247185Z pos=virt-launcher.go:63 component=virt-launcher msg="Marked as ready"
level=error timestamp=2018-07-31T11:51:23.310132Z pos=common.go:126 component=virt-launcher msg="updated MAC for interface: eth0 - 0a:58:0a:72:74:a3"
level=info timestamp=2018-07-31T11:51:23.319489Z pos=converter.go:751 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n"
level=info timestamp=2018-07-31T11:51:23.319692Z pos=converter.go:752 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local"
level=info timestamp=2018-07-31T11:51:23.324075Z pos=dhcp.go:62 component=virt-launcher msg="Starting SingleClientDHCPServer"
level=info timestamp=2018-07-31T11:51:23.476041Z pos=manager.go:158 component=virt-launcher namespace=kubevirt-test-default name=testvmixs668 kind= uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Domain defined."
level=info timestamp=2018-07-31T11:51:23.477518Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"

• Failure [201.158 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom MAC address in non-conventional format
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:414
    should configure custom MAC address [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:415

    Expected error:
        : 180000000000
        expect: timer expired after 180 seconds
    not to have occurred
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1102
------------------------------
STEP: checking eth0 MAC address
level=info timestamp=2018-07-31T11:51:07.667434Z pos=utils.go:245 component=tests namespace=kubevirt-test-default name=testvmixs668 kind=VirtualMachineInstance uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Created virtual machine pod virt-launcher-testvmixs668-fmdmv"
level=info timestamp=2018-07-31T11:51:23.950344Z pos=utils.go:245 component=tests namespace=kubevirt-test-default name=testvmixs668 kind=VirtualMachineInstance uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmixs668-fmdmv"
level=info timestamp=2018-07-31T11:51:25.378053Z pos=utils.go:245 component=tests namespace=kubevirt-test-default name=testvmixs668 kind=VirtualMachineInstance uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-07-31T11:51:25.429561Z pos=utils.go:245 component=tests namespace=kubevirt-test-default name=testvmixs668 kind=VirtualMachineInstance uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="VirtualMachineInstance started."
level=info timestamp=2018-07-31T11:54:25.664197Z pos=utils.go:1291 component=tests namespace=kubevirt-test-default name=testvmixs668 kind=VirtualMachineInstance uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Login: [{2 \r\n\r\n\r\nISOLINUX 6.04 6.04-pre1 Copyright (C) 1994-2015 H. Peter Anvin et al\r\nboot: \u001b[?7h\r\n []}]"

Pod name: disks-images-provider-p9txx Pod phase: Running
copy all images to host mount directory

Pod name: disks-images-provider-rt8kh Pod phase: Running
copy all images to host mount directory

Pod name: virt-api-bcc6b587d-5tm27 Pod phase: Running
2018/07/31 11:52:18 http: TLS handshake error from 10.244.1.1:36766: EOF
2018/07/31 11:52:28 http: TLS handshake error from 10.244.1.1:36772: EOF
2018/07/31 11:52:38 http: TLS handshake error from 10.244.1.1:36778: EOF
2018/07/31 11:52:48 http: TLS handshake error from 10.244.1.1:36784: EOF
2018/07/31 11:52:58 http: TLS handshake error from 10.244.1.1:36790: EOF
2018/07/31 11:53:08 http: TLS handshake error from 10.244.1.1:36796: EOF
2018/07/31 11:53:18 http: TLS handshake error from 10.244.1.1:36802: EOF
2018/07/31 11:53:28 http: TLS handshake error from 10.244.1.1:36808: EOF
2018/07/31 11:53:38 http: TLS handshake error from 10.244.1.1:36814: EOF
2018/07/31 11:53:48 http: TLS handshake error from 10.244.1.1:36820: EOF
2018/07/31 11:53:58 http: TLS handshake error from 10.244.1.1:36826: EOF
2018/07/31 11:54:08 http: TLS handshake error from 10.244.1.1:36832: EOF
2018/07/31 11:54:18 http: TLS handshake error from 10.244.1.1:36838: EOF
2018/07/31 11:54:28 http: TLS handshake error from 10.244.1.1:36844: EOF
2018/07/31 11:54:38 http: TLS handshake error from 10.244.1.1:36850: EOF

Pod name: virt-api-bcc6b587d-6thbp Pod phase: Running
2018/07/31 11:52:21 http: TLS handshake error from 10.244.0.1:53002: EOF
2018/07/31 11:52:31 http: TLS handshake error from 10.244.0.1:53062: EOF
2018/07/31 11:52:41 http: TLS handshake error from 10.244.0.1:53122: EOF
2018/07/31 11:52:51 http: TLS handshake error from 10.244.0.1:53182: EOF
2018/07/31 11:53:01 http: TLS handshake error from 10.244.0.1:53242: EOF
2018/07/31 11:53:11 http: TLS handshake error from 10.244.0.1:53302: EOF
2018/07/31 11:53:21 http: TLS handshake error from 10.244.0.1:53362: EOF
2018/07/31 11:53:31 http: TLS handshake error from 10.244.0.1:53422: EOF
2018/07/31 11:53:41 http: TLS handshake error from 10.244.0.1:53482: EOF
2018/07/31 11:53:51 http: TLS handshake error from 10.244.0.1:53542: EOF
2018/07/31 11:54:01 http: TLS handshake error from 10.244.0.1:53602: EOF
2018/07/31 11:54:11 http: TLS handshake error from 10.244.0.1:53662: EOF
2018/07/31 11:54:21 http: TLS handshake error from 10.244.0.1:53722: EOF
2018/07/31 11:54:31 http: TLS handshake error from 10.244.0.1:53784: EOF
2018/07/31 11:54:41 http: TLS handshake error from 10.244.0.1:53844: EOF

Pod name: virt-controller-67dcdd8464-8wfdk Pod phase: Running
level=info timestamp=2018-07-31T11:47:19.618128Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi72gzd kind= uid=7ad5ef4b-94b7-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:47:19.635383Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmitvkpx\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmitvkpx"
level=info timestamp=2018-07-31T11:47:20.463716Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmidd47v\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmidd47v"
level=info timestamp=2018-07-31T11:49:55.480922Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:49:55.482653Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:49:55.603504Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmign52t\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmign52t"
level=info timestamp=2018-07-31T11:50:31.220061Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:50:31.220566Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:51:06.410206Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixs668 kind= uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:51:06.414117Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixs668 kind= uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:51:06.637360Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixs668\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixs668"
level=info timestamp=2018-07-31T11:51:06.654677Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixs668\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixs668"
level=info timestamp=2018-07-31T11:54:27.557167Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:54:27.560533Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:54:27.840231Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi96nzf\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi96nzf"

Pod name: virt-controller-67dcdd8464-bkbrt Pod phase: Running
level=info timestamp=2018-07-31T11:52:03.227958Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182

Pod name: virt-handler-fsclq Pod phase: Running
level=info timestamp=2018-07-31T11:50:48.992111Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=Domain uid=ed111071-94b7-11e8-b450-525500d15501 msg="Domain is in state Running reason Unknown"
level=info timestamp=2018-07-31T11:50:49.017437Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:50:49.018943Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmikqjnv, existing: true\n"
level=info timestamp=2018-07-31T11:50:49.019180Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n"
level=info timestamp=2018-07-31T11:50:49.019223Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:50:49.019247Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:50:49.019310Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="No update processing required"
level=info timestamp=2018-07-31T11:50:49.018294Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-31T11:50:49.048464Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:50:49.048556Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmikqjnv, existing: true\n"
level=info timestamp=2018-07-31T11:50:49.048585Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n"
level=info timestamp=2018-07-31T11:50:49.048612Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:50:49.048629Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:50:49.048702Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-31T11:50:49.054159Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Synchronization loop succeeded."

Pod name: virt-handler-t8hq9 Pod phase: Running
level=info timestamp=2018-07-31T11:54:46.851717Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmi96nzf kind=Domain uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Domain is in state Running reason Unknown"
level=info timestamp=2018-07-31T11:54:46.932951Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-31T11:54:46.960336Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:54:46.960623Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi96nzf, existing: true\n"
level=info timestamp=2018-07-31T11:54:46.960700Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n"
level=info timestamp=2018-07-31T11:54:46.960777Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:54:46.960923Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:54:46.961085Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="No update processing required"
level=info timestamp=2018-07-31T11:54:47.020913Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:54:47.021087Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi96nzf, existing: true\n"
level=info timestamp=2018-07-31T11:54:47.021139Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n"
level=info timestamp=2018-07-31T11:54:47.021203Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:54:47.021342Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:54:47.021495Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-31T11:54:47.049480Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Synchronization loop succeeded."

Pod name: netcatc7gnv Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.39 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0

Pod name: netcatgstpw Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.39 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0

Pod name: netcatk9hjt Pod phase: Succeeded
++ head -n 1
+++ nc my-subdomain.myvmi.kubevirt-test-default 1500 -i 1 -w 1
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Hello World!
succeeded

Pod name: netcatlprg9 Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.39 1500 -i 1 -w 1
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Hello World!
succeeded

Pod name: netcatsb6tf Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.39 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0

Pod name: netcatscptn Pod phase: Failed
++ head -n 1
+++ nc wrongservice.kubevirt-test-default 1500 -i 1 -w 1
Ncat: Could not resolve hostname "wrongservice.kubevirt-test-default": Name or service not known. QUITTING.
+ x=
+ echo ''
+ '[' '' = 'Hello World!' ']'
failed
+ echo failed
+ exit 1

Pod name: netcatw5chf Pod phase: Succeeded
++ head -n 1
+++ nc myservice.kubevirt-test-default 1500 -i 1 -w 1
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Hello World!
succeeded

Pod name: virt-launcher-testvmi96nzf-8mtbk Pod phase: Running
level=info timestamp=2018-07-31T11:54:46.501628Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 168ecc20-0f4d-40ec-9fa7-ed3f73740f99"
level=info timestamp=2018-07-31T11:54:46.502091Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:54:46.810874Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:54:46.847211Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:54:46.852706Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:54:46.866486Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:54:46.892805Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:54:46.904379Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:54:46.929941Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:54:46.935224Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:54:47.032501Z pos=converter.go:535 component=virt-launcher msg="The network interface type of default was changed to e1000 due to unsupported interface type by qemu slirp network"
level=info timestamp=2018-07-31T11:54:47.033657Z pos=converter.go:751 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n"
level=info timestamp=2018-07-31T11:54:47.033770Z pos=converter.go:752 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local"
level=info timestamp=2018-07-31T11:54:47.048588Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:54:47.514142Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 168ecc20-0f4d-40ec-9fa7-ed3f73740f99: 161"

Pod name: virt-launcher-testvmidd47v-wckn5 Pod phase: Failed
level=info timestamp=2018-07-31T11:47:46.548045Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-31T11:47:47.990581Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:47:48.017694Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:48.028790Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 1b54d7e0-a160-4fb7-bbe0-80cfda2c008c"
level=info timestamp=2018-07-31T11:47:48.029322Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:47:48.394830Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:47:48.462138Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:48.469182Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:48.469314Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:47:48.493830Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmidd47v kind= uid=7acd9f60-94b7-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:47:48.497060Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmidd47v kind= uid=7acd9f60-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:48.504027Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:48.511082Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:48.569223Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmidd47v kind= uid=7acd9f60-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:49.044284Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 1b54d7e0-a160-4fb7-bbe0-80cfda2c008c: 180"

Pod name: virt-launcher-testvmign52t-bds62
Pod phase: Failed

level=info timestamp=2018-07-31T11:50:10.713251Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-31T11:50:11.477056Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:50:11.489077Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 542a685b-e09e-425b-ac70-07d29da7b7ec"
level=info timestamp=2018-07-31T11:50:11.490040Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:50:11.495212Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:11.657973Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:50:11.675287Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:50:11.678583Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:11.681689Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:50:11.691352Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:50:11.694277Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:50:11.695367Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:50:11.698184Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:11.778002Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:50:12.498039Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 542a685b-e09e-425b-ac70-07d29da7b7ec: 160"

Pod name: virt-launcher-testvmikqjnv-vd4k6
Pod phase: Running

level=info timestamp=2018-07-31T11:50:47.763929Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-31T11:50:48.711816Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID ca4123e2-2dcc-421a-816d-a0697b02ba75"
level=info timestamp=2018-07-31T11:50:48.712339Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:50:48.713071Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:50:48.731990Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:48.970349Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:50:48.986415Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:50:48.992572Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:48.998317Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:50:49.008110Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:50:49.010367Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:50:49.015329Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:50:49.021181Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:49.053707Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:50:49.720984Z pos=monitor.go:222 component=virt-launcher msg="Found PID for ca4123e2-2dcc-421a-816d-a0697b02ba75: 160"

Pod name: virt-launcher-testvmitvkpx-vjt2m
Pod phase: Failed

level=info timestamp=2018-07-31T11:47:45.200752Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-31T11:47:45.902694Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:47:45.918793Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 0419eb6e-3c84-408e-9f29-4c3085314eae"
level=info timestamp=2018-07-31T11:47:45.919151Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:45.919155Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:47:46.932668Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 0419eb6e-3c84-408e-9f29-4c3085314eae: 176"
level=info timestamp=2018-07-31T11:47:46.979209Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:47:47.098475Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmitvkpx kind= uid=7acbd9a3-94b7-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:47:47.099314Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:47.120081Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:47.120325Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:47:47.122135Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmitvkpx kind= uid=7acbd9a3-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:47.169074Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:47.179872Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:47.307318Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmitvkpx kind= uid=7acbd9a3-94b7-11e8-b450-525500d15501 msg="Synced vmi"

Pod name: virt-launcher-testvmixs668-fmdmv
Pod phase: Failed

level=info timestamp=2018-07-31T11:51:12.095124Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-31T11:51:12.095536Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-31T11:51:12.101291Z pos=libvirt.go:261 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
level=info timestamp=2018-07-31T11:51:22.113962Z pos=libvirt.go:276 component=virt-launcher msg="Connected to libvirt daemon"
level=info timestamp=2018-07-31T11:51:22.244053Z pos=virt-launcher.go:146 component=virt-launcher msg="Watchdog file created at /var/run/kubevirt/watchdog-files/kubevirt-test-default_testvmixs668"
level=info timestamp=2018-07-31T11:51:22.246457Z pos=client.go:152 component=virt-launcher msg="Registered libvirt event notify callback"
level=info timestamp=2018-07-31T11:51:22.247185Z pos=virt-launcher.go:63 component=virt-launcher msg="Marked as ready"
level=error timestamp=2018-07-31T11:51:23.310132Z pos=common.go:126 component=virt-launcher msg="updated MAC for interface: eth0 - 0a:58:0a:72:74:a3"
level=info timestamp=2018-07-31T11:51:23.319489Z pos=converter.go:751 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n"
level=info timestamp=2018-07-31T11:51:23.319692Z pos=converter.go:752 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local"
level=info timestamp=2018-07-31T11:51:23.324075Z pos=dhcp.go:62 component=virt-launcher msg="Starting SingleClientDHCPServer"
level=info timestamp=2018-07-31T11:51:23.476041Z pos=manager.go:158 component=virt-launcher namespace=kubevirt-test-default name=testvmixs668 kind= uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Domain defined."
level=info timestamp=2018-07-31T11:51:23.477518Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"

• Failure [22.593 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom MAC address and slirp interface
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:427
    should configure custom MAC address [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:428

    Expected error:
        <*kubecli.AsyncSubresourceError | 0xc42081a000>: {
            err: "Can't connect to websocket (503): service unavailable\n\n",
            StatusCode: 503,
        }
        Can't connect to websocket (503): service unavailable
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1102
------------------------------
STEP: checking eth0 MAC address
level=info timestamp=2018-07-31T11:54:28.962984Z pos=utils.go:245 component=tests namespace=kubevirt-test-default name=testvmi96nzf kind=VirtualMachineInstance uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Created virtual machine pod virt-launcher-testvmi96nzf-8mtbk"
level=info timestamp=2018-07-31T11:54:46.213008Z pos=utils.go:245 component=tests namespace=kubevirt-test-default name=testvmi96nzf kind=VirtualMachineInstance uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmi96nzf-8mtbk"
level=info timestamp=2018-07-31T11:54:47.901185Z pos=utils.go:245 component=tests namespace=kubevirt-test-default name=testvmi96nzf kind=VirtualMachineInstance uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-07-31T11:54:47.930379Z pos=utils.go:245 component=tests namespace=kubevirt-test-default name=testvmi96nzf kind=VirtualMachineInstance uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="VirtualMachineInstance started."
Pod name: disks-images-provider-p9txx
Pod phase: Running
copy all images to host mount directory

Pod name: disks-images-provider-rt8kh
Pod phase: Running
copy all images to host mount directory

Pod name: virt-api-bcc6b587d-5tm27
Pod phase: Running

2018/07/31 11:52:38 http: TLS handshake error from 10.244.1.1:36778: EOF
2018/07/31 11:52:48 http: TLS handshake error from 10.244.1.1:36784: EOF
2018/07/31 11:52:58 http: TLS handshake error from 10.244.1.1:36790: EOF
2018/07/31 11:53:08 http: TLS handshake error from 10.244.1.1:36796: EOF
2018/07/31 11:53:18 http: TLS handshake error from 10.244.1.1:36802: EOF
2018/07/31 11:53:28 http: TLS handshake error from 10.244.1.1:36808: EOF
2018/07/31 11:53:38 http: TLS handshake error from 10.244.1.1:36814: EOF
2018/07/31 11:53:48 http: TLS handshake error from 10.244.1.1:36820: EOF
2018/07/31 11:53:58 http: TLS handshake error from 10.244.1.1:36826: EOF
2018/07/31 11:54:08 http: TLS handshake error from 10.244.1.1:36832: EOF
2018/07/31 11:54:18 http: TLS handshake error from 10.244.1.1:36838: EOF
2018/07/31 11:54:28 http: TLS handshake error from 10.244.1.1:36844: EOF
2018/07/31 11:54:38 http: TLS handshake error from 10.244.1.1:36850: EOF
2018/07/31 11:54:48 http: TLS handshake error from 10.244.1.1:36856: EOF
2018/07/31 11:54:58 http: TLS handshake error from 10.244.1.1:36862: EOF

Pod name: virt-api-bcc6b587d-6thbp
Pod phase: Running

2018/07/31 11:52:41 http: TLS handshake error from 10.244.0.1:53122: EOF
2018/07/31 11:52:51 http: TLS handshake error from 10.244.0.1:53182: EOF
2018/07/31 11:53:01 http: TLS handshake error from 10.244.0.1:53242: EOF
2018/07/31 11:53:11 http: TLS handshake error from 10.244.0.1:53302: EOF
2018/07/31 11:53:21 http: TLS handshake error from 10.244.0.1:53362: EOF
2018/07/31 11:53:31 http: TLS handshake error from 10.244.0.1:53422: EOF
2018/07/31 11:53:41 http: TLS handshake error from 10.244.0.1:53482: EOF
2018/07/31 11:53:51 http: TLS handshake error from 10.244.0.1:53542: EOF
2018/07/31 11:54:01 http: TLS handshake error from 10.244.0.1:53602: EOF
2018/07/31 11:54:11 http: TLS handshake error from 10.244.0.1:53662: EOF
2018/07/31 11:54:21 http: TLS handshake error from 10.244.0.1:53722: EOF
2018/07/31 11:54:31 http: TLS handshake error from 10.244.0.1:53784: EOF
2018/07/31 11:54:41 http: TLS handshake error from 10.244.0.1:53844: EOF
2018/07/31 11:54:51 http: TLS handshake error from 10.244.0.1:53904: EOF
2018/07/31 11:55:01 http: TLS handshake error from 10.244.0.1:53964: EOF

Pod name: virt-controller-67dcdd8464-8wfdk
Pod phase: Running

level=info timestamp=2018-07-31T11:47:20.463716Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmidd47v\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmidd47v"
level=info timestamp=2018-07-31T11:49:55.480922Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:49:55.482653Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:49:55.603504Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmign52t\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmign52t"
level=info timestamp=2018-07-31T11:50:31.220061Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:50:31.220566Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:51:06.410206Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixs668 kind= uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:51:06.414117Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixs668 kind= uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:51:06.637360Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixs668\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixs668"
level=info timestamp=2018-07-31T11:51:06.654677Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixs668\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixs668"
level=info timestamp=2018-07-31T11:54:27.557167Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:54:27.560533Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:54:27.840231Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi96nzf\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi96nzf"
level=info timestamp=2018-07-31T11:54:50.126624Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4zvnb kind= uid=875f766e-94b8-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:54:50.128600Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4zvnb kind= uid=875f766e-94b8-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"

Pod name: virt-controller-67dcdd8464-bkbrt
Pod phase: Running

level=info timestamp=2018-07-31T11:52:03.227958Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182

Pod name: virt-handler-fsclq
Pod phase: Running

level=info timestamp=2018-07-31T11:50:48.992111Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=Domain uid=ed111071-94b7-11e8-b450-525500d15501 msg="Domain is in state Running reason Unknown"
level=info timestamp=2018-07-31T11:50:49.017437Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:50:49.018943Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmikqjnv, existing: true\n"
level=info timestamp=2018-07-31T11:50:49.019180Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n"
level=info timestamp=2018-07-31T11:50:49.019223Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:50:49.019247Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:50:49.019310Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="No update processing required"
level=info timestamp=2018-07-31T11:50:49.018294Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-31T11:50:49.048464Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:50:49.048556Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmikqjnv, existing: true\n"
level=info timestamp=2018-07-31T11:50:49.048585Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n"
level=info timestamp=2018-07-31T11:50:49.048612Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:50:49.048629Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:50:49.048702Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-31T11:50:49.054159Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Synchronization loop succeeded."

Pod name: virt-handler-t8hq9
Pod phase: Running

level=info timestamp=2018-07-31T11:55:06.865659Z pos=vm.go:756 component=virt-handler namespace=kubevirt-test-default name=testvmi4zvnb kind=Domain uid=875f766e-94b8-11e8-b450-525500d15501 msg="Domain is in state Running reason Unknown"
level=info timestamp=2018-07-31T11:55:07.012735Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi4zvnb kind= uid=875f766e-94b8-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:55:07.012863Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4zvnb, existing: true\n"
level=info timestamp=2018-07-31T11:55:07.012887Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Scheduled\n"
level=info timestamp=2018-07-31T11:55:07.012914Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:55:07.012938Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:55:07.012991Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmi4zvnb kind= uid=875f766e-94b8-11e8-b450-525500d15501 msg="No update processing required"
level=info timestamp=2018-07-31T11:55:07.016957Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-31T11:55:07.038461Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi4zvnb kind= uid=875f766e-94b8-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:55:07.038646Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmi4zvnb, existing: true\n"
level=info timestamp=2018-07-31T11:55:07.038696Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n"
level=info timestamp=2018-07-31T11:55:07.038816Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:55:07.038907Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:55:07.039138Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmi4zvnb kind= uid=875f766e-94b8-11e8-b450-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-31T11:55:07.054677Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmi4zvnb kind= uid=875f766e-94b8-11e8-b450-525500d15501 msg="Synchronization loop succeeded."

Pod name: netcatc7gnv
Pod phase: Succeeded

++ head -n 1
+++ nc 10.244.1.39 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0

Pod name: netcatgstpw
Pod phase: Succeeded

++ head -n 1
+++ nc 10.244.1.39 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0

Pod name: netcatk9hjt
Pod phase: Succeeded

++ head -n 1
+++ nc my-subdomain.myvmi.kubevirt-test-default 1500 -i 1 -w 1
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Hello World!
succeeded

Pod name: netcatlprg9
Pod phase: Succeeded

++ head -n 1
+++ nc 10.244.1.39 1500 -i 1 -w 1
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Hello World!
succeeded

Pod name: netcatsb6tf
Pod phase: Succeeded

++ head -n 1
+++ nc 10.244.1.39 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0

Pod name: netcatscptn
Pod phase: Failed

++ head -n 1
+++ nc wrongservice.kubevirt-test-default 1500 -i 1 -w 1
Ncat: Could not resolve hostname "wrongservice.kubevirt-test-default": Name or service not known. QUITTING.
+ x=
+ echo ''
+ '[' '' = 'Hello World!' ']'
failed
+ echo failed
+ exit 1

Pod name: netcatw5chf
Pod phase: Succeeded

++ head -n 1
+++ nc myservice.kubevirt-test-default 1500 -i 1 -w 1
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Hello World!
succeeded

Pod name: virt-launcher-testvmi4586l-hvgjb
Pod phase: Failed

level=info timestamp=2018-07-31T11:47:47.721442Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-31T11:47:48.764775Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:47:48.776318Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:48.793854Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 77fef69a-00e7-4069-9268-639c85a76022"
level=info timestamp=2018-07-31T11:47:48.794310Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:47:49.580993Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:47:49.652361Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:49.653323Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmi4586l kind= uid=7ad045f6-94b7-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:47:49.660425Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:49.660541Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:47:49.665374Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi4586l kind= uid=7ad045f6-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:49.682489Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:49.690281Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:49.704717Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi4586l kind= uid=7ad045f6-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:49.826672Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 77fef69a-00e7-4069-9268-639c85a76022: 176"

Pod name: virt-launcher-testvmi4zvnb-bgtnk
Pod phase: Running

level=info timestamp=2018-07-31T11:55:05.798213Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-31T11:55:06.585635Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:55:06.596576Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:55:06.615098Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID b329c23b-fd19-4328-bf24-7d19ad26b65c"
level=info timestamp=2018-07-31T11:55:06.616698Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:55:06.794508Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:55:06.856401Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:55:06.871328Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:55:06.905665Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:55:07.000442Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmi4zvnb kind= uid=875f766e-94b8-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:55:07.005212Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi4zvnb kind= uid=875f766e-94b8-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:55:07.008686Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:55:07.017517Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:55:07.050554Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi4zvnb kind= uid=875f766e-94b8-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:55:07.634488Z pos=monitor.go:222 component=virt-launcher msg="Found PID for b329c23b-fd19-4328-bf24-7d19ad26b65c: 160"

Pod name: virt-launcher-testvmi72gzd-svj9m
Pod phase: Failed

level=info timestamp=2018-07-31T11:47:49.615097Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-31T11:47:51.280348Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:47:51.660252Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:51.687362Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID f447efba-fe55-4f2f-b26c-5811112a5680"
level=info timestamp=2018-07-31T11:47:51.687954Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:47:52.474352Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:47:52.546414Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmi72gzd kind= uid=7ad5ef4b-94b7-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:47:52.547393Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:52.585612Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi72gzd kind= uid=7ad5ef4b-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:52.595294Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:52.595528Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:47:52.663006Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:52.731209Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:52.732037Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi72gzd kind= uid=7ad5ef4b-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:52.743747Z pos=monitor.go:222 component=virt-launcher msg="Found PID for f447efba-fe55-4f2f-b26c-5811112a5680: 184"

Pod name: virt-launcher-testvmi96nzf-8mtbk
Pod phase: Running

level=info timestamp=2018-07-31T11:54:46.501628Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 168ecc20-0f4d-40ec-9fa7-ed3f73740f99"
level=info timestamp=2018-07-31T11:54:46.502091Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:54:46.810874Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:54:46.847211Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:54:46.852706Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:54:46.866486Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:54:46.892805Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:54:46.904379Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:54:46.929941Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:54:46.935224Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:54:47.032501Z pos=converter.go:535 component=virt-launcher msg="The network interface type of default was changed to e1000 due to unsupported interface type by qemu slirp network"
level=info timestamp=2018-07-31T11:54:47.033657Z pos=converter.go:751 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n"
level=info timestamp=2018-07-31T11:54:47.033770Z pos=converter.go:752 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local"
level=info timestamp=2018-07-31T11:54:47.048588Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:54:47.514142Z pos=monitor.go:222 component=virt-launcher msg="Found PID for
168ecc20-0f4d-40ec-9fa7-ed3f73740f99: 161" Pod name: virt-launcher-testvmidd47v-wckn5 Pod phase: Failed level=info timestamp=2018-07-31T11:47:46.548045Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-07-31T11:47:47.990581Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-31T11:47:48.017694Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-31T11:47:48.028790Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 1b54d7e0-a160-4fb7-bbe0-80cfda2c008c" level=info timestamp=2018-07-31T11:47:48.029322Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-31T11:47:48.394830Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-31T11:47:48.462138Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-31T11:47:48.469182Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-31T11:47:48.469314Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-31T11:47:48.493830Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmidd47v kind= uid=7acd9f60-94b7-11e8-b450-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-31T11:47:48.497060Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmidd47v kind= uid=7acd9f60-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:48.504027Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:48.511082Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:48.569223Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmidd47v kind= uid=7acd9f60-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:49.044284Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 1b54d7e0-a160-4fb7-bbe0-80cfda2c008c: 180"
Pod name: virt-launcher-testvmign52t-bds62
Pod phase: Failed
level=info timestamp=2018-07-31T11:50:10.713251Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-31T11:50:11.477056Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:50:11.489077Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 542a685b-e09e-425b-ac70-07d29da7b7ec"
level=info timestamp=2018-07-31T11:50:11.490040Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:50:11.495212Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:11.657973Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:50:11.675287Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:50:11.678583Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:11.681689Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:50:11.691352Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:50:11.694277Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:50:11.695367Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:50:11.698184Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:11.778002Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:50:12.498039Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 542a685b-e09e-425b-ac70-07d29da7b7ec: 160"
Pod name: virt-launcher-testvmikqjnv-vd4k6
Pod phase: Running
level=info timestamp=2018-07-31T11:50:47.763929Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-31T11:50:48.711816Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID ca4123e2-2dcc-421a-816d-a0697b02ba75"
level=info timestamp=2018-07-31T11:50:48.712339Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:50:48.713071Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:50:48.731990Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:48.970349Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:50:48.986415Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:50:48.992572Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:48.998317Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:50:49.008110Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:50:49.010367Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:50:49.015329Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:50:49.021181Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:50:49.053707Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:50:49.720984Z pos=monitor.go:222 component=virt-launcher msg="Found PID for ca4123e2-2dcc-421a-816d-a0697b02ba75: 160"
Pod name: virt-launcher-testvmitvkpx-vjt2m
Pod phase: Failed
level=info timestamp=2018-07-31T11:47:45.200752Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-31T11:47:45.902694Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:47:45.918793Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID 0419eb6e-3c84-408e-9f29-4c3085314eae"
level=info timestamp=2018-07-31T11:47:45.919151Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:45.919155Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:47:46.932668Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 0419eb6e-3c84-408e-9f29-4c3085314eae: 176"
level=info timestamp=2018-07-31T11:47:46.979209Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:47:47.098475Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmitvkpx kind= uid=7acbd9a3-94b7-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:47:47.099314Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:47.120081Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:47.120325Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:47:47.122135Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmitvkpx kind= uid=7acbd9a3-94b7-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:47:47.169074Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:47:47.179872Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:47:47.307318Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmitvkpx kind= uid=7acbd9a3-94b7-11e8-b450-525500d15501 msg="Synced vmi"
Pod name: virt-launcher-testvmixs668-fmdmv
Pod phase: Failed
level=info timestamp=2018-07-31T11:51:12.095124Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-31T11:51:12.095536Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-31T11:51:12.101291Z pos=libvirt.go:261 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
level=info timestamp=2018-07-31T11:51:22.113962Z pos=libvirt.go:276 component=virt-launcher msg="Connected to libvirt daemon"
level=info timestamp=2018-07-31T11:51:22.244053Z pos=virt-launcher.go:146 component=virt-launcher msg="Watchdog file created at /var/run/kubevirt/watchdog-files/kubevirt-test-default_testvmixs668"
level=info timestamp=2018-07-31T11:51:22.246457Z pos=client.go:152 component=virt-launcher msg="Registered libvirt event notify callback"
level=info timestamp=2018-07-31T11:51:22.247185Z pos=virt-launcher.go:63 component=virt-launcher msg="Marked as ready"
level=error timestamp=2018-07-31T11:51:23.310132Z pos=common.go:126 component=virt-launcher msg="updated MAC for interface: eth0 - 0a:58:0a:72:74:a3"
level=info timestamp=2018-07-31T11:51:23.319489Z pos=converter.go:751 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n"
level=info timestamp=2018-07-31T11:51:23.319692Z pos=converter.go:752 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local"
level=info timestamp=2018-07-31T11:51:23.324075Z pos=dhcp.go:62 component=virt-launcher msg="Starting SingleClientDHCPServer"
level=info timestamp=2018-07-31T11:51:23.476041Z pos=manager.go:158 component=virt-launcher namespace=kubevirt-test-default name=testvmixs668 kind= uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Domain defined."
level=info timestamp=2018-07-31T11:51:23.477518Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"

• Failure [20.241 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with disabled automatic attachment of interfaces
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:440
    should not configure any external interfaces [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:441

    Expected error:
        <*kubecli.AsyncSubresourceError | 0xc420840fe0>: {
            err: "Can't connect to websocket (503): service unavailable\n\n",
            StatusCode: 503,
        }
        Can't connect to websocket (503): service unavailable
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1102
------------------------------
STEP: checking loopback is the only guest interface
level=info timestamp=2018-07-31T11:54:51.375066Z pos=utils.go:245 component=tests namespace=kubevirt-test-default name=testvmi4zvnb kind=VirtualMachineInstance uid=875f766e-94b8-11e8-b450-525500d15501 msg="Created virtual machine pod virt-launcher-testvmi4zvnb-bgtnk"
level=info timestamp=2018-07-31T11:55:06.401306Z pos=utils.go:245 component=tests namespace=kubevirt-test-default name=testvmi4zvnb kind=VirtualMachineInstance uid=875f766e-94b8-11e8-b450-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmi4zvnb-bgtnk"
level=info timestamp=2018-07-31T11:55:07.903035Z pos=utils.go:245 component=tests namespace=kubevirt-test-default name=testvmi4zvnb kind=VirtualMachineInstance uid=875f766e-94b8-11e8-b450-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-07-31T11:55:07.917934Z pos=utils.go:245 component=tests namespace=kubevirt-test-default name=testvmi4zvnb kind=VirtualMachineInstance uid=875f766e-94b8-11e8-b450-525500d15501 msg="VirtualMachineInstance started."
Pod name: disks-images-provider-p9txx
Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-rt8kh
Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-bcc6b587d-5tm27
Pod phase: Running
2018/07/31 11:53:08 http: TLS handshake error from 10.244.1.1:36796: EOF
2018/07/31 11:53:18 http: TLS handshake error from 10.244.1.1:36802: EOF
2018/07/31 11:53:28 http: TLS handshake error from 10.244.1.1:36808: EOF
2018/07/31 11:53:38 http: TLS handshake error from 10.244.1.1:36814: EOF
2018/07/31 11:53:48 http: TLS handshake error from 10.244.1.1:36820: EOF
2018/07/31 11:53:58 http: TLS handshake error from 10.244.1.1:36826: EOF
2018/07/31 11:54:08 http: TLS handshake error from 10.244.1.1:36832: EOF
2018/07/31 11:54:18 http: TLS handshake error from 10.244.1.1:36838: EOF
2018/07/31 11:54:28 http: TLS handshake error from 10.244.1.1:36844: EOF
2018/07/31 11:54:38 http: TLS handshake error from 10.244.1.1:36850: EOF
2018/07/31 11:54:48 http: TLS handshake error from 10.244.1.1:36856: EOF
2018/07/31 11:54:58 http: TLS handshake error from 10.244.1.1:36862: EOF
2018/07/31 11:55:08 http: TLS handshake error from 10.244.1.1:36868: EOF
2018/07/31 11:55:18 http: TLS handshake error from 10.244.1.1:36874: EOF
2018/07/31 11:55:28 http: TLS handshake error from 10.244.1.1:36880: EOF
Pod name: virt-api-bcc6b587d-6thbp
Pod phase: Running
2018/07/31 11:53:41 http: TLS handshake error from 10.244.0.1:53482: EOF
2018/07/31 11:53:51 http: TLS handshake error from 10.244.0.1:53542: EOF
2018/07/31 11:54:01 http: TLS handshake error from 10.244.0.1:53602: EOF
2018/07/31 11:54:11 http: TLS handshake error from 10.244.0.1:53662: EOF
2018/07/31 11:54:21 http: TLS handshake error from 10.244.0.1:53722: EOF
2018/07/31 11:54:31 http: TLS handshake error from 10.244.0.1:53784: EOF
2018/07/31 11:54:41 http: TLS handshake error from 10.244.0.1:53844: EOF
2018/07/31 11:54:51 http: TLS handshake error from 10.244.0.1:53904: EOF
2018/07/31 11:55:01 http: TLS handshake error from 10.244.0.1:53964: EOF
2018/07/31 11:55:11 http: TLS handshake error from 10.244.0.1:54024: EOF
level=error timestamp=2018-07-31T11:55:12.023127Z pos=subresource.go:85 component=virt-api msg=
2018/07/31 11:55:12 http: response.WriteHeader on hijacked connection
level=info timestamp=2018-07-31T11:55:12.031943Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2/namespaces/kubevirt-test-default/virtualmachineinstances/testvmikqjnv/console proto=HTTP/1.1 statusCode=200 contentLength=0
level=error timestamp=2018-07-31T11:55:12.023686Z pos=subresource.go:97 component=virt-api reason="read tcp 10.244.0.8:8443->10.244.0.1:48542: use of closed network connection" msg="error ecountered reading from websocket stream"
2018/07/31 11:55:21 http: TLS handshake error from 10.244.0.1:54084: EOF
Pod name: virt-controller-67dcdd8464-8wfdk
Pod phase: Running
level=info timestamp=2018-07-31T11:49:55.482653Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:49:55.603504Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmign52t\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmign52t"
level=info timestamp=2018-07-31T11:50:31.220061Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:50:31.220566Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:51:06.410206Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixs668 kind= uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:51:06.414117Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixs668 kind= uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:51:06.637360Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixs668\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixs668"
level=info timestamp=2018-07-31T11:51:06.654677Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixs668\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixs668"
level=info timestamp=2018-07-31T11:54:27.557167Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:54:27.560533Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:54:27.840231Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi96nzf\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi96nzf"
level=info timestamp=2018-07-31T11:54:50.126624Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4zvnb kind= uid=875f766e-94b8-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:54:50.128600Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4zvnb kind= uid=875f766e-94b8-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:55:12.119534Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:55:12.121605Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
Pod name: virt-controller-67dcdd8464-bkbrt
Pod phase: Running
level=info timestamp=2018-07-31T11:52:03.227958Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
Pod name: virt-handler-fsclq
Pod phase: Running
level=info timestamp=2018-07-31T11:55:24.440217Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:55:24.440310Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:55:24.440558Z pos=vm.go:353 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object."
level=info timestamp=2018-07-31T11:55:24.440685Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=error timestamp=2018-07-31T11:55:24.441585Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance failed."
level=info timestamp=2018-07-31T11:55:24.441738Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmikqjnv"
level=info timestamp=2018-07-31T11:55:29.183329Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-31T11:55:29.185631Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmikqjnv, existing: false\n"
level=info timestamp=2018-07-31T11:55:29.186369Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-07-31T11:55:29.186587Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-31T11:55:29.188758Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:55:29.191311Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmikqjnv, existing: false\n"
level=info timestamp=2018-07-31T11:55:29.193032Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-07-31T11:55:29.193257Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-31T11:55:29.193416Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
Pod name: virt-handler-t8hq9
Pod phase: Running
level=info timestamp=2018-07-31T11:55:30.287781Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="No update processing required"
level=error timestamp=2018-07-31T11:55:30.307386Z pos=vm.go:431 component=virt-handler namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmijvtvp\": the object has been modified; please apply your changes to the latest version and try again" msg="Updating the VirtualMachineInstance status failed."
level=info timestamp=2018-07-31T11:55:30.307603Z pos=vm.go:251 component=virt-handler reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmijvtvp\": the object has been modified; please apply your changes to the latest version and try again" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmijvtvp"
level=info timestamp=2018-07-31T11:55:30.307953Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmijvtvp, existing: true\n"
level=info timestamp=2018-07-31T11:55:30.308007Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n"
level=info timestamp=2018-07-31T11:55:30.308070Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:55:30.308117Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:55:30.311490Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-31T11:55:30.322132Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:55:30.322359Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmijvtvp, existing: true\n"
level=info timestamp=2018-07-31T11:55:30.322414Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n"
level=info timestamp=2018-07-31T11:55:30.322485Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:55:30.322531Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:55:30.322688Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-31T11:55:30.334492Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvmijvtvp-t5wrb
Pod phase: Running
level=info timestamp=2018-07-31T11:55:29.379956Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:55:29.384995Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:55:29.611298Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID dbccae8f-d0be-4174-a4d8-3fbf6a3c84d1"
level=info timestamp=2018-07-31T11:55:29.612124Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:55:30.163651Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:55:30.199225Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:55:30.208122Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:55:30.227835Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:55:30.228955Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:55:30.240579Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:55:30.271339Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:55:30.276925Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:55:30.321229Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:55:30.333048Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:55:30.618864Z pos=monitor.go:222 component=virt-launcher msg="Found PID for dbccae8f-d0be-4174-a4d8-3fbf6a3c84d1: 154"

• Failure [20.302 seconds]
VNC
/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:54
    with VNC connection
    /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:62
      should allow accessing the VNC device [It]
      /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:64

      Expected error:
          <*kubecli.AsyncSubresourceError | 0xc42067b9e0>: {
              err: "Can't connect to websocket (503): service unavailable\n\n",
              StatusCode: 503,
          }
          Can't connect to websocket (503): service unavailable
      not to have occurred

      /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:109
------------------------------
level=info timestamp=2018-07-31T11:55:13.859320Z pos=utils.go:245 component=tests namespace=kubevirt-test-default name=testvmijvtvp kind=VirtualMachineInstance uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Created virtual machine pod virt-launcher-testvmijvtvp-t5wrb"
level=info timestamp=2018-07-31T11:55:29.428812Z pos=utils.go:245 component=tests namespace=kubevirt-test-default name=testvmijvtvp kind=VirtualMachineInstance uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmijvtvp-t5wrb"
level=info timestamp=2018-07-31T11:55:31.177871Z pos=utils.go:245 component=tests namespace=kubevirt-test-default name=testvmijvtvp kind=VirtualMachineInstance uid=947f8a39-94b8-11e8-b450-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-07-31T11:55:31.195857Z pos=utils.go:245 component=tests namespace=kubevirt-test-default name=testvmijvtvp kind=VirtualMachineInstance uid=947f8a39-94b8-11e8-b450-525500d15501 msg="VirtualMachineInstance started."
STEP: Reading from the VNC socket
Pod name: disks-images-provider-p9txx
Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-rt8kh
Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-bcc6b587d-5tm27
Pod phase: Running
2018/07/31 11:53:08 http: TLS handshake error from 10.244.1.1:36796: EOF
2018/07/31 11:53:18 http: TLS handshake error from 10.244.1.1:36802: EOF
2018/07/31 11:53:28 http: TLS handshake error from 10.244.1.1:36808: EOF
2018/07/31 11:53:38 http: TLS handshake error from 10.244.1.1:36814: EOF
2018/07/31 11:53:48 http: TLS handshake error from 10.244.1.1:36820: EOF
2018/07/31 11:53:58 http: TLS handshake error from 10.244.1.1:36826: EOF
2018/07/31 11:54:08 http: TLS handshake error from 10.244.1.1:36832: EOF
2018/07/31 11:54:18 http: TLS handshake error from 10.244.1.1:36838: EOF
2018/07/31 11:54:28 http: TLS handshake error from 10.244.1.1:36844: EOF
2018/07/31 11:54:38 http: TLS handshake error from 10.244.1.1:36850: EOF
2018/07/31 11:54:48 http: TLS handshake error from 10.244.1.1:36856: EOF
2018/07/31 11:54:58 http: TLS handshake error from 10.244.1.1:36862: EOF
2018/07/31 11:55:08 http: TLS handshake error from 10.244.1.1:36868: EOF
2018/07/31 11:55:18 http: TLS handshake error from 10.244.1.1:36874: EOF
2018/07/31 11:55:28 http: TLS handshake error from 10.244.1.1:36880: EOF
Pod name: virt-api-bcc6b587d-6thbp
Pod phase: Running
2018/07/31 11:53:41 http: TLS handshake error from 10.244.0.1:53482: EOF
2018/07/31 11:53:51 http: TLS handshake error from 10.244.0.1:53542: EOF
2018/07/31 11:54:01 http: TLS handshake error from 10.244.0.1:53602: EOF
2018/07/31 11:54:11 http: TLS handshake error from 10.244.0.1:53662: EOF
2018/07/31 11:54:21 http: TLS handshake error from 10.244.0.1:53722: EOF
2018/07/31 11:54:31 http: TLS handshake error from 10.244.0.1:53784: EOF
2018/07/31 11:54:41 http: TLS handshake error from 10.244.0.1:53844: EOF
2018/07/31 11:54:51 http: TLS handshake error from 10.244.0.1:53904: EOF
2018/07/31 11:55:01 http: TLS handshake error from 10.244.0.1:53964: EOF
2018/07/31 11:55:11 http: TLS handshake error from 10.244.0.1:54024: EOF
level=error timestamp=2018-07-31T11:55:12.023127Z pos=subresource.go:85 component=virt-api msg=
2018/07/31 11:55:12 http: response.WriteHeader on hijacked connection
level=info timestamp=2018-07-31T11:55:12.031943Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2/namespaces/kubevirt-test-default/virtualmachineinstances/testvmikqjnv/console proto=HTTP/1.1 statusCode=200 contentLength=0
level=error timestamp=2018-07-31T11:55:12.023686Z pos=subresource.go:97 component=virt-api reason="read tcp 10.244.0.8:8443->10.244.0.1:48542: use of closed network connection" msg="error ecountered reading from websocket stream"
2018/07/31 11:55:21 http: TLS handshake error from 10.244.0.1:54084: EOF
Pod name: virt-controller-67dcdd8464-8wfdk
Pod phase: Running
level=info timestamp=2018-07-31T11:49:55.482653Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmign52t kind= uid=d7bfe1e0-94b7-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:49:55.603504Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmign52t\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmign52t"
level=info timestamp=2018-07-31T11:50:31.220061Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:50:31.220566Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikqjnv kind= uid=ed111071-94b7-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:51:06.410206Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixs668 kind= uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:51:06.414117Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixs668 kind= uid=01ff8b29-94b8-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:51:06.637360Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixs668\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixs668"
level=info timestamp=2018-07-31T11:51:06.654677Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixs668\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixs668"
level=info timestamp=2018-07-31T11:54:27.557167Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:54:27.560533Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi96nzf kind= uid=79e7ca0b-94b8-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:54:27.840231Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi96nzf\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi96nzf"
level=info timestamp=2018-07-31T11:54:50.126624Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4zvnb kind= uid=875f766e-94b8-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:54:50.128600Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4zvnb kind= uid=875f766e-94b8-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-31T11:55:12.119534Z pos=preset.go:142 component=virt-controller service=http namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-31T11:55:12.121605Z pos=preset.go:171 component=virt-controller service=http namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Marking VirtualMachineInstance as initialized"
Pod name: virt-controller-67dcdd8464-bkbrt
Pod phase: Running
level=info timestamp=2018-07-31T11:52:03.227958Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
Pod name: virt-handler-fsclq
Pod phase: Running
level=info timestamp=2018-07-31T11:55:24.440217Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:55:24.440310Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:55:24.440558Z pos=vm.go:353 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object."
level=info timestamp=2018-07-31T11:55:24.440685Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=error timestamp=2018-07-31T11:55:24.441585Z pos=vm.go:424 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=VirtualMachineInstance uid= reason="connection is shut down" msg="Synchronizing the VirtualMachineInstance failed."
level=info timestamp=2018-07-31T11:55:24.441738Z pos=vm.go:251 component=virt-handler reason="connection is shut down" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmikqjnv"
level=info timestamp=2018-07-31T11:55:29.183329Z pos=vm.go:746 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-31T11:55:29.185631Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmikqjnv, existing: false\n"
level=info timestamp=2018-07-31T11:55:29.186369Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-07-31T11:55:29.186587Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-31T11:55:29.188758Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:55:29.191311Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmikqjnv, existing: false\n"
level=info timestamp=2018-07-31T11:55:29.193032Z pos=vm.go:329 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-07-31T11:55:29.193257Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-31T11:55:29.193416Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmikqjnv kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
Pod name: virt-handler-t8hq9
Pod phase: Running
level=info timestamp=2018-07-31T11:55:30.287781Z pos=vm.go:419 component=virt-handler namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="No update processing required"
level=error timestamp=2018-07-31T11:55:30.307386Z pos=vm.go:431 component=virt-handler namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmijvtvp\": the object has been modified; please apply your changes to the latest version and try again" msg="Updating the VirtualMachineInstance status failed."
level=info timestamp=2018-07-31T11:55:30.307603Z pos=vm.go:251 component=virt-handler reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmijvtvp\": the object has been modified; please apply your changes to the latest version and try again" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmijvtvp"
level=info timestamp=2018-07-31T11:55:30.307953Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmijvtvp, existing: true\n"
level=info timestamp=2018-07-31T11:55:30.308007Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n"
level=info timestamp=2018-07-31T11:55:30.308070Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:55:30.308117Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:55:30.311490Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-31T11:55:30.322132Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-31T11:55:30.322359Z pos=vm.go:313 component=virt-handler msg="Processing vmi testvmijvtvp, existing: true\n"
level=info timestamp=2018-07-31T11:55:30.322414Z pos=vm.go:315 component=virt-handler msg="vmi is in phase: Running\n"
level=info timestamp=2018-07-31T11:55:30.322485Z pos=vm.go:329 component=virt-handler msg="Domain: existing: true\n"
level=info timestamp=2018-07-31T11:55:30.322531Z pos=vm.go:331 component=virt-handler msg="Domain status: Running, reason: Unknown\n"
level=info timestamp=2018-07-31T11:55:30.322688Z pos=vm.go:416 component=virt-handler namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-31T11:55:30.334492Z pos=vm.go:440 component=virt-handler namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvmijvtvp-t5wrb
Pod phase: Running
level=info timestamp=2018-07-31T11:55:29.379956Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-31T11:55:29.384995Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:55:29.611298Z pos=virt-launcher.go:217 component=virt-launcher msg="Detected domain with UUID dbccae8f-d0be-4174-a4d8-3fbf6a3c84d1"
level=info timestamp=2018-07-31T11:55:29.612124Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-31T11:55:30.163651Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-31T11:55:30.199225Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:55:30.208122Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:55:30.227835Z pos=manager.go:189 component=virt-launcher namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Domain started."
level=info timestamp=2018-07-31T11:55:30.228955Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-31T11:55:30.240579Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:55:30.271339Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-31T11:55:30.276925Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-31T11:55:30.321229Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:55:30.333048Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmijvtvp kind= uid=947f8a39-94b8-11e8-b450-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-31T11:55:30.618864Z pos=monitor.go:222 component=virt-launcher msg="Found PID for dbccae8f-d0be-4174-a4d8-3fbf6a3c84d1: 154"

• Failure [0.449 seconds]
VNC
/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:54
    should upgrade subresource connections if an origin header is given
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      for vnc [It]
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

      websocket: bad handshake: 503
      Expected error:
          <*errors.errorString | 0xc420450670>: {
              s: "websocket: bad handshake",
          }
          websocket: bad handshake
      not to have occurred

      /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:151
------------------------------

panic: Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.
But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

	defer GinkgoRecover()

at the top of the goroutine that caused this panic.
goroutine 2759 [running]:
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo.Fail(0xc4200502c0, 0xa4, 0xc420b614f8, 0x1, 0x1)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:262 +0xda
kubevirt.io/kubevirt/tests.KubevirtFailHandler(0xc4200502c0, 0xa4, 0xc420b614f8, 0x1, 0x1)
	/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1676 +0x6c1
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc420799e00, 0x14c4b60, 0x1d7c938, 0x0, 0x0, 0x0, 0x0, 0x1d7c938)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:69 +0x1ed
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).ToNot(0xc420799e00, 0x14c4b60, 0x1d7c938, 0x0, 0x0, 0x0, 0xc420799e00)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:39 +0xae
kubevirt.io/kubevirt/tests_test.glob..func23.1.2.1.2(0xc420b30088, 0xc420bc4660)
	/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:94 +0x153
created by kubevirt.io/kubevirt/tests_test.glob..func23.1.2.1
	/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:88 +0x3df
make: *** [functest] Error 2
+ make cluster-down
./cluster/down.sh