+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release
+ [[ k8s-1.11.0-release =~ openshift-.* ]]
+ [[ k8s-1.11.0-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.11.0
+ KUBEVIRT_PROVIDER=k8s-1.11.0
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/08/06 11:45:59 Waiting for host: 192.168.66.101:22
2018/08/06 11:46:02 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/06 11:46:10 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/08/06 11:46:15 Connected to tcp://192.168.66.101:22
++ systemctl status docker
++ grep active
++ wc -l
+ [[ 1 -eq 0 ]]
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0806 11:46:15.900786 1256 feature_gate.go:230] feature gates: &{map[]}
I0806 11:46:16.001242 1256 kernel_validator.go:81] Validating kernel version
I0806 11:46:16.001516 1256 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 57.005117 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation
[bootstraptoken] using token: abcdef.1234567890123456
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:293b5d249115ca127257612b9846c7b3c79a587b3a7ee924caa78fc26a9b3bf7 + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.extensions/kube-flannel-ds created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node/node01 untainted + kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f /tmp/local-volume.yaml storageclass.storage.k8s.io/local created configmap/local-storage-config created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-pv-binding created clusterrole.rbac.authorization.k8s.io/local-storage-provisioner-node-clusterrole created clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-node-binding created role.rbac.authorization.k8s.io/local-storage-provisioner-jobs-role created rolebinding.rbac.authorization.k8s.io/local-storage-provisioner-jobs-rolebinding created serviceaccount/local-storage-admin created daemonset.extensions/local-volume-provisioner created 2018/08/06 11:47:32 Waiting for host: 192.168.66.102:22 2018/08/06 11:47:35 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/08/06 11:47:43 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/08/06 11:47:48 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: connection refused. Sleeping 5s 2018/08/06 11:47:53 Connected to tcp://192.168.66.102:22 ++ grep active ++ systemctl status docker ++ wc -l + [[ 1 -eq 0 ]] + kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true [preflight] running pre-flight checks [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}] you can solve this problem with following methods: 1. Run 'modprobe -- ' to load missing kernel modules; 2. 
Provide the missing builtin kernel ipvs support I0806 11:47:54.374497 1258 kernel_validator.go:81] Validating kernel version I0806 11:47:54.374695 1258 kernel_validator.go:96] Validating kernel config [discovery] Trying to connect to API Server "192.168.66.101:6443" [discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443" [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443" [discovery] Successfully established connection with API Server "192.168.66.101:6443" [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [preflight] Activating the kubelet service [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap... [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation This node has joined the cluster: * Certificate signing request was sent to master and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the master to see this node join the cluster. Sending file modes: C0755 38739968 kubectl Sending file modes: C0600 5454 admin.conf Cluster "kubernetes" set. Cluster "kubernetes" set. + set +e + kubectl get nodes --no-headers + cluster/kubectl.sh get nodes --no-headers node01 Ready master 1m v1.11.0 node02 Ready 30s v1.11.0 + kubectl_rc=0 + '[' 0 -ne 0 ']' ++ kubectl get nodes --no-headers ++ cluster/kubectl.sh get nodes --no-headers ++ grep NotReady + '[' -n '' ']' + set -e + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 1m v1.11.0 node02 Ready 31s v1.11.0 + make cluster-sync ./cluster/build.sh Building ... 
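[Editor's note] The bring-up that just completed follows a simple pattern: register a teardown trap, start the ephemeral cluster, wait for SSH on each node, then gate on node readiness. A minimal stand-alone sketch of that flow (paths and addresses taken from this log; note that SIGSTOP, listed in the trap above, cannot actually be caught by a shell and is effectively ignored):

    #!/bin/bash
    set -euo pipefail

    # Tear the cluster down again on any exit path.
    trap 'make cluster-down' EXIT SIGINT SIGTERM

    make cluster-up

    # Probe a node's SSH port until it answers, mirroring the
    # "Waiting for host ... Sleeping 5s" messages above.
    until nc -z -w 2 192.168.66.101 22; do
        echo "Problem with dial. Sleeping 5s"
        sleep 5
    done

    # Fail if any node still reports NotReady.
    if cluster/kubectl.sh get nodes --no-headers | grep -q NotReady; then
        echo "some nodes are not ready" >&2
        exit 1
    fi
    echo 'Nodes are ready:'

The join command printed by kubeadm init pins the cluster CA via --discovery-token-ca-cert-hash; the node02 join above skips that pinning with --discovery-token-unsafe-skip-ca-verification=true instead, which is acceptable for a throwaway test cluster but not for production. To recompute the hash on the master, the standard kubeadm recipe is:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'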
Untagged: localhost:33294/kubevirt/virt-controller:devel
Untagged: localhost:33294/kubevirt/virt-controller@sha256:af9d426c3206ad9d8e0abcb56dc46f4da789c1465fb78f8f7c5ad52144d9c7b3
Deleted: sha256:fc2f4b417cd9c91a0089b7458f07b12f09a5bffd6a25c9f0c08454eea4c9eb82
Deleted: sha256:2f261a212306f6a63b4f4c7c63fce5e87d9d21d71a1fcdc5eaf709b6ee468be6
Deleted: sha256:c2785354f05a7da6b9556d7b5fdf638bf0785f95779cc2ff0778f15ab53f7e68
Deleted: sha256:2b599bf784d5bee40e8a5d8c533b2117ca632d3b869d52408dda80b58c3deda0
Untagged: localhost:33294/kubevirt/virt-launcher:devel
Untagged: localhost:33294/kubevirt/virt-launcher@sha256:18d86fcb0cab32e94444ac92afb0a3b99b525fb398bf7c4de6006de23161eef2
Deleted: sha256:d04f8a3d82dceca875fd06fbcc07479eb98d37d619aea78913697c8ba257515b
Deleted: sha256:7aa045a1718352a7d299e5f96f8e8c72bf9ec0201c5d209aa2e47a3233123d95
Deleted: sha256:cd451d0eb38067b9ee03079a8af99acd742034186e39c14e7dc2437cc19fc442
Deleted: sha256:6ec3a88c3f6fbeaeb22b5ca1f649415478c8e3aca193febdcce7009a2bc83984
Deleted: sha256:66c7d733ad4324475e0a91ab0f352eb20962c30e34c1ca3c74c3e5be1385c5cc
Deleted: sha256:b5b1debbe7db69947ef431779c9860154595bb88668f7dc8c1122d3e59494f33
Deleted: sha256:edb83692970fc2f6fce343db44f9e5656e60d8cdc39f9cfd77eed404dd98a5cd
Deleted: sha256:f01f55c21eb19a081deac6396b7d5d70e786eb0b90ef803506ff8e2c6d563e1c
Deleted: sha256:9641cbf2320178323563068cf3f87e1d345b1decb07902f0c4c9a64fefffd635
Deleted: sha256:7c1279f5d29c04677d8a931093074d40c98bbb101905f97de90a039ccca2a7ae
Untagged: localhost:33294/kubevirt/virt-handler:devel
Untagged: localhost:33294/kubevirt/virt-handler@sha256:38d0deea498f8b91c9f672921eabd43878010dd1c4f4cf2e87ff39e513e4586a
Deleted: sha256:f883ee1cafcdfc740dce921052e2c0ce4b59950a9a62660f013d2f113bbfffad
Deleted: sha256:19987a7ebd2c0836016e780790e9fdd5b8adc7973234356776499043960322bf
Deleted: sha256:f616002f574c9b4de509acf20deaab2e00cee269839bf22e10e867ee81e50035
Deleted: sha256:67caad5d99a7887656c24d149bf000290fd42ff0795bab81375ddd9db657c542
Untagged: localhost:33294/kubevirt/virt-api:devel
Untagged: localhost:33294/kubevirt/virt-api@sha256:1072bc6d05cabe670b9566acb313001cedc6e1282aab5209309dfb9ce5d03701
Deleted: sha256:7dd210d505ffb2d1ef115fa807d5ebad3ca4cc84c00a827e775322e523ecb021
Deleted: sha256:861b63c1882580197daef76c6873d19b463f8905b8ccce392590f7fc98002e8f
Deleted: sha256:4ad38c827bb7d61664677daca8add1d4a76679ea1517a0461f4df525d0bf45f7
Deleted: sha256:ab8eba6fb6ac7fa1da0f031009ceb697e252d0d96e1d267eeacc736db442318f
Untagged: localhost:33294/kubevirt/subresource-access-test:devel
Untagged: localhost:33294/kubevirt/subresource-access-test@sha256:4c8f0aa8351c46eb0128b0eff185e0f445746420aa86b5009e27e9f5414147a4
Deleted: sha256:1d2112871251eb70275edef96aa0c7c5056b6b98d01c5186b7814c73184436bd
Deleted: sha256:681bbf037fd151b729a8b845cee53fb97b7e25ad641dd5ef10bf12a4a827c51b
Deleted: sha256:055b1d21df9e417a2933202f714887b1956000664bcf435b25186c4c3a9408f0
Deleted: sha256:6e1acbc7ed6fb0a1b12ddf86de672eedc77a5d7fa939176eb3c10b94135a192a
Untagged: localhost:33203/kubevirt/example-hook-sidecar:devel
Untagged: localhost:33203/kubevirt/example-hook-sidecar@sha256:4d2d61c9e8892ee28a3eb49fd972b13e5ee375b9ab4b3d00122e970085a5a157
Deleted: sha256:486fdd0addac310de2f95d7b5ba5c76a55cd67853469139e39c7aad942c117a7
Deleted: sha256:9d98d1d71fc04ccf9aa24d45e2673affa101ace31b05efcf3fc5f20fa723bb8a
Deleted: sha256:2832cba81b71bd03387b386e1723a8acf8c1c502eccf6e2b1bdeda76fbbcd791
Deleted: sha256:cd511793deb75e837cfcd6bf0f15f0e846fc35ccf0db54398bcd3fdc4a25f159
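[Editor's note] These Untagged/Deleted lines are the build script clearing the previous run's devel-tagged images; the registry ports 33294 and 33203 are leftovers from earlier ephemeral registries, while the current run pushes to 33681. A hedged sketch of an equivalent sweep, assuming GNU xargs:

    # Remove every locally cached kubevirt image tagged :devel, whatever
    # registry port the previous ephemeral provider happened to allocate.
    docker images --format '{{.Repository}}:{{.Tag}}' \
      | grep '/kubevirt/.*:devel$' \
      | xargs -r docker rmi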
sha256:dcf2b21fa2ed11dcf9dbba21b1cca0ee3fad521a0e9aee61c06d0b0b66a4b200
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:dcf2b21fa2ed11dcf9dbba21b1cca0ee3fad521a0e9aee61c06d0b0b66a4b200
go version go1.10 linux/amd64
go version go1.10 linux/amd64
find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory
Compiling tests...
compiled tests.test
hack/build-docker.sh build
Sending build context to Docker daemon 40.44 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> bfe77d5699ed
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller
 ---> Using cache
 ---> b00c84523b53
Step 4/8 : WORKDIR /home/virt-controller
 ---> Using cache
 ---> b76b8bd8cd39
Step 5/8 : USER 1001
 ---> Using cache
 ---> b6d9ad9ed232
Step 6/8 : COPY virt-controller /usr/bin/virt-controller
 ---> ef42f87034ce
Removing intermediate container 75af4ed51547
Step 7/8 : ENTRYPOINT /usr/bin/virt-controller
 ---> Running in 779d8eb7e829
 ---> 43a9f1016336
Removing intermediate container 779d8eb7e829
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release2" '' "virt-controller" ''
 ---> Running in baf21ce0e59e
 ---> 74c326afce45
Removing intermediate container baf21ce0e59e
Successfully built 74c326afce45
Sending build context to Docker daemon 43.38 MB
Step 1/10 : FROM kubevirt/libvirt:4.2.0
 ---> 5f0bfe81a3e0
Step 2/10 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 945996802736
Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107
 ---> Using cache
 ---> 672f9ab56316
Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher
 ---> d54bb0434f93
Removing intermediate container 5fe3f2312369
Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt
 ---> b21ed87b41df
Removing intermediate container 2f8d06ed0136
Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64
 ---> Running in 5d46741f8e68
 ---> 276e2a8c353d
Removing intermediate container 5d46741f8e68
Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher
 ---> Running in 27adf914a114
 ---> fb2e80e44bbc
Removing intermediate container 27adf914a114
Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector /usr/share/kubevirt/virt-launcher/
 ---> c08841094cbf
Removing intermediate container c60b8b3735ea
Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh
 ---> Running in 80a5953a0da6
 ---> 7ef793c81b95
Removing intermediate container 80a5953a0da6
Step 10/10 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release2" '' "virt-launcher" ''
 ---> Running in 5f9f485cfa12
 ---> b97e391dbcfd
Removing intermediate container 5f9f485cfa12
Successfully built b97e391dbcfd
Sending build context to Docker daemon 39.95 MB
Step 1/5 : FROM fedora:28
 ---> cc510acfcd70
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> bfe77d5699ed
Step 3/5 : COPY virt-handler /usr/bin/virt-handler
 ---> 283aff11f9d9
Removing intermediate container fa874bbc31e1
Step 4/5 : ENTRYPOINT /usr/bin/virt-handler
 ---> Running in 36812af537e4
 ---> b875a5175729
Removing intermediate container 36812af537e4
Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release2" '' "virt-handler" ''
 ---> Running in c6f5c99bb37d
 ---> 60fbaa2e6a52
Removing intermediate container c6f5c99bb37d
Successfully built 60fbaa2e6a52
Sending build context to Docker daemon 38.86 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> bfe77d5699ed
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api
 ---> Using cache
 ---> ed1ebf600ee1
Step 4/8 : WORKDIR /home/virt-api
 ---> Using cache
 ---> 0769dad023e5
Step 5/8 : USER 1001
 ---> Using cache
 ---> 0cb65afb0c2b
Step 6/8 : COPY virt-api /usr/bin/virt-api
 ---> dec4563b080e
Removing intermediate container e1c0e603b08e
Step 7/8 : ENTRYPOINT /usr/bin/virt-api
 ---> Running in fdc647e212de
 ---> d8c0bbeacf69
Removing intermediate container fdc647e212de
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release2" '' "virt-api" ''
 ---> Running in c1d998769d70
 ---> 18325e08d48f
Removing intermediate container c1d998769d70
Successfully built 18325e08d48f
Sending build context to Docker daemon 4.096 kB
Step 1/7 : FROM fedora:28
 ---> cc510acfcd70
Step 2/7 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> bfe77d5699ed
Step 3/7 : ENV container docker
 ---> Using cache
 ---> 62847a2a1fa8
Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img
 ---> Using cache
 ---> 02134835a6aa
Step 5/7 : ADD entrypoint.sh /
 ---> Using cache
 ---> ec0843818da7
Step 6/7 : CMD /entrypoint.sh
 ---> Using cache
 ---> 754029bb4bd2
Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.11.0-release2" ''
 ---> Using cache
 ---> 1220ce6ff0fa
Successfully built 1220ce6ff0fa
Sending build context to Docker daemon 2.56 kB
Step 1/5 : FROM fedora:28
 ---> cc510acfcd70
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> bfe77d5699ed
Step 3/5 : ENV container docker
 ---> Using cache
 ---> 62847a2a1fa8
Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all
 ---> Using cache
 ---> 207487abe7b2
Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release2" '' "vm-killer" ''
 ---> Using cache
 ---> e2940dd6b38f
Successfully built e2940dd6b38f
Sending build context to Docker daemon 5.12 kB
Step 1/7 : FROM debian:sid
 ---> 68f33cf86aab
Step 2/7 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> 5734d749eb5c
Step 3/7 : ENV container docker
 ---> Using cache
 ---> f8775a77966f
Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> 1a40cf222a61
Step 5/7 : ADD entry-point.sh /
 ---> Using cache
 ---> 77b545d92fe7
Step 6/7 : CMD /entry-point.sh
 ---> Using cache
 ---> dfe20d463305
Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release2" '' "registry-disk-v1alpha" ''
 ---> Using cache
 ---> b90f9ac6e4b9
Successfully built b90f9ac6e4b9
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33681/kubevirt/registry-disk-v1alpha:devel
 ---> b90f9ac6e4b9
Step 2/4 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> bf4321f1bdcf
Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img
 ---> Using cache
 ---> fdb5aa18f4f6
Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.11.0-release2" ''
 ---> Using cache
 ---> a3970deead12
Successfully built a3970deead12
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33681/kubevirt/registry-disk-v1alpha:devel
 ---> b90f9ac6e4b9
Step 2/4 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 3fbeaa31b861
Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2
 ---> Using cache
 ---> 2f8d65aae622
Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.11.0-release2" ''
 ---> Using cache
 ---> 8e0988b9a102
Successfully built 8e0988b9a102
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33681/kubevirt/registry-disk-v1alpha:devel
 ---> b90f9ac6e4b9
Step 2/4 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 3fbeaa31b861
Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso
 ---> Using cache
 ---> 61427d5da613
Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.11.0-release2" ''
 ---> Using cache
 ---> 64c1d6221966
Successfully built 64c1d6221966
Sending build context to Docker daemon 35.65 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> bfe77d5699ed
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl
 ---> Using cache
 ---> 985fe391c056
Step 4/8 : WORKDIR /home/virtctl
 ---> Using cache
 ---> 3b2cae8ac543
Step 5/8 : USER 1001
 ---> Using cache
 ---> 0c06e5b4a900
Step 6/8 : COPY subresource-access-test /subresource-access-test
 ---> 72edde4e4d60
Removing intermediate container d3671e6826de
Step 7/8 : ENTRYPOINT /subresource-access-test
 ---> Running in 563e8cffc468
 ---> ea9e2a0b66ff
Removing intermediate container 563e8cffc468
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release2" '' "subresource-access-test" ''
 ---> Running in 5fdd635011de
 ---> c29a279d3de8
Removing intermediate container 5fdd635011de
Successfully built c29a279d3de8
Sending build context to Docker daemon 3.072 kB
Step 1/9 : FROM fedora:28
 ---> cc510acfcd70
Step 2/9 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> bfe77d5699ed
Step 3/9 : ENV container docker
 ---> Using cache
 ---> 62847a2a1fa8
Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all
 ---> Using cache
 ---> d3456b1644b1
Step 5/9 : ENV GIMME_GO_VERSION 1.9.2
 ---> Using cache
 ---> 0ba81fddbba1
Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> 5d33abe3f819
Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> 783826523be1
Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli
 ---> Using cache
 ---> 711bc8d15952
Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.11.0-release2" '' "winrmcli" ''
 ---> Using cache
 ---> b4a60e1f700b
Successfully built b4a60e1f700b
Sending build context to Docker daemon 36.85 MB
Step 1/5 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> e3238544ad97
Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar
 ---> 519bba702293
Removing intermediate container b5fb10e296ce
Step 4/5 : ENTRYPOINT /example-hook-sidecar
 ---> Running in c3ff65588869
 ---> 2e486e61d716
Removing intermediate container c3ff65588869
Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-k8s-1.11.0-release2" ''
 ---> Running in f5b79bfe69ae
 ---> 1a0ca3e7ca11
Removing intermediate container f5b79bfe69ae
Successfully built 1a0ca3e7ca11
hack/build-docker.sh push
The push refers to a repository [localhost:33681/kubevirt/virt-controller]
fac649934811: Preparing
aa89340cf7a8: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Waiting
aa89340cf7a8: Pushed
fac649934811: Pushed
891e1e4ef82a: Pushed
devel: digest: sha256:7078591b8ffd5cbf3d6683df7026f52da4b5d50a0bfc4bbb58792530f8473ad7 size: 949
The push refers to a repository [localhost:33681/kubevirt/virt-launcher]
aec9e52fc848: Preparing
10887b8b0b6a: Preparing
5e80362d6567: Preparing
6b69299dee88: Preparing
11e4a7cb1550: Preparing
633427c64a24: Preparing
da38cf808aa5: Preparing
b83399358a92: Preparing
186d8b3e4fd8: Preparing
10887b8b0b6a: Waiting
5e80362d6567: Waiting
fa6154170bf5: Preparing
5eefb9960a36: Preparing
da38cf808aa5: Waiting
aec9e52fc848: Waiting
891e1e4ef82a: Preparing
b83399358a92: Waiting
6b69299dee88: Waiting
11e4a7cb1550: Waiting
186d8b3e4fd8: Waiting
fa6154170bf5: Waiting
891e1e4ef82a: Waiting
5eefb9960a36: Waiting
633427c64a24: Waiting
aec9e52fc848: Pushed
10887b8b0b6a: Pushed
6b69299dee88: Pushed
5e80362d6567: Pushed
da38cf808aa5: Pushed
b83399358a92: Pushed
186d8b3e4fd8: Pushed
fa6154170bf5: Pushed
633427c64a24: Pushed
891e1e4ef82a: Mounted from kubevirt/virt-controller
11e4a7cb1550: Pushed
5eefb9960a36: Pushed
devel: digest: sha256:a54ccea419b2892b9a407811e11e9661001b45518e5f151186c934f5f31e5227 size: 2828
The push refers to a repository [localhost:33681/kubevirt/virt-handler]
2bd9027d07f0: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-launcher
2bd9027d07f0: Pushed
devel: digest: sha256:2dcd08f11982b3a07572603ab8072cbc01a98addea7d36356c2c6de13a8885a3 size: 741
The push refers to a repository [localhost:33681/kubevirt/virt-api]
446a8a2b8638: Preparing
82fc744c99b4: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-handler
82fc744c99b4: Pushed
446a8a2b8638: Pushed
devel: digest: sha256:3aa9bf1f7d13a16b031812e4a0b3c9cedfe088844d41bfca59cc6983bf306915 size: 948
The push refers to a repository [localhost:33681/kubevirt/disks-images-provider]
71ad31feb2c5: Preparing
21d4b721776e: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-api
71ad31feb2c5: Pushed
21d4b721776e: Pushed
devel: digest: sha256:8248c33d4f2cd30ad33251df9173b3ecad245afebd777a5171ab2e204d28df4a size: 948
The push refers to a repository [localhost:33681/kubevirt/vm-killer]
c4cfadeeaf5f: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/disks-images-provider
c4cfadeeaf5f: Pushed
devel: digest: sha256:47714e82b2e2d1b6dc3e1e584d4a04373fb18b38d97dac6b3a7d35ec336a7166 size: 740
The push refers to a repository [localhost:33681/kubevirt/registry-disk-v1alpha]
661cce8d8e52: Preparing
41e0baba3077: Preparing
25edbec0eaea: Preparing
661cce8d8e52: Pushed
41e0baba3077: Pushed
25edbec0eaea: Pushed
devel: digest: sha256:b7e540ff190967aaaa59b6d29709634fc580702f074373817a5746502655f2d2 size: 948
The push refers to a repository [localhost:33681/kubevirt/cirros-registry-disk-demo]
861539d118fb: Preparing
661cce8d8e52: Preparing
41e0baba3077: Preparing
25edbec0eaea: Preparing
25edbec0eaea: Mounted from kubevirt/registry-disk-v1alpha
41e0baba3077: Mounted from kubevirt/registry-disk-v1alpha
661cce8d8e52: Mounted from kubevirt/registry-disk-v1alpha
861539d118fb: Pushed
devel: digest: sha256:2bc0ec7ac7d5b07023e89869d99234aa30109772137ec2bd538ee08ef1b22c4e size: 1160
The push refers to a repository [localhost:33681/kubevirt/fedora-cloud-registry-disk-demo]
3c128f86e56a: Preparing
661cce8d8e52: Preparing
41e0baba3077: Preparing
25edbec0eaea: Preparing
25edbec0eaea: Mounted from kubevirt/cirros-registry-disk-demo
41e0baba3077: Mounted from kubevirt/cirros-registry-disk-demo
661cce8d8e52: Mounted from kubevirt/cirros-registry-disk-demo
3c128f86e56a: Pushed
devel: digest: sha256:8f6b51e1dbe7c16a62004d8889773ad4fb893166257d7435ee5e70676642297e size: 1161
The push refers to a repository [localhost:33681/kubevirt/alpine-registry-disk-demo]
9a9e79d66e6a: Preparing
661cce8d8e52: Preparing
41e0baba3077: Preparing
25edbec0eaea: Preparing
661cce8d8e52: Mounted from kubevirt/fedora-cloud-registry-disk-demo
25edbec0eaea: Mounted from kubevirt/fedora-cloud-registry-disk-demo
41e0baba3077: Mounted from kubevirt/fedora-cloud-registry-disk-demo
9a9e79d66e6a: Pushed
devel: digest: sha256:380b93b3e6cf2189585f4f3ff9823125aa6af7d4218da5544444489de4c87fd9 size: 1160
The push refers to a repository [localhost:33681/kubevirt/subresource-access-test]
03f54a61724f: Preparing
25cb73590a9d: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/vm-killer
25cb73590a9d: Pushed
03f54a61724f: Pushed
devel: digest: sha256:cfc49d5f475c96e74706b0cfda4cf8bd57bb69d7bb9111aeab4fe28f9d49b912 size: 948
The push refers to a repository [localhost:33681/kubevirt/winrmcli]
f8083e002d0b: Preparing
53c709abc882: Preparing
9ca98a0f492b: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Waiting
f8083e002d0b: Pushed
891e1e4ef82a: Mounted from kubevirt/subresource-access-test
9ca98a0f492b: Pushed
53c709abc882: Pushed
devel: digest: sha256:2bb0f2a7c6a6c084c1e57bd409bf447d7542882fdcc434f452f3d919561dd272 size: 1165
The push refers to a repository [localhost:33681/kubevirt/example-hook-sidecar]
b407cbb19352: Preparing
39bae602f753: Preparing
b407cbb19352: Pushed
39bae602f753: Pushed
devel: digest: sha256:3bcc583e429c5dce722b33158852b4aa3befce35edf8367206d2ea94d1de4e1a size: 740
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt'
Done
./cluster/clean.sh
+ source hack/common.sh
++++ dirname 'hack/common.sh[0]'
+++ cd hack/../
+++ pwd
++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt
++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out
++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/vendor
++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/cmd
++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/tests
++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/apidocs
++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/manifests
++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests
++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/client-python
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ KUBEVIRT_NUM_NODES=2
++ '[' -z kubevirt-functional-tests-k8s-1.11.0-release ']'
++ provider_prefix=kubevirt-functional-tests-k8s-1.11.0-release2
++ job_prefix=kubevirt-functional-tests-k8s-1.11.0-release2
+++ kubevirt_version
+++ '[' -n '' ']'
+++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/.git ']'
++++ git describe --always --tags
+++ echo v0.7.0-185-g69d7e33
++ KUBEVIRT_VERSION=v0.7.0-185-g69d7e33
+ source cluster/k8s-1.11.0/provider.sh
++ set -e
++ image=k8s-1.11.0@sha256:6c1caf5559eb02a144bf606de37eb0194c06ace4d77ad4561459f3bde876151c
++ source cluster/ephemeral-provider-common.sh
+++ set -e
+++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a'
+ source hack/config.sh
++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh
+++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar'
+++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar'
+++ docker_prefix=kubevirt
+++ docker_tag=latest
+++ master_ip=192.168.200.2
+++ network_provider=flannel
+++ namespace=kube-system
++ test -f hack/config-provider-k8s-1.11.0.sh
++ source hack/config-provider-k8s-1.11.0.sh
+++ master_ip=127.0.0.1
+++ docker_tag=devel
+++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig
+++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl
+++ docker_prefix=localhost:33681/kubevirt
+++ manifest_docker_prefix=registry:5000/kubevirt
++ test -f hack/config-local.sh
++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace
+ echo 'Cleaning up ...'
Cleaning up ...
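[Editor's note] What follows is the clean.sh sweep: for each target namespace it deletes every resource kind carrying the kubevirt.io label, tolerating "No resources found" on a fresh cluster. Condensed into a stand-alone sketch (the kind list is abridged from the trace below):

    # Delete all kubevirt.io-labeled resources in each namespace.
    for ns in default kube-system; do
        for kind in apiservices deployment rs services validatingwebhookconfiguration \
                    secrets pv pvc ds customresourcedefinitions pods \
                    clusterrolebinding rolebinding roles clusterroles serviceaccounts; do
            cluster/k8s-1.11.0/.kubectl -n "$ns" delete "$kind" -l kubevirt.io
        done
    done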
+ cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers
+ grep foregroundDeleteVirtualMachine
+ read p
error: the server doesn't have a resource type "vmis"
+ _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0
No resources found
+ _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0
No resources found
+ _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0
No resources found
+ namespaces=(default ${namespace})
+ for i in '${namespaces[@]}'
+ _kubectl -n default delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n default delete deployment -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete deployment -l kubevirt.io
No resources found
+ _kubectl -n default delete rs -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete rs -l kubevirt.io
No resources found
+ _kubectl -n default delete services -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete services -l kubevirt.io
No resources found
+ _kubectl -n default delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io
No resources found
+ _kubectl -n default delete secrets -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete secrets -l kubevirt.io
No resources found
+ _kubectl -n default delete pv -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete pv -l kubevirt.io
No resources found
+ _kubectl -n default delete pvc -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete pvc -l kubevirt.io
No resources found
+ _kubectl -n default delete ds -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete ds -l kubevirt.io
No resources found
+ _kubectl -n default delete customresourcedefinitions -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete customresourcedefinitions -l kubevirt.io
No resources found
+ _kubectl -n default delete pods -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete pods -l kubevirt.io
No resources found
+ _kubectl -n default delete clusterrolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete clusterrolebinding -l kubevirt.io
No resources found
+ _kubectl -n default delete rolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete rolebinding -l kubevirt.io
No resources found
+ _kubectl -n default delete roles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete roles -l kubevirt.io
No resources found
+ _kubectl -n default delete clusterroles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete clusterroles -l kubevirt.io
No resources found
+ _kubectl -n default delete serviceaccounts -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete serviceaccounts -l kubevirt.io
No resources found
++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io
++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ cluster/k8s-1.11.0/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io
++ wc -l
No resources found.
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found
+ '[' 0 -gt 0 ']'
+ for i in '${namespaces[@]}'
+ _kubectl -n kube-system delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete deployment -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete deployment -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete rs -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete rs -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete services -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete services -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete secrets -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete secrets -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pv -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete pv -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pvc -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete pvc -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete ds -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete ds -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pods -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete pods -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete rolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete rolebinding -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete roles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete roles -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete clusterroles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterroles -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete serviceaccounts -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io
No resources found
++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io
++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ wc -l
++ cluster/k8s-1.11.0/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io
No resources found.
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found
+ '[' 0 -gt 0 ']'
+ sleep 2
+ echo Done
Done
./cluster/deploy.sh
+ source hack/common.sh
++++ dirname 'hack/common.sh[0]'
+++ cd hack/../
+++ pwd
++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt
++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out
++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/vendor
++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/cmd
++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/tests
++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/apidocs
++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/manifests
++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests
++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/client-python
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ KUBEVIRT_NUM_NODES=2
++ '[' -z kubevirt-functional-tests-k8s-1.11.0-release ']'
++ provider_prefix=kubevirt-functional-tests-k8s-1.11.0-release2
++ job_prefix=kubevirt-functional-tests-k8s-1.11.0-release2
+++ kubevirt_version
+++ '[' -n '' ']'
+++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/.git ']'
++++ git describe --always --tags
+++ echo v0.7.0-185-g69d7e33
++ KUBEVIRT_VERSION=v0.7.0-185-g69d7e33
+ source cluster/k8s-1.11.0/provider.sh
++ set -e
++ image=k8s-1.11.0@sha256:6c1caf5559eb02a144bf606de37eb0194c06ace4d77ad4561459f3bde876151c
++ source cluster/ephemeral-provider-common.sh
+++ set -e
+++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a'
+ source hack/config.sh
++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh
+++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar'
+++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar'
+++ docker_prefix=kubevirt
+++ docker_tag=latest
+++ master_ip=192.168.200.2
+++ network_provider=flannel
+++ namespace=kube-system
++ test -f hack/config-provider-k8s-1.11.0.sh
++ source hack/config-provider-k8s-1.11.0.sh
+++ master_ip=127.0.0.1
+++ docker_tag=devel
+++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig
+++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl
+++ docker_prefix=localhost:33681/kubevirt
+++ manifest_docker_prefix=registry:5000/kubevirt
++ test -f hack/config-local.sh
++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace
+ echo 'Deploying ...'
Deploying ...
+ [[ -z k8s-1.11.0-release ]]
+ [[ k8s-1.11.0-release =~ .*-dev ]]
+ [[ k8s-1.11.0-release =~ .*-release ]]
+ for manifest in '${MANIFESTS_OUT_DIR}/release/*'
+ [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]]
+ continue
+ for manifest in '${MANIFESTS_OUT_DIR}/release/*'
+ [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]]
+ _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml
clusterrole.rbac.authorization.k8s.io/kubevirt.io:admin created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:edit created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:view created
serviceaccount/kubevirt-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver-auth-delegator created
rolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created
role.rbac.authorization.k8s.io/kubevirt-apiserver created
clusterrole.rbac.authorization.k8s.io/kubevirt-apiserver created
clusterrole.rbac.authorization.k8s.io/kubevirt-controller created
serviceaccount/kubevirt-controller created
serviceaccount/kubevirt-privileged created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller-cluster-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-privileged-cluster-admin created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:default created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt.io:default created
service/virt-api created
deployment.extensions/virt-api created
deployment.extensions/virt-controller created
daemonset.extensions/virt-handler created
customresourcedefinition.apiextensions.k8s.io/virtualmachineinstances.kubevirt.io created
customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancereplicasets.kubevirt.io created
customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancepresets.kubevirt.io created
customresourcedefinition.apiextensions.k8s.io/virtualmachines.kubevirt.io created
customresourcedefinition.apiextensions.k8s.io/kubevirtconfigs.kubevirt.io created
+ _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R
persistentvolumeclaim/disk-alpine created
persistentvolume/host-path-disk-alpine created
persistentvolumeclaim/disk-custom created
persistentvolume/host-path-disk-custom created
daemonset.extensions/disks-images-provider created
serviceaccount/kubevirt-testing created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-testing-cluster-admin created
+ [[ k8s-1.11.0 =~ os-* ]]
+ echo Done
Done
+ namespaces=(kube-system default)
+ [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]]
+ timeout=300
+ sample=30
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n 'virt-api-bcc6b587d-9pbmx 0/1 ContainerCreating 0 4s virt-api-bcc6b587d-z6t6r 0/1 ContainerCreating 0 4s virt-controller-67dcdd8464-68pjr 0/1 ContainerCreating 0 4s virt-controller-67dcdd8464-vhg8t 0/1 ContainerCreating 0 4s virt-handler-hjk99 0/1 ContainerCreating 0 4s virt-handler-ncdgb 0/1 ContainerCreating 0 4s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ cluster/kubectl.sh get pods -n kube-system --no-headers
+ grep -v Running
virt-api-bcc6b587d-9pbmx           0/1       ContainerCreating   0         5s
virt-api-bcc6b587d-z6t6r           0/1       ContainerCreating   0         5s
virt-controller-67dcdd8464-68pjr   0/1       ContainerCreating   0         5s
virt-controller-67dcdd8464-vhg8t   0/1       ContainerCreating   0         5s
virt-handler-hjk99                 0/1       ContainerCreating   0         5s
virt-handler-ncdgb                 0/1       ContainerCreating   0         5s
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
+ '[' -n 'false false false' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
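[Editor's note] The deploy gate traced here is two polling loops with a 300s budget sampled every 30s: first until no pod reports a non-Running phase, then until no container reports ready=false (a pod can be Running while a container is still unready, which is why both checks exist; the traced script also resets its counter between the loops). A stand-alone sketch of the pattern:

    # Poll a condition every $sample seconds, up to $timeout seconds total.
    timeout=300; sample=30; current_time=0
    while [ -n "$(cluster/kubectl.sh get pods -n kube-system --no-headers | grep -v Running)" ]; do
        echo 'Waiting for kubevirt pods to enter the Running state ...'
        sleep "$sample"
        current_time=$((current_time + sample))
        [ "$current_time" -gt "$timeout" ] && exit 1
    done
    current_time=0
    while cluster/kubectl.sh get pods -n kube-system \
            '-ocustom-columns=status:status.containerStatuses[*].ready' \
            --no-headers | grep -q false; do
        echo 'Waiting for KubeVirt containers to become ready ...'
        sleep "$sample"
        current_time=$((current_time + sample))
        [ "$current_time" -gt "$timeout" ] && exit 1
    done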
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ grep false
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
false
false
false
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-7fqfx           1/1       Running   0          17m
coredns-78fcdf6894-7vpv7           1/1       Running   0          17m
disks-images-provider-5pdcr        1/1       Running   0          1m
disks-images-provider-8bqjc        1/1       Running   0          1m
etcd-node01                        1/1       Running   0          17m
kube-apiserver-node01              1/1       Running   0          17m
kube-controller-manager-node01     1/1       Running   0          17m
kube-flannel-ds-qkdcp              1/1       Running   0          17m
kube-flannel-ds-s9kx4              1/1       Running   0          17m
kube-proxy-7xgch                   1/1       Running   0          17m
kube-proxy-m9glf                   1/1       Running   0          17m
kube-scheduler-node01              1/1       Running   0          17m
virt-api-bcc6b587d-9pbmx           1/1       Running   0          1m
virt-api-bcc6b587d-z6t6r           1/1       Running   0          1m
virt-controller-67dcdd8464-68pjr   1/1       Running   0          1m
virt-controller-67dcdd8464-vhg8t   1/1       Running   0          1m
virt-handler-hjk99                 1/1       Running   0          1m
virt-handler-ncdgb                 1/1       Running   0          1m
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n default --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n default --no-headers
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n default
+ cluster/kubectl.sh get pods -n default
NAME                             READY     STATUS    RESTARTS   AGE
local-volume-provisioner-7zpft   1/1       Running   0          17m
local-volume-provisioner-hf4b4   1/1       Running   0          17m
+ kubectl version
+ cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/junit.xml'
+ [[ k8s-1.11.0-release =~ windows.* ]]
+ FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.11.0-release/junit.xml'
+ make functest
hack/dockerized "hack/build-func-tests.sh"
sha256:dcf2b21fa2ed11dcf9dbba21b1cca0ee3fad521a0e9aee61c06d0b0b66a4b200
go version go1.10 linux/amd64
go version go1.10 linux/amd64
Compiling tests...
compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1533557196 Will run 148 of 148 specs •2018/08/06 08:08:15 read closing down: EOF 2018/08/06 08:08:25 read closing down: EOF 2018/08/06 08:08:36 read closing down: EOF 2018/08/06 08:08:46 read closing down: EOF 2018/08/06 08:08:47 read closing down: EOF 2018/08/06 08:08:49 read closing down: EOF 2018/08/06 08:08:50 read closing down: EOF 2018/08/06 08:08:50 read closing down: EOF ------------------------------ • [SLOW TEST:132.512 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 should be able to reach /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 the Inbound VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 2018/08/06 08:08:52 read closing down: EOF •2018/08/06 08:08:52 read closing down: EOF 2018/08/06 08:08:52 read closing down: EOF 2018/08/06 08:08:54 read closing down: EOF 2018/08/06 08:08:55 read closing down: EOF •2018/08/06 08:08:55 read closing down: EOF 2018/08/06 08:08:57 read closing down: EOF 2018/08/06 08:08:57 read closing down: EOF •2018/08/06 08:08:57 read closing down: EOF ------------------------------ • [SLOW TEST:5.272 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on the same node from Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:5.237 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on a different node from Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •••••2018/08/06 08:12:51 read closing down: EOF Pod name: disks-images-provider-5pdcr Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-8bqjc Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-9pbmx Pod phase: Running 2018/08/06 12:10:25 http: TLS handshake error from 10.244.0.1:55618: EOF 2018/08/06 12:10:35 http: TLS handshake error from 10.244.0.1:55678: EOF 2018/08/06 12:10:45 http: TLS handshake error from 10.244.0.1:55738: EOF 2018/08/06 12:10:55 http: TLS handshake error from 10.244.0.1:55798: EOF 2018/08/06 12:11:05 http: TLS handshake error from 10.244.0.1:55858: EOF 2018/08/06 12:11:15 http: TLS handshake error from 10.244.0.1:55918: EOF 2018/08/06 12:11:25 http: TLS handshake error from 10.244.0.1:55978: EOF 2018/08/06 12:11:35 http: TLS handshake error from 10.244.0.1:56038: EOF 2018/08/06 12:11:45 http: TLS handshake error from 10.244.0.1:56098: EOF 2018/08/06 12:11:55 http: TLS handshake error from 10.244.0.1:56158: EOF 2018/08/06 12:12:05 http: TLS handshake error from 10.244.0.1:56218: EOF 2018/08/06 12:12:15 http: TLS handshake error from 10.244.0.1:56278: EOF 2018/08/06 12:12:25 http: TLS handshake error from 10.244.0.1:56338: EOF 2018/08/06 12:12:35 http: TLS handshake error from 10.244.0.1:56398: EOF 2018/08/06 12:12:45 http: TLS handshake error from 10.244.0.1:56458: EOF Pod name: 
Pod name: virt-api-bcc6b587d-z6t6r
Pod phase: Running
2018/08/06 12:10:42 http: TLS handshake error from 10.244.1.1:42980: EOF
2018/08/06 12:10:52 http: TLS handshake error from 10.244.1.1:42986: EOF
2018/08/06 12:11:02 http: TLS handshake error from 10.244.1.1:42992: EOF
2018/08/06 12:11:12 http: TLS handshake error from 10.244.1.1:42998: EOF
2018/08/06 12:11:22 http: TLS handshake error from 10.244.1.1:43004: EOF
2018/08/06 12:11:32 http: TLS handshake error from 10.244.1.1:43010: EOF
2018/08/06 12:11:42 http: TLS handshake error from 10.244.1.1:43016: EOF
2018/08/06 12:11:52 http: TLS handshake error from 10.244.1.1:43022: EOF
2018/08/06 12:12:02 http: TLS handshake error from 10.244.1.1:43028: EOF
2018/08/06 12:12:12 http: TLS handshake error from 10.244.1.1:43034: EOF
2018/08/06 12:12:22 http: TLS handshake error from 10.244.1.1:43040: EOF
2018/08/06 12:12:32 http: TLS handshake error from 10.244.1.1:43046: EOF
2018/08/06 12:12:42 http: TLS handshake error from 10.244.1.1:43052: EOF
Pod name: virt-controller-67dcdd8464-68pjr
Pod phase: Running
level=info timestamp=2018-08-06T12:04:46.713705Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
level=info timestamp=2018-08-06T12:10:16.213539Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmirsInformer"
level=info timestamp=2018-08-06T12:10:16.217157Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubevirtconfigs"
level=info timestamp=2018-08-06T12:10:16.217265Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmInformer"
level=info timestamp=2018-08-06T12:10:16.217381Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer limitrangeInformer"
level=info timestamp=2018-08-06T12:10:16.217463Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiInformer"
level=info timestamp=2018-08-06T12:10:16.217505Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer"
level=info timestamp=2018-08-06T12:10:16.219271Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer"
level=info timestamp=2018-08-06T12:10:16.220234Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiPresetInformer"
level=info timestamp=2018-08-06T12:10:16.221438Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller."
level=info timestamp=2018-08-06T12:10:16.243423Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller."
level=info timestamp=2018-08-06T12:10:16.250160Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller."
level=info timestamp=2018-08-06T12:10:16.250471Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller."
level=info timestamp=2018-08-06T12:10:16.251701Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer."
Pod name: virt-controller-67dcdd8464-vhg8t
Pod phase: Running
level=info timestamp=2018-08-06T12:10:32.868380Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
Pod name: virt-handler-hjk99
Pod phase: Running
level=info timestamp=2018-08-06T12:04:44.019220Z pos=virt-handler.go:89 component=virt-handler hostname=node01
level=info timestamp=2018-08-06T12:04:44.098096Z pos=virtinformers.go:107 component=virt-handler msg="STARTING informer kubevirtconfigs"
level=info timestamp=2018-08-06T12:04:44.299298Z pos=vm.go:208 component=virt-handler msg="Starting virt-handler controller."
level=info timestamp=2018-08-06T12:04:44.302724Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains"
Pod name: virt-handler-ncdgb
Pod phase: Running
level=info timestamp=2018-08-06T12:10:33.525720Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-08-06T12:10:33.525761Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibhxqp kind= uid=2bf05792-9971-11e8-99c1-525500d15501 msg="No update processing required"
level=info timestamp=2018-08-06T12:10:33.525805Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmibhxqp kind= uid=2bf05792-9971-11e8-99c1-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-08-06T12:10:33.543741Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmim4bp4 kind=VirtualMachineInstance uid=2be6930e-9971-11e8-99c1-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-08-06T12:10:33.543926Z pos=vm.go:307 component=virt-handler msg="Processing vmi testvmim4bp4, existing: true\n"
level=info timestamp=2018-08-06T12:10:33.543978Z pos=vm.go:309 component=virt-handler msg="vmi is in phase: Failed\n"
level=info timestamp=2018-08-06T12:10:33.544040Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-08-06T12:10:33.544254Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmim4bp4 kind= uid=2be6930e-9971-11e8-99c1-525500d15501 msg="No update processing required"
level=info timestamp=2018-08-06T12:10:33.544366Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmim4bp4 kind= uid=2be6930e-9971-11e8-99c1-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-08-06T12:10:33.544976Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmiz64vc kind=VirtualMachineInstance uid=8f7d83aa-9971-11e8-99c1-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-08-06T12:10:33.545065Z pos=vm.go:307 component=virt-handler msg="Processing vmi testvmiz64vc, existing: true\n"
level=info timestamp=2018-08-06T12:10:33.545167Z pos=vm.go:309 component=virt-handler msg="vmi is in phase: Failed\n"
level=info timestamp=2018-08-06T12:10:33.545229Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-08-06T12:10:33.545317Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiz64vc kind= uid=8f7d83aa-9971-11e8-99c1-525500d15501 msg="No update processing required"
level=info timestamp=2018-08-06T12:10:33.545414Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmiz64vc kind= uid=8f7d83aa-9971-11e8-99c1-525500d15501 msg="Synchronization loop succeeded."
Pod name: netcat4brxw
Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.7 1500 -i 1 -w 1
+ x='Hello World!'
+ echo 'Hello World!'
Hello World!
succeeded
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Pod name: netcatflp84
Pod phase: Succeeded
++ head -n 1
+++ nc my-subdomain.myvmi.kubevirt-test-default 1500 -i 1 -w 1
Hello World!
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
succeeded
Pod name: netcatg67nr
Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.7 1500 -i 1 -w 1
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Hello World!
succeeded
Pod name: netcatkbwn2
Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.7 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Pod name: netcatpj8r2
Pod phase: Failed
++ head -n 1
+++ nc wrongservice.kubevirt-test-default 1500 -i 1 -w 1
Ncat: Could not resolve hostname "wrongservice.kubevirt-test-default": Name or service not known. QUITTING.
+ x=
+ echo ''
+ '[' '' = 'Hello World!' ']'
+ echo failed
+ exit 1
failed
Pod name: netcatz66xg
Pod phase: Succeeded
++ head -n 1
+++ nc myservice.kubevirt-test-default 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Pod name: netcatzhvj2
Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.7 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Pod name: virt-launcher-testvmi8c7gk-smpxk
Pod phase: Failed
level=info timestamp=2018-08-06T12:07:13.096622Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-08-06T12:07:13.765963Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-08-06T12:07:13.849183Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:13.957717Z pos=virt-launcher.go:184 component=virt-launcher msg="Detected domain with UUID 1789e006-c3de-4ff2-a983-0d44160adcc7"
level=info timestamp=2018-08-06T12:07:13.958058Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-08-06T12:07:14.108257Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-08-06T12:07:14.208185Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:14.229329Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:14.229559Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-08-06T12:07:14.281665Z pos=manager.go:292 component=virt-launcher namespace=kubevirt-test-default name=testvmi8c7gk kind= uid=2bed178b-9971-11e8-99c1-525500d15501 msg="Domain started."
level=info timestamp=2018-08-06T12:07:14.284938Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi8c7gk kind= uid=2bed178b-9971-11e8-99c1-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T12:07:14.286079Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:14.289098Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:14.978291Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi8c7gk kind= uid=2bed178b-9971-11e8-99c1-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T12:07:14.980571Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 1789e006-c3de-4ff2-a983-0d44160adcc7: 225"
Pod name: virt-launcher-testvmibhxqp-kdmmb
Pod phase: Failed
level=info timestamp=2018-08-06T12:07:39.045392Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-08-06T12:07:39.078047Z pos=virt-launcher.go:184 component=virt-launcher msg="Detected domain with UUID df1800ae-57ef-462a-abb4-2a9aa34a9802"
level=info timestamp=2018-08-06T12:07:39.078415Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-08-06T12:07:39.116518Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:39.855452Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-08-06T12:07:39.891003Z pos=manager.go:292 component=virt-launcher namespace=kubevirt-test-default name=testvmibhxqp kind= uid=2bf05792-9971-11e8-99c1-525500d15501 msg="Domain started."
level=info timestamp=2018-08-06T12:07:40.046377Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmibhxqp kind= uid=2bf05792-9971-11e8-99c1-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T12:07:40.056441Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:40.077039Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:40.077365Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-08-06T12:07:40.202615Z pos=monitor.go:222 component=virt-launcher msg="Found PID for df1800ae-57ef-462a-abb4-2a9aa34a9802: 299"
level=info timestamp=2018-08-06T12:07:40.240341Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:40.242081Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmibhxqp kind= uid=2bf05792-9971-11e8-99c1-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T12:07:40.244753Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:40.256140Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmibhxqp kind= uid=2bf05792-9971-11e8-99c1-525500d15501 msg="Synced vmi"
Pod name: virt-launcher-testvmim4bp4-rnb9w
Pod phase: Failed
level=info timestamp=2018-08-06T12:07:14.638019Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-08-06T12:07:16.016726Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-08-06T12:07:16.026885Z pos=virt-launcher.go:184 component=virt-launcher msg="Detected domain with UUID 5cbe4f6e-2b88-4d64-9327-2e3485920200"
level=info timestamp=2018-08-06T12:07:16.027107Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-08-06T12:07:16.037599Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:17.045300Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-08-06T12:07:17.058420Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 5cbe4f6e-2b88-4d64-9327-2e3485920200: 236"
level=info timestamp=2018-08-06T12:07:17.135503Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:17.150184Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:17.186169Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-08-06T12:07:17.257797Z pos=manager.go:292 component=virt-launcher namespace=kubevirt-test-default name=testvmim4bp4 kind= uid=2be6930e-9971-11e8-99c1-525500d15501 msg="Domain started."
Pod name: virt-launcher-testvmipmjsf-xxdp5
Pod phase: Failed
level=info timestamp=2018-08-06T12:07:13.018343Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-08-06T12:07:14.034855Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-08-06T12:07:14.064397Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:14.074077Z pos=virt-launcher.go:184 component=virt-launcher msg="Detected domain with UUID 281dac1d-d12d-4960-9cb6-0199e526dca0"
level=info timestamp=2018-08-06T12:07:14.074477Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-08-06T12:07:14.704004Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-08-06T12:07:14.789191Z pos=manager.go:292 component=virt-launcher namespace=kubevirt-test-default name=testvmipmjsf kind= uid=2be9f408-9971-11e8-99c1-525500d15501 msg="Domain started."
level=info timestamp=2018-08-06T12:07:14.790221Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:14.800440Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:14.800709Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-08-06T12:07:14.811729Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmipmjsf kind= uid=2be9f408-9971-11e8-99c1-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T12:07:14.867479Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:14.880743Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:15.038323Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmipmjsf kind= uid=2be9f408-9971-11e8-99c1-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T12:07:15.096857Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 281dac1d-d12d-4960-9cb6-0199e526dca0: 226"
Pod name: virt-launcher-testvmiz64vc-lfhh2
Pod phase: Running
level=info timestamp=2018-08-06T12:11:35.768002Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-08-06T12:11:35.768144Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-08-06T12:11:35.823457Z pos=virt-launcher.go:113 component=virt-launcher msg="Watchdog file created at /var/run/kubevirt/watchdog-files/kubevirt-test-default_testvmiz64vc"
level=info timestamp=2018-08-06T12:11:35.823888Z pos=virt-launcher.go:59 component=virt-launcher msg="Marked as ready"
------------------------------
• Failure [208.368 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom interface model
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:368
    should expose the right device type to the guest [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:369

    Expected error:
        : 180000000000
        expect: timer expired after 180 seconds
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:376
------------------------------
STEP: checking the device vendor in /sys/class
level=info timestamp=2018-08-06T12:09:26.128059Z pos=utils.go:244 component=tests namespace=kubevirt-test-default name=testvmiz64vc kind=VirtualMachineInstance uid=8f7d83aa-9971-11e8-99c1-525500d15501 msg="Created virtual machine pod virt-launcher-testvmiz64vc-lfhh2"
level=info timestamp=2018-08-06T12:09:39.896270Z pos=utils.go:244 component=tests namespace=kubevirt-test-default name=testvmiz64vc kind=VirtualMachineInstance uid=8f7d83aa-9971-11e8-99c1-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmiz64vc-lfhh2"
level=info timestamp=2018-08-06T12:09:51.530005Z pos=utils.go:244 component=tests namespace=kubevirt-test-default name=testvmiz64vc kind=VirtualMachineInstance uid=8f7d83aa-9971-11e8-99c1-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-08-06T12:09:51.577359Z pos=utils.go:244 component=tests namespace=kubevirt-test-default name=testvmiz64vc kind=VirtualMachineInstance uid=8f7d83aa-9971-11e8-99c1-525500d15501 msg="VirtualMachineInstance started."
level=info timestamp=2018-08-06T12:12:51.908325Z pos=utils.go:1250 component=tests namespace=kubevirt-test-default name=testvmiz64vc kind=VirtualMachineInstance uid=8f7d83aa-9971-11e8-99c1-525500d15501 msg="Login: [{2 \r\n\r\n\r\nISOLINUX 6.04 6.04-pre1 Copyright (C) 1994-2015 H. Peter Anvin et al\r\nboot: \u001b[?7h\r\n []}]"
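The netcat* pods listed in the cluster-state dump above all trace the same probe: read one line from the target, compare it to the expected banner, and exit accordingly. Reconstructed as a script from those set -x traces (the TARGET variable name is introduced here for illustration; the traces inline the address directly):

    # Probe reconstructed from the netcat pod traces above; the target varies
    # per case (pod IP, service name, or a deliberately wrong name that must fail).
    TARGET=10.244.1.7   # example target taken from the traces
    x=$(nc "$TARGET" 1500 -i 1 -w 1 | head -n 1)
    echo "$x"
    if [ "$x" = 'Hello World!' ]; then
      echo succeeded
      exit 0
    else
      echo failed
      exit 1
    fi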
• Failure [2.524 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with default interface model
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:382
    should expose the right device type to the guest [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:383

    Expected error:
        <*kubecli.AsyncSubresourceError | 0xc420579820>: {
            err: "Can't connect to websocket (503): service unavailable\n\n",
            StatusCode: 503,
        }
        Can't connect to websocket (503): service unavailable
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:365
------------------------------
STEP: checking the device vendor in /sys/class
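This step reads the NIC's PCI vendor ID inside the guest. The exact commands the test sends over the serial console are not shown in this log, but the check amounts to something like the following sketch:

    # Inside the guest: a virtio NIC exposes PCI vendor 0x1af4 (Red Hat),
    # whereas e.g. an e1000 interface model would expose 0x8086 (Intel).
    vendor=$(cat /sys/class/net/eth0/device/vendor)
    echo "eth0 vendor: $vendor"
    [ "$vendor" = '0x1af4' ]   # succeeds for the virtio interface model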
• Failure [32.452 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom MAC address
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:402
    should configure custom MAC address [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:403

    Expected error:
        <*errors.StatusError | 0xc420b501b0>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:407
------------------------------
STEP: checking eth0 MAC address

Pod name: disks-images-provider-5pdcr
Pod phase: Running
copy all images to host mount directory

Pod name: disks-images-provider-8bqjc
Pod phase: Running
copy all images to host mount directory

Pod name: virt-api-bcc6b587d-9pbmx
Pod phase: Running
2018/08/06 12:11:35 http: TLS handshake error from 10.244.0.1:56038: EOF
2018/08/06 12:11:45 http: TLS handshake error from 10.244.0.1:56098: EOF
2018/08/06 12:11:55 http: TLS handshake error from 10.244.0.1:56158: EOF
2018/08/06 12:12:05 http: TLS handshake error from 10.244.0.1:56218: EOF
2018/08/06 12:12:15 http: TLS handshake error from 10.244.0.1:56278: EOF
2018/08/06 12:12:25 http: TLS handshake error from 10.244.0.1:56338: EOF
2018/08/06 12:12:35 http: TLS handshake error from 10.244.0.1:56398: EOF
2018/08/06 12:12:45 http: TLS handshake error from 10.244.0.1:56458: EOF
2018/08/06 12:12:55 http: TLS handshake error from 10.244.0.1:56520: EOF
2018/08/06 12:13:05 http: TLS handshake error from 10.244.0.1:56580: EOF
2018/08/06 12:13:15 http: TLS handshake error from 10.244.0.1:56640: EOF
2018/08/06 12:13:25 http: TLS handshake error from 10.244.0.1:56700: EOF
2018/08/06 12:13:35 http: TLS handshake error from 10.244.0.1:56760: EOF
2018/08/06 12:13:45 http: TLS handshake error from 10.244.0.1:56820: EOF
2018/08/06 12:13:55 http: TLS handshake error from 10.244.0.1:56880: EOF

Pod name: virt-api-bcc6b587d-z6t6r
Pod phase: Running
2018/08/06 12:11:32 http: TLS handshake error from 10.244.1.1:43010: EOF
2018/08/06 12:11:42 http: TLS handshake error from 10.244.1.1:43016: EOF
2018/08/06 12:11:52 http: TLS handshake error from 10.244.1.1:43022: EOF
2018/08/06 12:12:02 http: TLS handshake error from 10.244.1.1:43028: EOF
2018/08/06 12:12:12 http: TLS handshake error from 10.244.1.1:43034: EOF
2018/08/06 12:12:22 http: TLS handshake error from 10.244.1.1:43040: EOF
2018/08/06 12:12:32 http: TLS handshake error from 10.244.1.1:43046: EOF
2018/08/06 12:12:42 http: TLS handshake error from 10.244.1.1:43052: EOF
2018/08/06 12:12:52 http: TLS handshake error from 10.244.1.1:43058: EOF
2018/08/06 12:13:02 http: TLS handshake error from 10.244.1.1:43064: EOF
2018/08/06 12:13:12 http: TLS handshake error from 10.244.1.1:43070: EOF
2018/08/06 12:13:22 http: TLS handshake error from 10.244.1.1:43076: EOF
2018/08/06 12:13:32 http: TLS handshake error from 10.244.1.1:43082: EOF
2018/08/06 12:13:42 http: TLS handshake error from 10.244.1.1:43088: EOF
2018/08/06 12:13:52 http: TLS handshake error from 10.244.1.1:43094: EOF

Pod name: virt-controller-67dcdd8464-68pjr
Pod phase: Running
level=info timestamp=2018-08-06T12:04:46.713705Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
level=info timestamp=2018-08-06T12:10:16.213539Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmirsInformer"
level=info timestamp=2018-08-06T12:10:16.217157Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubevirtconfigs"
level=info timestamp=2018-08-06T12:10:16.217265Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmInformer"
level=info timestamp=2018-08-06T12:10:16.217381Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer limitrangeInformer"
level=info timestamp=2018-08-06T12:10:16.217463Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiInformer"
level=info timestamp=2018-08-06T12:10:16.217505Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer"
level=info timestamp=2018-08-06T12:10:16.219271Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer"
level=info timestamp=2018-08-06T12:10:16.220234Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiPresetInformer"
level=info timestamp=2018-08-06T12:10:16.221438Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller."
level=info timestamp=2018-08-06T12:10:16.243423Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller."
level=info timestamp=2018-08-06T12:10:16.250160Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller."
level=info timestamp=2018-08-06T12:10:16.250471Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller."
level=info timestamp=2018-08-06T12:10:16.251701Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer."

Pod name: virt-controller-67dcdd8464-vhg8t
Pod phase: Running
level=info timestamp=2018-08-06T12:10:32.868380Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182

Pod name: virt-handler-hjk99
Pod phase: Running
level=info timestamp=2018-08-06T12:04:44.019220Z pos=virt-handler.go:89 component=virt-handler hostname=node01
level=info timestamp=2018-08-06T12:04:44.098096Z pos=virtinformers.go:107 component=virt-handler msg="STARTING informer kubevirtconfigs"
level=info timestamp=2018-08-06T12:04:44.299298Z pos=vm.go:208 component=virt-handler msg="Starting virt-handler controller."
level=info timestamp=2018-08-06T12:04:44.302724Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains"

Pod name: virt-handler-ncdgb
Pod phase: Running
level=info timestamp=2018-08-06T12:10:33.525720Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-08-06T12:10:33.525761Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibhxqp kind= uid=2bf05792-9971-11e8-99c1-525500d15501 msg="No update processing required"
level=info timestamp=2018-08-06T12:10:33.525805Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmibhxqp kind= uid=2bf05792-9971-11e8-99c1-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-08-06T12:10:33.543741Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmim4bp4 kind=VirtualMachineInstance uid=2be6930e-9971-11e8-99c1-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-08-06T12:10:33.543926Z pos=vm.go:307 component=virt-handler msg="Processing vmi testvmim4bp4, existing: true\n"
level=info timestamp=2018-08-06T12:10:33.543978Z pos=vm.go:309 component=virt-handler msg="vmi is in phase: Failed\n"
level=info timestamp=2018-08-06T12:10:33.544040Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-08-06T12:10:33.544254Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmim4bp4 kind= uid=2be6930e-9971-11e8-99c1-525500d15501 msg="No update processing required"
level=info timestamp=2018-08-06T12:10:33.544366Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmim4bp4 kind= uid=2be6930e-9971-11e8-99c1-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-08-06T12:10:33.544976Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmiz64vc kind=VirtualMachineInstance uid=8f7d83aa-9971-11e8-99c1-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-08-06T12:10:33.545065Z pos=vm.go:307 component=virt-handler msg="Processing vmi testvmiz64vc, existing: true\n"
level=info timestamp=2018-08-06T12:10:33.545167Z pos=vm.go:309 component=virt-handler msg="vmi is in phase: Failed\n"
level=info timestamp=2018-08-06T12:10:33.545229Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n"
level=info timestamp=2018-08-06T12:10:33.545317Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiz64vc kind= uid=8f7d83aa-9971-11e8-99c1-525500d15501 msg="No update processing required"
level=info timestamp=2018-08-06T12:10:33.545414Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmiz64vc kind= uid=8f7d83aa-9971-11e8-99c1-525500d15501 msg="Synchronization loop succeeded."

Pod name: netcat4brxw
Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.7 1500 -i 1 -w 1
+ x='Hello World!'
+ echo 'Hello World!'
Hello World!
succeeded
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0

Pod name: netcatflp84
Pod phase: Succeeded
++ head -n 1
+++ nc my-subdomain.myvmi.kubevirt-test-default 1500 -i 1 -w 1
Hello World!
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
succeeded

Pod name: netcatg67nr
Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.7 1500 -i 1 -w 1
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Hello World!
succeeded

Pod name: netcatkbwn2
Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.7 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0

Pod name: netcatpj8r2
Pod phase: Failed
++ head -n 1
+++ nc wrongservice.kubevirt-test-default 1500 -i 1 -w 1
Ncat: Could not resolve hostname "wrongservice.kubevirt-test-default": Name or service not known. QUITTING.
+ x=
+ echo ''
+ '[' '' = 'Hello World!' ']'
+ echo failed
+ exit 1
failed

Pod name: netcatz66xg
Pod phase: Succeeded
++ head -n 1
+++ nc myservice.kubevirt-test-default 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0

Pod name: netcatzhvj2
Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.7 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0

Pod name: virt-launcher-testvmi8c7gk-smpxk
Pod phase: Failed
level=info timestamp=2018-08-06T12:07:13.096622Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-08-06T12:07:13.765963Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-08-06T12:07:13.849183Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:13.957717Z pos=virt-launcher.go:184 component=virt-launcher msg="Detected domain with UUID 1789e006-c3de-4ff2-a983-0d44160adcc7"
level=info timestamp=2018-08-06T12:07:13.958058Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-08-06T12:07:14.108257Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-08-06T12:07:14.208185Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:14.229329Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:14.229559Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-08-06T12:07:14.281665Z pos=manager.go:292 component=virt-launcher namespace=kubevirt-test-default name=testvmi8c7gk kind= uid=2bed178b-9971-11e8-99c1-525500d15501 msg="Domain started."
level=info timestamp=2018-08-06T12:07:14.284938Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi8c7gk kind= uid=2bed178b-9971-11e8-99c1-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T12:07:14.286079Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:14.289098Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:14.978291Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi8c7gk kind= uid=2bed178b-9971-11e8-99c1-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T12:07:14.980571Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 1789e006-c3de-4ff2-a983-0d44160adcc7: 225"

Pod name: virt-launcher-testvmibhxqp-kdmmb
Pod phase: Failed
level=info timestamp=2018-08-06T12:07:39.045392Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-08-06T12:07:39.078047Z pos=virt-launcher.go:184 component=virt-launcher msg="Detected domain with UUID df1800ae-57ef-462a-abb4-2a9aa34a9802"
level=info timestamp=2018-08-06T12:07:39.078415Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-08-06T12:07:39.116518Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:39.855452Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-08-06T12:07:39.891003Z pos=manager.go:292 component=virt-launcher namespace=kubevirt-test-default name=testvmibhxqp kind= uid=2bf05792-9971-11e8-99c1-525500d15501 msg="Domain started."
[remainder of the virt-launcher pod logs elided: identical to the virt-launcher-testvmibhxqp, -testvmim4bp4, -testvmipmjsf and -testvmiz64vc entries shown at the top of this section]
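The netcat pods above all run the same connectivity probe, recoverable from their xtrace output: pipe the first line returned by nc into a comparison against "Hello World!". For illustration only, here is a minimal Go equivalent of that check; the target address and expected banner come from the logs, everything else (file name, timeout values) is an assumption, and the real pods run the shell one-liner, not this program:

```go
// probe.go - sketch of the check the netcat pods perform:
//   x=$(nc 10.244.1.7 1500 -i 1 -w 1 | head -n 1)
//   [ "$x" = 'Hello World!' ] && echo succeeded || { echo failed; exit 1; }
package main

import (
	"bufio"
	"fmt"
	"net"
	"os"
	"strings"
	"time"
)

func main() {
	// Dial the VMI's service endpoint (address taken from the logs above).
	conn, err := net.DialTimeout("tcp", "10.244.1.7:1500", 5*time.Second)
	if err != nil {
		fmt.Println("failed")
		os.Exit(1)
	}
	defer conn.Close()

	// Roughly mirror nc's -w read timeout.
	conn.SetReadDeadline(time.Now().Add(5 * time.Second))

	// head -n 1: only the first line of the banner matters.
	line, _ := bufio.NewReader(conn).ReadString('\n')
	if strings.TrimSpace(line) == "Hello World!" {
		fmt.Println("succeeded")
		return
	}
	fmt.Println("failed")
	os.Exit(1)
}
```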
• Failure [32.448 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom MAC address in non-conventional format
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:414
    should configure custom MAC address [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:415

    Expected error:
        <*errors.StatusError | 0xc420153710>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:420
------------------------------
STEP: checking eth0 MAC address

[pod log dump elided: identical to the dump under the first failure, apart from the virt-api TLS handshake errors continuing at ~10s intervals through 12:14:25]
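Every failure in this section carries the same *errors.StatusError: Status=Failure, Reason=Timeout, Code=504, so the apiserver gave up on a request the test made rather than the test finding a wrong MAC. A small runnable sketch, assuming k8s.io/apimachinery is vendored, showing that this is exactly the error shape that package constructs and how callers can classify it without string matching:

```go
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

func main() {
	// apimachinery builds precisely the shape seen in the failure output:
	// Status=Failure, Reason=Timeout, Code=504, RetryAfterSeconds=0,
	// message prefixed with "Timeout: ".
	err := apierrors.NewTimeoutError("request did not complete within allowed duration", 0)
	fmt.Println(err) // Timeout: request did not complete within allowed duration

	// Classify by reason rather than by message text.
	fmt.Println(apierrors.IsTimeout(err)) // true
}
```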
• Failure [32.458 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom MAC address and slirp interface
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:427
    should configure custom MAC address [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:428

    Expected error:
        <*errors.StatusError | 0xc420b50b40>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                Status: "Failure",
                Message: "Timeout: request did not complete within allowed duration",
                Reason: "Timeout",
                Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                Code: 504,
            },
        }
        Timeout: request did not complete within allowed duration
    not to have occurred

    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:433
------------------------------
STEP: checking eth0 MAC address

[pod log dump elided: identical to the dump under the first failure, apart from the virt-api TLS handshake errors continuing through 12:15:02]
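Throughout the run, both virt-api pods log "http: TLS handshake error from ...: EOF" every ~10 seconds. That exact line is what Go's http.Server prints when a client opens a TCP connection to a TLS port and closes it without sending a ClientHello, which is consistent with periodic TCP-connect health probes rather than with these test failures; the log alone does not identify the client, so treat the probe explanation as an assumption. A self-contained sketch that reproduces the log line:

```go
package main

import (
	"fmt"
	"net"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// A TLS server, standing in for virt-api's HTTPS endpoint.
	srv := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	}))
	defer srv.Close()

	// Open a raw TCP connection and hang up without speaking TLS,
	// the way a TCP-connect health probe does.
	conn, err := net.Dial("tcp", srv.Listener.Addr().String())
	if err != nil {
		panic(err)
	}
	conn.Close()

	// The server now logs: http: TLS handshake error from 127.0.0.1:NNNNN: EOF
	time.Sleep(200 * time.Millisecond)
}
```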
timestamp=2018-08-06T12:04:46.713705Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 level=info timestamp=2018-08-06T12:10:16.213539Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmirsInformer" level=info timestamp=2018-08-06T12:10:16.217157Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubevirtconfigs" level=info timestamp=2018-08-06T12:10:16.217265Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmInformer" level=info timestamp=2018-08-06T12:10:16.217381Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer limitrangeInformer" level=info timestamp=2018-08-06T12:10:16.217463Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiInformer" level=info timestamp=2018-08-06T12:10:16.217505Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer" level=info timestamp=2018-08-06T12:10:16.219271Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer" level=info timestamp=2018-08-06T12:10:16.220234Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiPresetInformer" level=info timestamp=2018-08-06T12:10:16.221438Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller." level=info timestamp=2018-08-06T12:10:16.243423Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." level=info timestamp=2018-08-06T12:10:16.250160Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-08-06T12:10:16.250471Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-08-06T12:10:16.251701Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer." Pod name: virt-controller-67dcdd8464-vhg8t Pod phase: Running level=info timestamp=2018-08-06T12:10:32.868380Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-hjk99 Pod phase: Running level=info timestamp=2018-08-06T12:04:44.019220Z pos=virt-handler.go:89 component=virt-handler hostname=node01 level=info timestamp=2018-08-06T12:04:44.098096Z pos=virtinformers.go:107 component=virt-handler msg="STARTING informer kubevirtconfigs" level=info timestamp=2018-08-06T12:04:44.299298Z pos=vm.go:208 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-08-06T12:04:44.302724Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" Pod name: virt-handler-ncdgb Pod phase: Running level=info timestamp=2018-08-06T12:10:33.525720Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T12:10:33.525761Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmibhxqp kind= uid=2bf05792-9971-11e8-99c1-525500d15501 msg="No update processing required" level=info timestamp=2018-08-06T12:10:33.525805Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmibhxqp kind= uid=2bf05792-9971-11e8-99c1-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-08-06T12:10:33.543741Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmim4bp4 kind=VirtualMachineInstance uid=2be6930e-9971-11e8-99c1-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T12:10:33.543926Z pos=vm.go:307 component=virt-handler msg="Processing vmi testvmim4bp4, existing: true\n" level=info timestamp=2018-08-06T12:10:33.543978Z pos=vm.go:309 component=virt-handler msg="vmi is in phase: Failed\n" level=info timestamp=2018-08-06T12:10:33.544040Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T12:10:33.544254Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmim4bp4 kind= uid=2be6930e-9971-11e8-99c1-525500d15501 msg="No update processing required" level=info timestamp=2018-08-06T12:10:33.544366Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmim4bp4 kind= uid=2be6930e-9971-11e8-99c1-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T12:10:33.544976Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmiz64vc kind=VirtualMachineInstance uid=8f7d83aa-9971-11e8-99c1-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T12:10:33.545065Z pos=vm.go:307 component=virt-handler msg="Processing vmi testvmiz64vc, existing: true\n" level=info timestamp=2018-08-06T12:10:33.545167Z pos=vm.go:309 component=virt-handler msg="vmi is in phase: Failed\n" level=info timestamp=2018-08-06T12:10:33.545229Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T12:10:33.545317Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmiz64vc kind= uid=8f7d83aa-9971-11e8-99c1-525500d15501 msg="No update processing required" level=info timestamp=2018-08-06T12:10:33.545414Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmiz64vc kind= uid=8f7d83aa-9971-11e8-99c1-525500d15501 msg="Synchronization loop succeeded." Pod name: netcat4brxw Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.7 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' Hello World! succeeded + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcatflp84 Pod phase: Succeeded ++ head -n 1 +++ nc my-subdomain.myvmi.kubevirt-test-default 1500 -i 1 -w 1 Hello World! + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 succeeded Pod name: netcatg67nr Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.7 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcatkbwn2 Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.7 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcatpj8r2 Pod phase: Failed ++ head -n 1 +++ nc wrongservice.kubevirt-test-default 1500 -i 1 -w 1 Ncat: Could not resolve hostname "wrongservice.kubevirt-test-default": Name or service not known. QUITTING. + x= + echo '' + '[' '' = 'Hello World!' ']' + echo failed + exit 1 failed Pod name: netcatz66xg Pod phase: Succeeded ++ head -n 1 +++ nc myservice.kubevirt-test-default 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' 
']' + echo succeeded + exit 0 Pod name: netcatzhvj2 Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.7 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: virt-launcher-testvmi8c7gk-smpxk Pod phase: Failed level=info timestamp=2018-08-06T12:07:13.096622Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-08-06T12:07:13.765963Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-08-06T12:07:13.849183Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T12:07:13.957717Z pos=virt-launcher.go:184 component=virt-launcher msg="Detected domain with UUID 1789e006-c3de-4ff2-a983-0d44160adcc7" level=info timestamp=2018-08-06T12:07:13.958058Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-08-06T12:07:14.108257Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-08-06T12:07:14.208185Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-06T12:07:14.229329Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T12:07:14.229559Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-08-06T12:07:14.281665Z pos=manager.go:292 component=virt-launcher namespace=kubevirt-test-default name=testvmi8c7gk kind= uid=2bed178b-9971-11e8-99c1-525500d15501 msg="Domain started." level=info timestamp=2018-08-06T12:07:14.284938Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi8c7gk kind= uid=2bed178b-9971-11e8-99c1-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T12:07:14.286079Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-08-06T12:07:14.289098Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T12:07:14.978291Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi8c7gk kind= uid=2bed178b-9971-11e8-99c1-525500d15501 msg="Synced vmi" level=info timestamp=2018-08-06T12:07:14.980571Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 1789e006-c3de-4ff2-a983-0d44160adcc7: 225" Pod name: virt-launcher-testvmibhxqp-kdmmb Pod phase: Failed level=info timestamp=2018-08-06T12:07:39.045392Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-08-06T12:07:39.078047Z pos=virt-launcher.go:184 component=virt-launcher msg="Detected domain with UUID df1800ae-57ef-462a-abb4-2a9aa34a9802" level=info timestamp=2018-08-06T12:07:39.078415Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-08-06T12:07:39.116518Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-08-06T12:07:39.855452Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-08-06T12:07:39.891003Z pos=manager.go:292 component=virt-launcher namespace=kubevirt-test-default name=testvmibhxqp kind= uid=2bf05792-9971-11e8-99c1-525500d15501 msg="Domain started." 
Pod name: virt-launcher-testvmi8c7gk-smpxk Pod phase: Failed
level=info timestamp=2018-08-06T12:07:13.096622Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-08-06T12:07:13.765963Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-08-06T12:07:13.849183Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:13.957717Z pos=virt-launcher.go:184 component=virt-launcher msg="Detected domain with UUID 1789e006-c3de-4ff2-a983-0d44160adcc7"
level=info timestamp=2018-08-06T12:07:13.958058Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-08-06T12:07:14.108257Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-08-06T12:07:14.208185Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:14.229329Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:14.229559Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-08-06T12:07:14.281665Z pos=manager.go:292 component=virt-launcher namespace=kubevirt-test-default name=testvmi8c7gk kind= uid=2bed178b-9971-11e8-99c1-525500d15501 msg="Domain started."
level=info timestamp=2018-08-06T12:07:14.284938Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi8c7gk kind= uid=2bed178b-9971-11e8-99c1-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T12:07:14.286079Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:14.289098Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:14.978291Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi8c7gk kind= uid=2bed178b-9971-11e8-99c1-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T12:07:14.980571Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 1789e006-c3de-4ff2-a983-0d44160adcc7: 225"
Pod name: virt-launcher-testvmibhxqp-kdmmb Pod phase: Failed
level=info timestamp=2018-08-06T12:07:39.045392Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-08-06T12:07:39.078047Z pos=virt-launcher.go:184 component=virt-launcher msg="Detected domain with UUID df1800ae-57ef-462a-abb4-2a9aa34a9802"
level=info timestamp=2018-08-06T12:07:39.078415Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-08-06T12:07:39.116518Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:39.855452Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-08-06T12:07:39.891003Z pos=manager.go:292 component=virt-launcher namespace=kubevirt-test-default name=testvmibhxqp kind= uid=2bf05792-9971-11e8-99c1-525500d15501 msg="Domain started."
level=info timestamp=2018-08-06T12:07:40.046377Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmibhxqp kind= uid=2bf05792-9971-11e8-99c1-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T12:07:40.056441Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:40.077039Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:40.077365Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-08-06T12:07:40.202615Z pos=monitor.go:222 component=virt-launcher msg="Found PID for df1800ae-57ef-462a-abb4-2a9aa34a9802: 299"
level=info timestamp=2018-08-06T12:07:40.240341Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:40.242081Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmibhxqp kind= uid=2bf05792-9971-11e8-99c1-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T12:07:40.244753Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:40.256140Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmibhxqp kind= uid=2bf05792-9971-11e8-99c1-525500d15501 msg="Synced vmi"
Pod name: virt-launcher-testvmim4bp4-rnb9w Pod phase: Failed
level=info timestamp=2018-08-06T12:07:14.638019Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-08-06T12:07:16.016726Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-08-06T12:07:16.026885Z pos=virt-launcher.go:184 component=virt-launcher msg="Detected domain with UUID 5cbe4f6e-2b88-4d64-9327-2e3485920200"
level=info timestamp=2018-08-06T12:07:16.027107Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-08-06T12:07:16.037599Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:17.045300Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-08-06T12:07:17.058420Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 5cbe4f6e-2b88-4d64-9327-2e3485920200: 236"
level=info timestamp=2018-08-06T12:07:17.135503Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:17.150184Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:17.186169Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-08-06T12:07:17.257797Z pos=manager.go:292 component=virt-launcher namespace=kubevirt-test-default name=testvmim4bp4 kind= uid=2be6930e-9971-11e8-99c1-525500d15501 msg="Domain started."
level=info timestamp=2018-08-06T12:07:17.276045Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmim4bp4 kind= uid=2be6930e-9971-11e8-99c1-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T12:07:17.277200Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:17.283082Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:17.366301Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmim4bp4 kind= uid=2be6930e-9971-11e8-99c1-525500d15501 msg="Synced vmi"
Pod name: virt-launcher-testvmipmjsf-xxdp5 Pod phase: Failed
level=info timestamp=2018-08-06T12:07:13.018343Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-08-06T12:07:14.034855Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-08-06T12:07:14.064397Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:14.074077Z pos=virt-launcher.go:184 component=virt-launcher msg="Detected domain with UUID 281dac1d-d12d-4960-9cb6-0199e526dca0"
level=info timestamp=2018-08-06T12:07:14.074477Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-08-06T12:07:14.704004Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-08-06T12:07:14.789191Z pos=manager.go:292 component=virt-launcher namespace=kubevirt-test-default name=testvmipmjsf kind= uid=2be9f408-9971-11e8-99c1-525500d15501 msg="Domain started."
level=info timestamp=2018-08-06T12:07:14.790221Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:14.800440Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:14.800709Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-08-06T12:07:14.811729Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmipmjsf kind= uid=2be9f408-9971-11e8-99c1-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T12:07:14.867479Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-08-06T12:07:14.880743Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-08-06T12:07:15.038323Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmipmjsf kind= uid=2be9f408-9971-11e8-99c1-525500d15501 msg="Synced vmi"
level=info timestamp=2018-08-06T12:07:15.096857Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 281dac1d-d12d-4960-9cb6-0199e526dca0: 226"
Pod name: virt-launcher-testvmiz64vc-lfhh2 Pod phase: Running
level=info timestamp=2018-08-06T12:11:35.768002Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-08-06T12:11:35.768144Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-08-06T12:11:35.823457Z pos=virt-launcher.go:113 component=virt-launcher msg="Watchdog file created at /var/run/kubevirt/watchdog-files/kubevirt-test-default_testvmiz64vc"
level=info timestamp=2018-08-06T12:11:35.823888Z pos=virt-launcher.go:59 component=virt-launcher msg="Marked as ready"
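The "domain status: X:Y" entries above are raw libvirt state/reason pairs. A small illustrative decoder for the two pairs seen in this run, assuming libvirt's documented virDomainState numbering; the reason names are my assumption from libvirt's per-state reason enums, not something the log itself confirms:

package main

import "fmt"

// Subset of libvirt's virDomainState values (stable, documented numbering).
var states = map[int]string{
	1: "running",
	3: "paused",
}

// Reason codes are per-state; these mappings are assumptions based on
// libvirt's virDomainRunningReason/virDomainPausedReason enums.
var reasons = map[[2]int]string{
	{1, 1}:  "booted",      // VIR_DOMAIN_RUNNING_BOOTED (assumed)
	{3, 11}: "starting up", // VIR_DOMAIN_PAUSED_STARTING_UP (assumed)
}

func decode(state, reason int) string {
	s, ok := states[state]
	if !ok {
		return fmt.Sprintf("unknown state %d:%d", state, reason)
	}
	if r, ok := reasons[[2]int{state, reason}]; ok {
		return fmt.Sprintf("%s (%s)", s, r)
	}
	return fmt.Sprintf("%s (reason %d)", s, reason)
}

func main() {
	fmt.Println(decode(3, 11)) // as logged right after domain creation
	fmt.Println(decode(1, 1))  // as logged once the domain has started
}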
component=virt-launcher msg="Marked as ready" • Failure [32.448 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with disabled automatic attachment of interfaces /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:440 should not configure any external interfaces [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:441 Expected error: <*errors.StatusError | 0xc420350cf0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, } Timeout: request did not complete within allowed duration not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:448 ------------------------------ STEP: checking loopback is the only guest interface Pod name: disks-images-provider-5pdcr Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-8bqjc Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-9pbmx Pod phase: Running 2018/08/06 12:14:15 http: TLS handshake error from 10.244.0.1:57000: EOF 2018/08/06 12:14:25 http: TLS handshake error from 10.244.0.1:57060: EOF 2018/08/06 12:14:35 http: TLS handshake error from 10.244.0.1:57120: EOF 2018/08/06 12:14:45 http: TLS handshake error from 10.244.0.1:57180: EOF 2018/08/06 12:14:55 http: TLS handshake error from 10.244.0.1:57240: EOF 2018/08/06 12:15:05 http: TLS handshake error from 10.244.0.1:57300: EOF 2018/08/06 12:15:15 http: TLS handshake error from 10.244.0.1:57360: EOF 2018/08/06 12:15:25 http: TLS handshake error from 10.244.0.1:57420: EOF 2018/08/06 12:15:35 http: TLS handshake error from 10.244.0.1:57480: EOF 2018/08/06 12:15:45 http: TLS handshake error from 10.244.0.1:57540: EOF 2018/08/06 12:15:55 http: TLS handshake error from 10.244.0.1:57600: EOF 2018/08/06 12:16:05 http: TLS handshake error from 10.244.0.1:57660: EOF 2018/08/06 12:16:15 http: TLS handshake error from 10.244.0.1:57720: EOF 2018/08/06 12:16:25 http: TLS handshake error from 10.244.0.1:57780: EOF 2018/08/06 12:16:35 http: TLS handshake error from 10.244.0.1:57840: EOF Pod name: virt-api-bcc6b587d-z6t6r Pod phase: Running 2018/08/06 12:14:12 http: TLS handshake error from 10.244.1.1:43106: EOF 2018/08/06 12:14:22 http: TLS handshake error from 10.244.1.1:43112: EOF 2018/08/06 12:14:32 http: TLS handshake error from 10.244.1.1:43118: EOF 2018/08/06 12:14:42 http: TLS handshake error from 10.244.1.1:43124: EOF 2018/08/06 12:14:52 http: TLS handshake error from 10.244.1.1:43130: EOF 2018/08/06 12:15:02 http: TLS handshake error from 10.244.1.1:43136: EOF 2018/08/06 12:15:12 http: TLS handshake error from 10.244.1.1:43142: EOF 2018/08/06 12:15:22 http: TLS handshake error from 10.244.1.1:43148: EOF 2018/08/06 12:15:32 http: TLS handshake error from 10.244.1.1:43154: EOF 2018/08/06 12:15:42 http: TLS handshake error from 10.244.1.1:43160: EOF 2018/08/06 12:15:52 http: TLS handshake error from 10.244.1.1:43166: EOF 2018/08/06 12:16:02 http: TLS handshake error from 10.244.1.1:43172: EOF 2018/08/06 12:16:12 http: TLS handshake error from 10.244.1.1:43178: EOF 2018/08/06 12:16:22 http: TLS handshake error from 10.244.1.1:43184: EOF 2018/08/06 12:16:32 http: TLS handshake error from 10.244.1.1:43190: EOF Pod name: virt-controller-67dcdd8464-68pjr Pod 
phase: Running level=info timestamp=2018-08-06T12:04:46.713705Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 level=info timestamp=2018-08-06T12:10:16.213539Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmirsInformer" level=info timestamp=2018-08-06T12:10:16.217157Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubevirtconfigs" level=info timestamp=2018-08-06T12:10:16.217265Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmInformer" level=info timestamp=2018-08-06T12:10:16.217381Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer limitrangeInformer" level=info timestamp=2018-08-06T12:10:16.217463Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiInformer" level=info timestamp=2018-08-06T12:10:16.217505Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer" level=info timestamp=2018-08-06T12:10:16.219271Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer" level=info timestamp=2018-08-06T12:10:16.220234Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiPresetInformer" level=info timestamp=2018-08-06T12:10:16.221438Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller." level=info timestamp=2018-08-06T12:10:16.243423Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." level=info timestamp=2018-08-06T12:10:16.250160Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-08-06T12:10:16.250471Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-08-06T12:10:16.251701Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer." Pod name: virt-controller-67dcdd8464-vhg8t Pod phase: Running level=info timestamp=2018-08-06T12:10:32.868380Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-hjk99 Pod phase: Running level=info timestamp=2018-08-06T12:04:44.019220Z pos=virt-handler.go:89 component=virt-handler hostname=node01 level=info timestamp=2018-08-06T12:04:44.098096Z pos=virtinformers.go:107 component=virt-handler msg="STARTING informer kubevirtconfigs" level=info timestamp=2018-08-06T12:04:44.299298Z pos=vm.go:208 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-08-06T12:04:44.302724Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" Pod name: virt-handler-ncdgb Pod phase: Running level=info timestamp=2018-08-06T12:15:06.744175Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T12:15:06.744318Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmibhxqp kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T12:15:06.744546Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmibhxqp kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-08-06T12:15:06.748046Z pos=vm.go:307 component=virt-handler msg="Processing vmi testvmim4bp4, existing: false\n" level=info timestamp=2018-08-06T12:15:06.748239Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T12:15:06.748389Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmim4bp4 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T12:15:06.748613Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmim4bp4 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T12:15:06.772677Z pos=vm.go:307 component=virt-handler msg="Processing vmi testvmipmjsf, existing: false\n" level=info timestamp=2018-08-06T12:15:06.772792Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T12:15:06.772939Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmipmjsf kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T12:15:06.773219Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmipmjsf kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T12:15:06.801144Z pos=vm.go:307 component=virt-handler msg="Processing vmi testvmiz64vc, existing: false\n" level=info timestamp=2018-08-06T12:15:06.801255Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T12:15:06.801400Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmiz64vc kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T12:15:06.801599Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmiz64vc kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." • Failure [93.359 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Timed out after 92.032s. 
Expected error: <*errors.StatusError | 0xc420350240>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, } Timeout: request did not complete within allowed duration not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:64 ------------------------------ STEP: Starting a VirtualMachineInstance Pod name: disks-images-provider-5pdcr Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-8bqjc Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-9pbmx Pod phase: Running 2018/08/06 12:15:45 http: TLS handshake error from 10.244.0.1:57540: EOF 2018/08/06 12:15:55 http: TLS handshake error from 10.244.0.1:57600: EOF 2018/08/06 12:16:05 http: TLS handshake error from 10.244.0.1:57660: EOF 2018/08/06 12:16:15 http: TLS handshake error from 10.244.0.1:57720: EOF 2018/08/06 12:16:25 http: TLS handshake error from 10.244.0.1:57780: EOF 2018/08/06 12:16:35 http: TLS handshake error from 10.244.0.1:57840: EOF 2018/08/06 12:16:45 http: TLS handshake error from 10.244.0.1:57900: EOF 2018/08/06 12:16:55 http: TLS handshake error from 10.244.0.1:57960: EOF 2018/08/06 12:17:05 http: TLS handshake error from 10.244.0.1:58020: EOF 2018/08/06 12:17:15 http: TLS handshake error from 10.244.0.1:58080: EOF 2018/08/06 12:17:25 http: TLS handshake error from 10.244.0.1:58140: EOF 2018/08/06 12:17:35 http: TLS handshake error from 10.244.0.1:58200: EOF 2018/08/06 12:17:45 http: TLS handshake error from 10.244.0.1:58260: EOF 2018/08/06 12:17:55 http: TLS handshake error from 10.244.0.1:58320: EOF 2018/08/06 12:18:05 http: TLS handshake error from 10.244.0.1:58380: EOF Pod name: virt-api-bcc6b587d-z6t6r Pod phase: Running 2018/08/06 12:15:42 http: TLS handshake error from 10.244.1.1:43160: EOF 2018/08/06 12:15:52 http: TLS handshake error from 10.244.1.1:43166: EOF 2018/08/06 12:16:02 http: TLS handshake error from 10.244.1.1:43172: EOF 2018/08/06 12:16:12 http: TLS handshake error from 10.244.1.1:43178: EOF 2018/08/06 12:16:22 http: TLS handshake error from 10.244.1.1:43184: EOF 2018/08/06 12:16:32 http: TLS handshake error from 10.244.1.1:43190: EOF 2018/08/06 12:16:42 http: TLS handshake error from 10.244.1.1:43196: EOF 2018/08/06 12:16:52 http: TLS handshake error from 10.244.1.1:43202: EOF 2018/08/06 12:17:02 http: TLS handshake error from 10.244.1.1:43208: EOF 2018/08/06 12:17:12 http: TLS handshake error from 10.244.1.1:43214: EOF 2018/08/06 12:17:22 http: TLS handshake error from 10.244.1.1:43220: EOF 2018/08/06 12:17:32 http: TLS handshake error from 10.244.1.1:43226: EOF 2018/08/06 12:17:42 http: TLS handshake error from 10.244.1.1:43232: EOF 2018/08/06 12:17:52 http: TLS handshake error from 10.244.1.1:43238: EOF 2018/08/06 12:18:02 http: TLS handshake error from 10.244.1.1:43244: EOF Pod name: virt-controller-67dcdd8464-68pjr Pod phase: Running level=info timestamp=2018-08-06T12:04:46.713705Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 level=info timestamp=2018-08-06T12:10:16.213539Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmirsInformer" level=info timestamp=2018-08-06T12:10:16.217157Z pos=virtinformers.go:107 component=virt-controller 
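Every failure in this run has the same shape: the test's call against the apiserver comes back as a 504 StatusError with Reason "Timeout". A hedged sketch of how that class of error is recognized programmatically with k8s.io/apimachinery's standard error helpers (the reconstructed error value mirrors the dumps above; the surrounding code is illustrative, not KubeVirt's actual source):

package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Reconstruct the error the tests keep hitting (Code 504, Reason Timeout).
	err := &apierrors.StatusError{ErrStatus: metav1.Status{
		Status:  metav1.StatusFailure,
		Message: "Timeout: request did not complete within allowed duration",
		Reason:  metav1.StatusReasonTimeout,
		Code:    504,
	}}

	// IsTimeout matches exactly this apiserver-side timeout reason.
	fmt.Println(apierrors.IsTimeout(err)) // true
}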
service=http msg="STARTING informer kubevirtconfigs" level=info timestamp=2018-08-06T12:10:16.217265Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmInformer" level=info timestamp=2018-08-06T12:10:16.217381Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer limitrangeInformer" level=info timestamp=2018-08-06T12:10:16.217463Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiInformer" level=info timestamp=2018-08-06T12:10:16.217505Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer" level=info timestamp=2018-08-06T12:10:16.219271Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer" level=info timestamp=2018-08-06T12:10:16.220234Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiPresetInformer" level=info timestamp=2018-08-06T12:10:16.221438Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller." level=info timestamp=2018-08-06T12:10:16.243423Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." level=info timestamp=2018-08-06T12:10:16.250160Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-08-06T12:10:16.250471Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-08-06T12:10:16.251701Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer." Pod name: virt-controller-67dcdd8464-vhg8t Pod phase: Running level=info timestamp=2018-08-06T12:10:32.868380Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-hjk99 Pod phase: Running level=info timestamp=2018-08-06T12:04:44.019220Z pos=virt-handler.go:89 component=virt-handler hostname=node01 level=info timestamp=2018-08-06T12:04:44.098096Z pos=virtinformers.go:107 component=virt-handler msg="STARTING informer kubevirtconfigs" level=info timestamp=2018-08-06T12:04:44.299298Z pos=vm.go:208 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-08-06T12:04:44.302724Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" Pod name: virt-handler-ncdgb Pod phase: Running level=info timestamp=2018-08-06T12:15:06.744175Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T12:15:06.744318Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmibhxqp kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T12:15:06.744546Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmibhxqp kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
• Failure [92.491 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with CDRom PVC [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

        Timed out after 92.027s.
        Expected error:
            <*errors.StatusError | 0xc420351320>: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                    Status: "Failure",
                    Message: "Timeout: request did not complete within allowed duration",
                    Reason: "Timeout",
                    Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                    Code: 504,
                },
            }
            Timeout: request did not complete within allowed duration
        not to have occurred

        /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:64
------------------------------
STEP: Starting a VirtualMachineInstance
Pod name: disks-images-provider-5pdcr Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-8bqjc Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-bcc6b587d-9pbmx Pod phase: Running
2018/08/06 12:17:15 http: TLS handshake error from 10.244.0.1:58080: EOF
2018/08/06 12:17:25 http: TLS handshake error from 10.244.0.1:58140: EOF
2018/08/06 12:17:35 http: TLS handshake error from 10.244.0.1:58200: EOF
2018/08/06 12:17:45 http: TLS handshake error from 10.244.0.1:58260: EOF
2018/08/06 12:17:55 http: TLS handshake error from 10.244.0.1:58320: EOF
2018/08/06 12:18:05 http: TLS handshake error from 10.244.0.1:58380: EOF
2018/08/06 12:18:15 http: TLS handshake error from 10.244.0.1:58440: EOF
2018/08/06 12:18:25 http: TLS handshake error from 10.244.0.1:58500: EOF
2018/08/06 12:18:35 http: TLS handshake error from 10.244.0.1:58560: EOF
2018/08/06 12:18:45 http: TLS handshake error from 10.244.0.1:58620: EOF
2018/08/06 12:18:55 http: TLS handshake error from 10.244.0.1:58680: EOF
2018/08/06 12:19:05 http: TLS handshake error from 10.244.0.1:58740: EOF
2018/08/06 12:19:15 http: TLS handshake error from 10.244.0.1:58800: EOF
2018/08/06 12:19:25 http: TLS handshake error from 10.244.0.1:58860: EOF
2018/08/06 12:19:35 http: TLS handshake error from 10.244.0.1:58920: EOF
Pod name: virt-api-bcc6b587d-z6t6r Pod phase: Running
2018/08/06 12:17:22 http: TLS handshake error from 10.244.1.1:43220: EOF
2018/08/06 12:17:32 http: TLS handshake error from 10.244.1.1:43226: EOF
2018/08/06 12:17:42 http: TLS handshake error from 10.244.1.1:43232: EOF
2018/08/06 12:17:52 http: TLS handshake error from 10.244.1.1:43238: EOF
2018/08/06 12:18:02 http: TLS handshake error from 10.244.1.1:43244: EOF
2018/08/06 12:18:12 http: TLS handshake error from 10.244.1.1:43250: EOF
2018/08/06 12:18:22 http: TLS handshake error from 10.244.1.1:43256: EOF
2018/08/06 12:18:32 http: TLS handshake error from 10.244.1.1:43262: EOF
2018/08/06 12:18:42 http: TLS handshake error from 10.244.1.1:43268: EOF
2018/08/06 12:18:52 http: TLS handshake error from 10.244.1.1:43274: EOF
2018/08/06 12:19:02 http: TLS handshake error from 10.244.1.1:43280: EOF
2018/08/06 12:19:12 http: TLS handshake error from 10.244.1.1:43286: EOF
2018/08/06 12:19:22 http: TLS handshake error from 10.244.1.1:43292: EOF
2018/08/06 12:19:32 http: TLS handshake error from 10.244.1.1:43298: EOF
2018/08/06 12:19:42 http: TLS handshake error from 10.244.1.1:43304: EOF
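The "with Disk PVC" / "with CDRom PVC" entries in the failure headers come from Ginkgo's table extension (visible in the table.go/table_entry.go frames above). A minimal sketch of that pattern, with a placeholder helper and assertion rather than KubeVirt's real test body:

package tests_test

import (
	. "github.com/onsi/ginkgo"
	"github.com/onsi/ginkgo/extensions/table"
	. "github.com/onsi/gomega"
)

var _ = Describe("Storage", func() {
	// startVMIWithPVC is a hypothetical stand-in for the real helper that
	// creates a VirtualMachineInstance backed by the Alpine PVC and waits
	// for it to start.
	startVMIWithPVC := func(deviceType string) error {
		return nil // placeholder
	}

	table.DescribeTable("should be successfully started",
		func(deviceType string) {
			// In this run the create call returns the 504 Timeout
			// StatusError, so an expectation like this one fails.
			Expect(startVMIWithPVC(deviceType)).ToNot(HaveOccurred())
		},
		table.Entry("with Disk PVC", "disk"),
		table.Entry("with CDRom PVC", "cdrom"),
	)
})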
service=http msg="STARTING informer kubevirtconfigs" level=info timestamp=2018-08-06T12:10:16.217265Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmInformer" level=info timestamp=2018-08-06T12:10:16.217381Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer limitrangeInformer" level=info timestamp=2018-08-06T12:10:16.217463Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiInformer" level=info timestamp=2018-08-06T12:10:16.217505Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer" level=info timestamp=2018-08-06T12:10:16.219271Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer" level=info timestamp=2018-08-06T12:10:16.220234Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiPresetInformer" level=info timestamp=2018-08-06T12:10:16.221438Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller." level=info timestamp=2018-08-06T12:10:16.243423Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." level=info timestamp=2018-08-06T12:10:16.250160Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-08-06T12:10:16.250471Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-08-06T12:10:16.251701Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer." Pod name: virt-controller-67dcdd8464-vhg8t Pod phase: Running level=info timestamp=2018-08-06T12:10:32.868380Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-hjk99 Pod phase: Running level=info timestamp=2018-08-06T12:04:44.019220Z pos=virt-handler.go:89 component=virt-handler hostname=node01 level=info timestamp=2018-08-06T12:04:44.098096Z pos=virtinformers.go:107 component=virt-handler msg="STARTING informer kubevirtconfigs" level=info timestamp=2018-08-06T12:04:44.299298Z pos=vm.go:208 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-08-06T12:04:44.302724Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" Pod name: virt-handler-ncdgb Pod phase: Running level=info timestamp=2018-08-06T12:15:06.744175Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T12:15:06.744318Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmibhxqp kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T12:15:06.744546Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmibhxqp kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
• Failure [92.604 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started and stopped multiple times
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with Disk PVC [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

        Timed out after 92.030s.
        Expected error:
            <*errors.StatusError | 0xc4203505a0>: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                    Status: "Failure",
                    Message: "Timeout: request did not complete within allowed duration",
                    Reason: "Timeout",
                    Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                    Code: 504,
                },
            }
            Timeout: request did not complete within allowed duration
        not to have occurred

        /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:64
------------------------------
STEP: Starting and stopping the VirtualMachineInstance number of times
STEP: Starting a VirtualMachineInstance
Pod name: disks-images-provider-5pdcr Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-8bqjc Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-bcc6b587d-9pbmx Pod phase: Running
2018/08/06 12:18:55 http: TLS handshake error from 10.244.0.1:58680: EOF
2018/08/06 12:19:05 http: TLS handshake error from 10.244.0.1:58740: EOF
2018/08/06 12:19:15 http: TLS handshake error from 10.244.0.1:58800: EOF
2018/08/06 12:19:25 http: TLS handshake error from 10.244.0.1:58860: EOF
2018/08/06 12:19:35 http: TLS handshake error from 10.244.0.1:58920: EOF
2018/08/06 12:19:45 http: TLS handshake error from 10.244.0.1:58980: EOF
2018/08/06 12:19:55 http: TLS handshake error from 10.244.0.1:59040: EOF
2018/08/06 12:20:05 http: TLS handshake error from 10.244.0.1:59100: EOF
2018/08/06 12:20:15 http: TLS handshake error from 10.244.0.1:59160: EOF
2018/08/06 12:20:25 http: TLS handshake error from 10.244.0.1:59220: EOF
2018/08/06 12:20:35 http: TLS handshake error from 10.244.0.1:59280: EOF
2018/08/06 12:20:45 http: TLS handshake error from 10.244.0.1:59340: EOF
2018/08/06 12:20:55 http: TLS handshake error from 10.244.0.1:59400: EOF
2018/08/06 12:21:05 http: TLS handshake error from 10.244.0.1:59460: EOF
2018/08/06 12:21:15 http: TLS handshake error from 10.244.0.1:59520: EOF
Pod name: virt-api-bcc6b587d-z6t6r Pod phase: Running
2018/08/06 12:18:52 http: TLS handshake error from 10.244.1.1:43274: EOF
2018/08/06 12:19:02 http: TLS handshake error from 10.244.1.1:43280: EOF
2018/08/06 12:19:12 http: TLS handshake error from 10.244.1.1:43286: EOF
2018/08/06 12:19:22 http: TLS handshake error from 10.244.1.1:43292: EOF
2018/08/06 12:19:32 http: TLS handshake error from 10.244.1.1:43298: EOF
2018/08/06 12:19:42 http: TLS handshake error from 10.244.1.1:43304: EOF
2018/08/06 12:19:52 http: TLS handshake error from 10.244.1.1:43310: EOF
2018/08/06 12:20:02 http: TLS handshake error from 10.244.1.1:43316: EOF
2018/08/06 12:20:12 http: TLS handshake error from 10.244.1.1:43322: EOF
2018/08/06 12:20:22 http: TLS handshake error from 10.244.1.1:43328: EOF
2018/08/06 12:20:32 http: TLS handshake error from 10.244.1.1:43334: EOF
2018/08/06 12:20:42 http: TLS handshake error from 10.244.1.1:43340: EOF
2018/08/06 12:20:52 http: TLS handshake error from 10.244.1.1:43346: EOF
2018/08/06 12:21:02 http: TLS handshake error from 10.244.1.1:43352: EOF
2018/08/06 12:21:12 http: TLS handshake error from 10.244.1.1:43358: EOF
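The "started and stopped multiple times" cases loop over create, wait-for-ready, delete; the wait half is where a 92-second deadline like the ones above bites. A generic sketch of such a wait using k8s.io/apimachinery's wait package; checkPhase is a hypothetical stand-in for fetching the VMI and inspecting its phase:

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// checkPhase is hypothetical: imagine it queries the VMI and reports
// whether it has reached the Running phase yet.
func checkPhase() (bool, error) {
	return false, nil
}

func main() {
	// Poll every second, give up after 90s, roughly the budget after
	// which the tests above report "Timed out after 92.0xs".
	err := wait.PollImmediate(1*time.Second, 90*time.Second, checkPhase)
	if errors.Is(err, wait.ErrWaitTimeout) {
		fmt.Println("VMI never reached Running within the deadline")
	}
}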
• Failure [92.558 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started and stopped multiple times
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with CDRom PVC [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

        Timed out after 92.028s.
        Expected error:
            <*errors.StatusError | 0xc420350240>: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                    Status: "Failure",
                    Message: "Timeout: request did not complete within allowed duration",
                    Reason: "Timeout",
                    Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                    Code: 504,
                },
            }
            Timeout: request did not complete within allowed duration
        not to have occurred

        /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:64
------------------------------
STEP: Starting and stopping the VirtualMachineInstance number of times
STEP: Starting a VirtualMachineInstance
Pod name: disks-images-provider-5pdcr Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-8bqjc Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-bcc6b587d-9pbmx Pod phase: Running
2018/08/06 12:20:25 http: TLS handshake error from 10.244.0.1:59220: EOF
2018/08/06 12:20:35 http: TLS handshake error from 10.244.0.1:59280: EOF
2018/08/06 12:20:45 http: TLS handshake error from 10.244.0.1:59340: EOF
2018/08/06 12:20:55 http: TLS handshake error from 10.244.0.1:59400: EOF
2018/08/06 12:21:05 http: TLS handshake error from 10.244.0.1:59460: EOF
2018/08/06 12:21:15 http: TLS handshake error from 10.244.0.1:59520: EOF
2018/08/06 12:21:25 http: TLS handshake error from 10.244.0.1:59580: EOF
2018/08/06 12:21:35 http: TLS handshake error from 10.244.0.1:59640: EOF
2018/08/06 12:21:45 http: TLS handshake error from 10.244.0.1:59700: EOF
2018/08/06 12:21:55 http: TLS handshake error from 10.244.0.1:59760: EOF
2018/08/06 12:22:05 http: TLS handshake error from 10.244.0.1:59820: EOF
2018/08/06 12:22:15 http: TLS handshake error from 10.244.0.1:59880: EOF
2018/08/06 12:22:25 http: TLS handshake error from 10.244.0.1:59940: EOF
2018/08/06 12:22:35 http: TLS handshake error from 10.244.0.1:60000: EOF
2018/08/06 12:22:45 http: TLS handshake error from 10.244.0.1:60060: EOF
Pod name: virt-api-bcc6b587d-z6t6r Pod phase: Running
2018/08/06 12:20:22 http: TLS handshake error from 10.244.1.1:43328: EOF
2018/08/06 12:20:32 http: TLS handshake error from 10.244.1.1:43334: EOF
2018/08/06 12:20:42 http: TLS handshake error from 10.244.1.1:43340: EOF
2018/08/06 12:20:52 http: TLS handshake error from 10.244.1.1:43346: EOF
2018/08/06 12:21:02 http: TLS handshake error from 10.244.1.1:43352: EOF
2018/08/06 12:21:12 http: TLS handshake error from 10.244.1.1:43358: EOF
2018/08/06 12:21:22 http: TLS handshake error from 10.244.1.1:43364: EOF
2018/08/06 12:21:32 http: TLS handshake error from 10.244.1.1:43370: EOF
2018/08/06 12:21:42 http: TLS handshake error from 10.244.1.1:43376: EOF
2018/08/06 12:21:52 http: TLS handshake error from 10.244.1.1:43382: EOF
2018/08/06 12:22:02 http: TLS handshake error from 10.244.1.1:43388: EOF
2018/08/06 12:22:12 http: TLS handshake error from 10.244.1.1:43394: EOF
2018/08/06 12:22:22 http: TLS handshake error from 10.244.1.1:43400: EOF
2018/08/06 12:22:32 http: TLS handshake error from 10.244.1.1:43406: EOF
2018/08/06 12:22:42 http: TLS handshake error from 10.244.1.1:43412: EOF
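The steady 10-second drumbeat of "TLS handshake error ... EOF" in both virt-api pods is what Go's HTTP server logs when a client opens a TCP connection and closes it without completing a TLS handshake, which is exactly what a TCP-level health probe does against an HTTPS port. A self-contained demonstration of that mechanism (illustrative only; it reproduces the message, not virt-api itself):

package main

import (
	"net"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// An HTTPS server; its internal logger prints
	// "http: TLS handshake error from <addr>: EOF" on bare TCP probes.
	srv := httptest.NewTLSServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {}))
	defer srv.Close()

	// Simulate a TCP health probe: connect, then hang up without TLS.
	conn, err := net.Dial("tcp", srv.Listener.Addr().String())
	if err == nil {
		conn.Close()
	}

	// Give the server a moment to log the failed handshake.
	time.Sleep(100 * time.Millisecond)
}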
• Failure [92.522 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With an emptyDisk defined
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:113
      should create a writeable emptyDisk with the right capacity [It]
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:115

      Timed out after 92.028s.
      Expected error:
          <*errors.StatusError | 0xc420b50120>: {
              ErrStatus: {
                  TypeMeta: {Kind: "", APIVersion: ""},
                  ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                  Status: "Failure",
                  Message: "Timeout: request did not complete within allowed duration",
                  Reason: "Timeout",
                  Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
                  Code: 504,
              },
          }
          Timeout: request did not complete within allowed duration
      not to have occurred

      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:64
------------------------------
STEP: Starting a VirtualMachineInstance
Pod name: disks-images-provider-5pdcr Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-8bqjc Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-bcc6b587d-9pbmx Pod phase: Running
2018/08/06 12:21:55 http: TLS handshake error from 10.244.0.1:59760: EOF
2018/08/06 12:22:05 http: TLS handshake error from 10.244.0.1:59820: EOF
2018/08/06 12:22:15 http: TLS handshake error from 10.244.0.1:59880: EOF
2018/08/06 12:22:25 http: TLS handshake error from 10.244.0.1:59940: EOF
2018/08/06 12:22:35 http: TLS handshake error from 10.244.0.1:60000: EOF
2018/08/06 12:22:45 http: TLS handshake error from 10.244.0.1:60060: EOF
2018/08/06 12:22:55 http: TLS handshake error from 10.244.0.1:60120: EOF
2018/08/06 12:23:05 http: TLS handshake error from 10.244.0.1:60180: EOF
2018/08/06 12:23:15 http: TLS handshake error from 10.244.0.1:60240: EOF
2018/08/06 12:23:25 http: TLS handshake error from 10.244.0.1:60300: EOF
2018/08/06 12:23:35 http: TLS handshake error from 10.244.0.1:60360: EOF
2018/08/06 12:23:45 http: TLS handshake error from 10.244.0.1:60420: EOF
2018/08/06 12:23:55 http: TLS handshake error from 10.244.0.1:60480: EOF
2018/08/06 12:24:05 http: TLS handshake error from 10.244.0.1:60540: EOF
2018/08/06 12:24:15 http: TLS handshake error from 10.244.0.1:60600: EOF
Pod name: virt-api-bcc6b587d-z6t6r Pod phase: Running
2018/08/06 12:21:52 http: TLS handshake error from 10.244.1.1:43382: EOF
2018/08/06 12:22:02 http: TLS handshake error from 10.244.1.1:43388: EOF
2018/08/06 12:22:12 http: TLS handshake error from 10.244.1.1:43394: EOF
2018/08/06 12:22:22 http: TLS handshake error from 10.244.1.1:43400: EOF
2018/08/06 12:22:32 http: TLS handshake error from 10.244.1.1:43406: EOF
2018/08/06 12:22:42 http: TLS handshake error from 10.244.1.1:43412: EOF
2018/08/06 12:22:52 http: TLS handshake error from 10.244.1.1:43418: EOF
2018/08/06 12:23:02 http: TLS handshake error from 10.244.1.1:43424: EOF
2018/08/06 12:23:12 http: TLS handshake error from 10.244.1.1:43430: EOF
2018/08/06 12:23:22 http: TLS handshake error from 10.244.1.1:43436: EOF
2018/08/06 12:23:32 http: TLS handshake error from 10.244.1.1:43442: EOF
2018/08/06 12:23:42 http: TLS handshake error from 10.244.1.1:43448: EOF
2018/08/06 12:23:52 http: TLS handshake error from 10.244.1.1:43454: EOF
2018/08/06 12:24:02 http: TLS handshake error from 10.244.1.1:43460: EOF
2018/08/06 12:24:12 http: TLS handshake error from 10.244.1.1:43466: EOF
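An emptyDisk volume asks virt-launcher to create a blank disk of a given capacity alongside the VMI. A sketch of how such a volume is declared with KubeVirt's Go API types, assuming the pkg/api/v1 package layout of this era; treat the exact import path and field names as assumptions rather than a confirmed API surface (the serial-number variant in the next test additionally sets a serial on the matching disk):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
	v1 "kubevirt.io/kubevirt/pkg/api/v1"
)

func main() {
	// Declare a 2Gi emptyDisk volume (capacity value chosen for
	// illustration; the test's actual size is not in the log).
	vol := v1.Volume{
		Name: "emptydisk",
		VolumeSource: v1.VolumeSource{
			EmptyDisk: &v1.EmptyDiskSource{
				Capacity: resource.MustParse("2Gi"),
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}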
service=http msg="STARTING informer kubevirtconfigs" level=info timestamp=2018-08-06T12:10:16.217265Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmInformer" level=info timestamp=2018-08-06T12:10:16.217381Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer limitrangeInformer" level=info timestamp=2018-08-06T12:10:16.217463Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiInformer" level=info timestamp=2018-08-06T12:10:16.217505Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtPodInformer" level=info timestamp=2018-08-06T12:10:16.219271Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer kubeVirtNodeInformer" level=info timestamp=2018-08-06T12:10:16.220234Z pos=virtinformers.go:107 component=virt-controller service=http msg="STARTING informer vmiPresetInformer" level=info timestamp=2018-08-06T12:10:16.221438Z pos=vm.go:85 component=virt-controller service=http msg="Starting VirtualMachine controller." level=info timestamp=2018-08-06T12:10:16.243423Z pos=node.go:104 component=virt-controller service=http msg="Starting node controller." level=info timestamp=2018-08-06T12:10:16.250160Z pos=vmi.go:129 component=virt-controller service=http msg="Starting vmi controller." level=info timestamp=2018-08-06T12:10:16.250471Z pos=replicaset.go:111 component=virt-controller service=http msg="Starting VirtualMachineInstanceReplicaSet controller." level=info timestamp=2018-08-06T12:10:16.251701Z pos=preset.go:74 component=virt-controller service=http msg="Starting Virtual Machine Initializer." Pod name: virt-controller-67dcdd8464-vhg8t Pod phase: Running level=info timestamp=2018-08-06T12:10:32.868380Z pos=application.go:177 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-hjk99 Pod phase: Running level=info timestamp=2018-08-06T12:04:44.019220Z pos=virt-handler.go:89 component=virt-handler hostname=node01 level=info timestamp=2018-08-06T12:04:44.098096Z pos=virtinformers.go:107 component=virt-handler msg="STARTING informer kubevirtconfigs" level=info timestamp=2018-08-06T12:04:44.299298Z pos=vm.go:208 component=virt-handler msg="Starting virt-handler controller." level=info timestamp=2018-08-06T12:04:44.302724Z pos=cache.go:151 component=virt-handler msg="Synchronizing domains" Pod name: virt-handler-ncdgb Pod phase: Running level=info timestamp=2018-08-06T12:15:06.744175Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T12:15:06.744318Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmibhxqp kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T12:15:06.744546Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmibhxqp kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-08-06T12:15:06.748046Z pos=vm.go:307 component=virt-handler msg="Processing vmi testvmim4bp4, existing: false\n" level=info timestamp=2018-08-06T12:15:06.748239Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T12:15:06.748389Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmim4bp4 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T12:15:06.748613Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmim4bp4 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T12:15:06.772677Z pos=vm.go:307 component=virt-handler msg="Processing vmi testvmipmjsf, existing: false\n" level=info timestamp=2018-08-06T12:15:06.772792Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T12:15:06.772939Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmipmjsf kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T12:15:06.773219Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmipmjsf kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-08-06T12:15:06.801144Z pos=vm.go:307 component=virt-handler msg="Processing vmi testvmiz64vc, existing: false\n" level=info timestamp=2018-08-06T12:15:06.801255Z pos=vm.go:323 component=virt-handler msg="Domain: existing: false\n" level=info timestamp=2018-08-06T12:15:06.801400Z pos=vm.go:407 component=virt-handler namespace=kubevirt-test-default name=testvmiz64vc kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-08-06T12:15:06.801599Z pos=vm.go:434 component=virt-handler namespace=kubevirt-test-default name=testvmiz64vc kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." • Failure [92.559 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With an emptyDisk defined and a specified serial number /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:163 should create a writeable emptyDisk with the specified serial number [It] /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:165 Timed out after 92.033s. 
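For context, the failing spec exercises an emptyDisk volume carrying a fixed disk serial number. A minimal sketch of such a spec, assuming the present-day kubevirt.io/api core/v1 Go types rather than the vendored tree this run used; the name, serial, and size are illustrative, not the suite's actual values:

    package main

    import (
            "fmt"

            "k8s.io/apimachinery/pkg/api/resource"
            v1 "kubevirt.io/api/core/v1"
    )

    func main() {
            vmi := &v1.VirtualMachineInstance{}
            // Disk front-end: a virtio disk with a fixed serial that the guest
            // should see; the spec above asserts exactly that.
            vmi.Spec.Domain.Devices.Disks = append(vmi.Spec.Domain.Devices.Disks, v1.Disk{
                    Name:       "emptydisk",
                    Serial:     "sn-11223344", // illustrative serial
                    DiskDevice: v1.DiskDevice{Disk: &v1.DiskTarget{Bus: "virtio"}},
            })
            // Backing volume: an empty sparse disk allocated on the node at boot.
            vmi.Spec.Volumes = append(vmi.Spec.Volumes, v1.Volume{
                    Name: "emptydisk",
                    VolumeSource: v1.VolumeSource{
                            EmptyDisk: &v1.EmptyDiskSource{Capacity: resource.MustParse("1Gi")},
                    },
            })
            fmt.Println(len(vmi.Spec.Volumes))
    }

Note that the failure above never reaches the disk at all: the VMI POST itself timed out, so the emptyDisk behaviour was never evaluated.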
STEP: Starting a VirtualMachineInstance
Pod name: virt-api-bcc6b587d-9pbmx
Pod phase: Running
2018/08/06 12:23:25 http: TLS handshake error from 10.244.0.1:60300: EOF
2018/08/06 12:23:35 http: TLS handshake error from 10.244.0.1:60360: EOF
2018/08/06 12:23:45 http: TLS handshake error from 10.244.0.1:60420: EOF
2018/08/06 12:23:55 http: TLS handshake error from 10.244.0.1:60480: EOF
2018/08/06 12:24:05 http: TLS handshake error from 10.244.0.1:60540: EOF
2018/08/06 12:24:15 http: TLS handshake error from 10.244.0.1:60600: EOF
2018/08/06 12:24:25 http: TLS handshake error from 10.244.0.1:60660: EOF
2018/08/06 12:24:35 http: TLS handshake error from 10.244.0.1:60720: EOF
2018/08/06 12:24:45 http: TLS handshake error from 10.244.0.1:60780: EOF
2018/08/06 12:24:55 http: TLS handshake error from 10.244.0.1:60840: EOF
2018/08/06 12:25:05 http: TLS handshake error from 10.244.0.1:60900: EOF
2018/08/06 12:25:15 http: TLS handshake error from 10.244.0.1:60960: EOF
2018/08/06 12:25:25 http: TLS handshake error from 10.244.0.1:32788: EOF
2018/08/06 12:25:35 http: TLS handshake error from 10.244.0.1:32848: EOF
2018/08/06 12:25:45 http: TLS handshake error from 10.244.0.1:32908: EOF
Pod name: virt-api-bcc6b587d-z6t6r
Pod phase: Running
2018/08/06 12:23:32 http: TLS handshake error from 10.244.1.1:43442: EOF
2018/08/06 12:23:42 http: TLS handshake error from 10.244.1.1:43448: EOF
2018/08/06 12:23:52 http: TLS handshake error from 10.244.1.1:43454: EOF
2018/08/06 12:24:02 http: TLS handshake error from 10.244.1.1:43460: EOF
2018/08/06 12:24:12 http: TLS handshake error from 10.244.1.1:43466: EOF
2018/08/06 12:24:22 http: TLS handshake error from 10.244.1.1:43472: EOF
2018/08/06 12:24:32 http: TLS handshake error from 10.244.1.1:43478: EOF
2018/08/06 12:24:42 http: TLS handshake error from 10.244.1.1:43484: EOF
2018/08/06 12:24:52 http: TLS handshake error from 10.244.1.1:43490: EOF
2018/08/06 12:25:02 http: TLS handshake error from 10.244.1.1:43496: EOF
2018/08/06 12:25:12 http: TLS handshake error from 10.244.1.1:43502: EOF
2018/08/06 12:25:22 http: TLS handshake error from 10.244.1.1:43508: EOF
2018/08/06 12:25:32 http: TLS handshake error from 10.244.1.1:43514: EOF
2018/08/06 12:25:42 http: TLS handshake error from 10.244.1.1:43520: EOF
2018/08/06 12:25:52 http: TLS handshake error from 10.244.1.1:43526: EOF
• Failure [92.560 seconds]
Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205
      should be successfully started [It] /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:207

      Timed out after 92.032s.
      Expected error:
          <*errors.StatusError | 0xc4206786c0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
          Timeout: request did not complete within allowed duration
      not to have occurred
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:64
------------------------------
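The two "ephemeral alpine PVC" specs above and below rely on KubeVirt's ephemeral volume type. As a rough illustration, again assuming the current kubevirt.io/api core/v1 types, with a hypothetical claim name:

    package main

    import (
            "fmt"

            k8sv1 "k8s.io/api/core/v1"
            v1 "kubevirt.io/api/core/v1"
    )

    func main() {
            // "Ephemeral" wraps an existing PVC in a copy-on-write layer: reads
            // come from the PVC, writes land in a local scratch overlay that is
            // discarded when the VMI stops -- which is what the later
            // "should not persist data" spec verifies.
            vol := v1.Volume{
                    Name: "pvcdisk",
                    VolumeSource: v1.VolumeSource{
                            Ephemeral: &v1.EphemeralVolumeSource{
                                    PersistentVolumeClaim: &k8sv1.PersistentVolumeClaimVolumeSource{
                                            ClaimName: "disk-alpine", // hypothetical claim name
                                    },
                            },
                    },
            }
            fmt.Println(vol.Name)
    }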
STEP: Starting a VirtualMachineInstance
Pod name: virt-api-bcc6b587d-9pbmx
Pod phase: Running
2018/08/06 12:25:55 http: TLS handshake error from 10.244.0.1:32968: EOF
2018/08/06 12:26:05 http: TLS handshake error from 10.244.0.1:33028: EOF
2018/08/06 12:26:15 http: TLS handshake error from 10.244.0.1:33088: EOF
2018/08/06 12:26:25 http: TLS handshake error from 10.244.0.1:33148: EOF
2018/08/06 12:26:35 http: TLS handshake error from 10.244.0.1:33208: EOF
2018/08/06 12:26:45 http: TLS handshake error from 10.244.0.1:33268: EOF
level=info timestamp=2018-08-06T12:26:53.450653Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-08-06T12:26:53.484196Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/08/06 12:26:55 http: TLS handshake error from 10.244.0.1:33332: EOF
2018/08/06 12:27:05 http: TLS handshake error from 10.244.0.1:33392: EOF
2018/08/06 12:27:15 http: TLS handshake error from 10.244.0.1:33452: EOF
level=info timestamp=2018-08-06T12:27:23.116604Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-08-06T12:27:23.134083Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-08-06T12:27:23.241513Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19
2018/08/06 12:27:25 http: TLS handshake error from 10.244.0.1:33512: EOF
Pod name: virt-api-bcc6b587d-z6t6r
Pod phase: Running
2018/08/06 12:26:12 http: TLS handshake error from 10.244.1.1:43538: EOF
2018/08/06 12:26:22 http: TLS handshake error from 10.244.1.1:43544: EOF
2018/08/06 12:26:32 http: TLS handshake error from 10.244.1.1:43550: EOF
2018/08/06 12:26:42 http: TLS handshake error from 10.244.1.1:43556: EOF
2018/08/06 12:26:52 http: TLS handshake error from 10.244.1.1:43562: EOF
level=info timestamp=2018-08-06T12:26:55.982245Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-08-06T12:27:01.811699Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19
level=info timestamp=2018-08-06T12:27:01.816646Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19
2018/08/06 12:27:02 http: TLS handshake error from 10.244.1.1:43568: EOF
level=info timestamp=2018-08-06T12:27:02.689448Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-08-06T12:27:11.959733Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
2018/08/06 12:27:12 http: TLS handshake error from 10.244.1.1:43574: EOF
2018/08/06 12:27:22 http: TLS handshake error from 10.244.1.1:43580: EOF
level=info timestamp=2018-08-06T12:27:23.927245Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-08-06T12:27:26.058460Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
• Failure [92.836 seconds]
Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205
      should not persist data [It] /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:218

      Timed out after 92.058s.
      Expected error:
          <*errors.StatusError | 0xc420350240>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "Timeout: request did not complete within allowed duration", Reason: "Timeout", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 504, }, }
          Timeout: request did not complete within allowed duration
      not to have occurred
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:64
------------------------------
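All four Storage failures carry the same error: a StatusError with Reason=Timeout and HTTP 504, meaning the VMI create request reached the apiserver but was not answered within its deadline. A small sketch that rebuilds and classifies that exact error with the standard apimachinery helpers (nothing here is KubeVirt-specific):

    package main

    import (
            "fmt"

            apierrors "k8s.io/apimachinery/pkg/api/errors"
            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
            // Reconstruct the error the suite keeps hitting.
            err := &apierrors.StatusError{ErrStatus: metav1.Status{
                    Status:  metav1.StatusFailure,
                    Message: "Timeout: request did not complete within allowed duration",
                    Reason:  metav1.StatusReasonTimeout,
                    Code:    504,
            }}
            // IsTimeout is the canonical way for a client to recognise this
            // class of failure and decide to retry.
            fmt.Println(apierrors.IsTimeout(err)) // prints: true
    }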
STEP: Starting the VirtualMachineInstance
STEP: Starting a VirtualMachineInstance
2018/08/06 08:30:22 read closing down: EOF
• [SLOW TEST:174.615 seconds]
Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With VirtualMachineInstance with two PVCs /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:266
      should start vmi multiple times /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:278
------------------------------
• [SLOW TEST:37.873 seconds]
LeaderElection /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:43
  Start a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:53
    when the controller pod is not running /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:54
      should success /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:55
------------------------------
2018/08/06 08:31:47 read closing down: EOF
Service cluster-ip-vmi successfully exposed for virtualmachineinstance testvmikp6xp
• [SLOW TEST:51.465 seconds]
Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61
    Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:68
      Should expose a Cluster IP service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:71
------------------------------
Service cluster-ip-target-vmi successfully exposed for virtualmachineinstance testvmikp6xp
•
Service node-port-vmi successfully exposed for virtualmachineinstance testvmikp6xp
------------------------------
• [SLOW TEST:9.453 seconds]
Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61
    Expose NodePort service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:124
      Should expose a NodePort service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:129
------------------------------
2018/08/06 08:32:46 read closing down: EOF
Service cluster-ip-udp-vmi successfully exposed for virtualmachineinstance testvmitkkkq
• [SLOW TEST:50.571 seconds]
Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose UDP service on a VMI /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:166
    Expose ClusterIP UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:173
      Should expose a ClusterIP service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:177
------------------------------
Service node-port-udp-vmi successfully exposed for virtualmachineinstance testvmitkkkq
• [SLOW TEST:9.594 seconds]
Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose UDP service on a VMI /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:166
    Expose NodePort UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:205
      Should expose a NodePort service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:210
------------------------------
2018/08/06 08:33:51 read closing down: EOF
2018/08/06 08:34:02 read closing down: EOF
Service cluster-ip-vmirs successfully exposed for vmirs replicasetns6bx
• [SLOW TEST:63.444 seconds]
Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on a VMI replica set /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:253
    Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:286
      Should create a ClusterIP service on VMRS and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:290
------------------------------
Service cluster-ip-vm successfully exposed for virtualmachine testvmithf88
VM testvmithf88 was scheduled to start
2018/08/06 08:34:55 read closing down: EOF
• [SLOW TEST:56.858 seconds]
Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on an VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:318
    Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:362
      Connect to ClusterIP services that was set when VM was offline /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:363
------------------------------
• [SLOW TEST:19.334 seconds]
VNC /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46
  A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:54
    with VNC connection /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:62
      should allow accessing the VNC device /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:64
------------------------------
••
------------------------------
• [SLOW TEST:24.642 seconds]
HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40
  VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58
    with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59
      should successfully start with hook sidecar annotation /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:60
------------------------------
• [SLOW TEST:22.103 seconds]
HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40
  VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58
    with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59
      should call Collect and OnDefineDomain on the hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:67
------------------------------
• [SLOW TEST:25.944 seconds]
HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40
  VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58
    with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59
      should update domain XML with SM BIOS properties /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:83
------------------------------
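The HookSidecars specs above turn on a sidecar purely via an annotation on the VMI; virt-controller then injects the named image as an extra container in the virt-launcher pod. A sketch of the annotation, assuming current kubevirt.io/api types and an illustrative image reference:

    package main

    import (
            "fmt"

            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            v1 "kubevirt.io/api/core/v1"
    )

    func main() {
            vmi := &v1.VirtualMachineInstance{
                    ObjectMeta: metav1.ObjectMeta{
                            Name: "testvmi-hooks", // illustrative name
                            Annotations: map[string]string{
                                    // JSON list of sidecar images to inject.
                                    "hooks.kubevirt.io/hookSidecars": `[{"image": "registry:5000/kubevirt/example-hook-sidecar:latest"}]`,
                            },
                    },
            }
            fmt.Println(vmi.Annotations["hooks.kubevirt.io/hookSidecars"])
    }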
2018/08/06 08:37:23 read closing down: EOF
2018/08/06 08:38:13 read closing down: EOF
2018/08/06 08:38:15 read closing down: EOF
• [SLOW TEST:101.383 seconds]
Slirp /root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:39
  should be able to /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    VirtualMachineInstance with slirp interface /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
•
2018/08/06 08:38:17 read closing down: EOF
••
------------------------------
• [SLOW TEST:25.421 seconds]
VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should update VirtualMachine once VMIs are up /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:195
------------------------------
••
------------------------------
• [SLOW TEST:46.133 seconds]
VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should recreate VirtualMachineInstance if it gets deleted /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:245
------------------------------
• [SLOW TEST:62.318 seconds]
VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should recreate VirtualMachineInstance if the VirtualMachineInstance's pod gets deleted /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:265
------------------------------
• [SLOW TEST:30.933 seconds]
VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should stop VirtualMachineInstance if running set to false /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:325
------------------------------
• [SLOW TEST:172.224 seconds]
VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should start and stop VirtualMachineInstance multiple times /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:333
------------------------------
• [SLOW TEST:52.622 seconds]
VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should not update the VirtualMachineInstance spec if Running /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:346
------------------------------
• [SLOW TEST:187.300 seconds]
VirtualMachine
2018/08/06 08:48:00 read closing down: EOF
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should survive guest shutdown, multiple times /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:387
2018/08/06 08:48:00 read closing down: EOF
------------------------------
2018/08/06 08:48:00 read closing down: EOF
VM testvmiqw2cs was scheduled to start
• [SLOW TEST:24.678 seconds]
VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435
      should start a VirtualMachineInstance once /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:436
------------------------------
VM testvmittwq7 was scheduled to stop
• [SLOW TEST:31.819 seconds]
VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435
      should stop a VirtualMachineInstance once /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:467
------------------------------
• [SLOW TEST:95.534 seconds]
RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting and stopping the same VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90
    with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91
      should success multiple times /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92
------------------------------
• [SLOW TEST:27.064 seconds]
RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111
    with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112
      should not modify the spec on status update /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113
------------------------------
• [SLOW TEST:28.928 seconds]
RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting multiple VMIs /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129
    with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130
      should success /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131
------------------------------
•••••••••••
------------------------------
• [SLOW TEST:5.881 seconds]
Subresource Api /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37
  Rbac Authorization /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48
    with correct permissions /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:51
      should be allowed to access subresource endpoint /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:52
------------------------------
•
------------------------------
• [SLOW TEST:5.419 seconds]
Subresource Api /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37
  Rbac Authorization For Version Command /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:63
    with authenticated user /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:66
      should be allowed to access subresource version endpoint /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:67
------------------------------
•
------------------------------
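The Subresource Api specs exercise RBAC on the aggregated API group that the virt-api access logs earlier in this run keep serving (subresources.kubevirt.io/v1alpha2). A sketch of hitting it with a raw client-go request; the exact "/version" path is an assumption inferred from the spec name:

    package main

    import (
            "context"
            "fmt"

            "k8s.io/client-go/kubernetes"
            "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
            config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
            if err != nil {
                    panic(err)
            }
            cs, err := kubernetes.NewForConfig(config)
            if err != nil {
                    panic(err)
            }
            // Whether this returns 200 or 403 is exactly what the RBAC specs assert.
            body, err := cs.Discovery().RESTClient().Get().
                    AbsPath("/apis/subresources.kubevirt.io/v1alpha2/version"). // assumed path
                    DoRaw(context.TODO())
            fmt.Println(string(body), err)
    }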
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      should succeed to generate a VM JSON file using oc-process command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:150
      Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1381
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      with given VM JSON from the Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152
        should succeed to create a VM using oc-create command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:156
        Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1381
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      with given VM JSON from the Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152
        with given VM from the VM JSON /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158
          should succeed to launch a VMI using oc-patch command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:161
          Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1381
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      with given VM JSON from the Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152
        with given VM from the VM JSON /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158
          with given VMI from the VM /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:163
            should succeed to terminate the VMI using oc-patch command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:166
            Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1381
------------------------------
2018/08/06 08:52:45 read closing down: EOF
• [SLOW TEST:50.922 seconds]
Health Monitoring /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:37
  A VirtualMachineInstance with a watchdog device /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:56
    should be shut down when the watchdog expires /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:57
------------------------------
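The watchdog spec above depends on a watchdog device being declared on the VMI. A rough sketch, assuming the i6300esb watchdog type names from the current kubevirt.io/api core/v1 package (the device name is illustrative):

    package main

    import (
            "fmt"

            v1 "kubevirt.io/api/core/v1"
    )

    func main() {
            // An i6300esb watchdog whose expiry powers the guest off, which is
            // how a test can assert a phase change once the watchdog fires.
            vmi := &v1.VirtualMachineInstance{}
            vmi.Spec.Domain.Devices.Watchdog = &v1.Watchdog{
                    Name: "mywatchdog", // illustrative
                    WatchdogDevice: v1.WatchdogDevice{
                            I6300ESB: &v1.I6300ESBWatchdog{Action: v1.WatchdogActionPoweroff},
                    },
            }
            fmt.Println(vmi.Spec.Domain.Devices.Watchdog.Name)
    }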
•
------------------------------
• [SLOW TEST:26.617 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:72
    should start it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:78
------------------------------
• [SLOW TEST:24.148 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:72
    should attach virt-launcher to it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:84
------------------------------
••••
2018/08/06 08:54:18 read closing down: EOF
------------------------------
• [SLOW TEST:40.293 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:72
    with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:172
      should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        Alpine as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
2018/08/06 08:54:52 read closing down: EOF
• [SLOW TEST:33.956 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:72
    with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:172
      should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        Cirros as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:14.836 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:72
    with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:203
      without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:204
        should retry starting the VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:205
------------------------------
• [SLOW TEST:22.614 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:72
    with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:203
      without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:204
        should log warning and proceed once the secret is there /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:235
------------------------------
• [SLOW TEST:48.890 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:72
    when virt-launcher crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:283
      should be stopped and have Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:284
------------------------------
• [SLOW TEST:30.998 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:72
    when virt-handler crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:307
      should recover and continue management /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:308
------------------------------
• [SLOW TEST:8.362 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:72
    when virt-handler is responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:338
      should indicate that a node is ready for vmis /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:339
------------------------------
• [SLOW TEST:92.180 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:72
    when virt-handler is not responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:369
      the node controller should react /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:408
------------------------------
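The two boot-order table entries above flip which image boots first by setting the per-disk BootOrder field; the lowest number wins. A minimal sketch, assuming current kubevirt.io/api core/v1 types and illustrative disk names:

    package main

    import (
            "fmt"

            v1 "kubevirt.io/api/core/v1"
    )

    func main() {
            // BootOrder is a *uint per disk; swapping the 1 and 2 below is what
            // distinguishes the "Alpine as first boot" and "Cirros as first boot"
            // table entries.
            first, second := uint(1), uint(2)
            disks := []v1.Disk{
                    {Name: "alpine-disk", BootOrder: &first},
                    {Name: "cirros-disk", BootOrder: &second},
            }
            fmt.Println(*disks[0].BootOrder, *disks[1].BootOrder)
    }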
• [SLOW TEST:25.946 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:72
    with node tainted /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:461
      the vmi with tolerations should be scheduled /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:483
------------------------------
•
------------------------------
• [SLOW TEST:44.859 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:72
    with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:533
      should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-default /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:49.357 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:72
    with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:533
      should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-alternative /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.128 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:72
    VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:590
      should enable emulation in virt-launcher [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:602
      Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:598
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.110 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:72
    VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:590
      should be reflected in domain XML [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:639
      Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:598
------------------------------
••
------------------------------
• [SLOW TEST:24.166 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Delete a VirtualMachineInstance's Pod /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:748
    should result in the VirtualMachineInstance moving to a finalized state /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:749
------------------------------
• [SLOW TEST:52.730 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:780
    with an active pod. /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:781
      should result in pod being terminated /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:782
------------------------------
2018/08/06 09:02:39 read closing down: EOF
• [SLOW TEST:50.279 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:780
    with ACPI and 0 grace period seconds /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:806
      should result in vmi status failed /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:807
------------------------------
2018/08/06 09:03:29 read closing down: EOF
• [SLOW TEST:52.880 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:780
    with ACPI and some grace period seconds /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:831
      should result in vmi status succeeded /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:832
------------------------------
• [SLOW TEST:31.531 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:780
    with grace period greater than 0 /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:856
      should run graceful shutdown /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:857
------------------------------
• [SLOW TEST:35.215 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Killed VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:908
    should be in Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:909
------------------------------
• [SLOW TEST:34.120 seconds]
VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:51
  Killed VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:908
    should be left alone by virt-handler /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:936
------------------------------
• [SLOW TEST:13.659 seconds]
User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:13.164 seconds]
User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given an vm /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:13.465 seconds]
User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi preset /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:13.334 seconds]
User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi replica set /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
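The User Access table entries boil down to "may this identity perform this verb on this KubeVirt resource?". That question can be asked directly with a SelfSubjectAccessReview; a sketch using only standard client-go APIs (the namespace and resource are illustrative):

    package main

    import (
            "context"
            "fmt"

            authv1 "k8s.io/api/authorization/v1"
            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            "k8s.io/client-go/kubernetes"
            "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
            config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
            if err != nil {
                    panic(err)
            }
            cs, err := kubernetes.NewForConfig(config)
            if err != nil {
                    panic(err)
            }
            // Ask the apiserver whether the current identity may get VMIs --
            // the same yes/no the view/edit/admin checks reduce to.
            ssar := &authv1.SelfSubjectAccessReview{
                    Spec: authv1.SelfSubjectAccessReviewSpec{
                            ResourceAttributes: &authv1.ResourceAttributes{
                                    Group:     "kubevirt.io",
                                    Resource:  "virtualmachineinstances",
                                    Verb:      "get",
                                    Namespace: "kubevirt-test-default",
                            },
                    },
            }
            resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), ssar, metav1.CreateOptions{})
            if err != nil {
                    panic(err)
            }
            fmt.Println(resp.Status.Allowed)
    }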
2018/08/06 09:06:57 read closing down: EOF
• [SLOW TEST:49.680 seconds]
Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37
  A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
    with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
      with a cirros image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67
        should return that we are running cirros /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:68
------------------------------
2018/08/06 09:07:45 read closing down: EOF
• [SLOW TEST:48.408 seconds]
Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37
  A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
    with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
      with a fedora image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:77
        should return that we are running fedora /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:78
------------------------------
2018/08/06 09:08:22 read closing down: EOF
2018/08/06 09:08:23 read closing down: EOF
2018/08/06 09:08:23 read closing down: EOF
2018/08/06 09:08:24 read closing down: EOF
• [SLOW TEST:39.193 seconds]
Console
2018/08/06 09:08:24 read closing down: EOF
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37
  A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
    with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
      should be able to reconnect to console multiple times /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:87
------------------------------
• [SLOW TEST:26.747 seconds]
Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37
  A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
    with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
      should wait until the virtual machine is in running state and return a stream interface /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:103
------------------------------
• [SLOW TEST:30.386 seconds]
Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37
  A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
    with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
      should fail waiting for the virtual machine instance to be running /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:111
------------------------------
• [SLOW TEST:30.347 seconds]
Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37
  A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
    with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
      should fail waiting for the expecter /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:134
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.013 seconds]
Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  should succeed to start a vmi [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133
  Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1343
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.012 seconds]
Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  should succeed to stop a running vmi [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139
  Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1343
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.010 seconds]
Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with winrm connection [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150
    should have correct UUID /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192
  Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1343
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.010 seconds]
Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with winrm connection [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150
    should have pod IP /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208
  Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1343
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.011 seconds]
Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with kubectl command [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
    should succeed to start a vmi /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242
  Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1343
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.011 seconds]
Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with kubectl command [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
    should succeed to stop a vmi /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250
  Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1343
------------------------------
•
------------------------------
• [SLOW TEST:7.056 seconds]
VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should scale /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    to five, to six and then to zero replicas /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
••
------------------------------
• [SLOW TEST:26.299 seconds]
VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should update readyReplicas once VMIs are up /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:157
------------------------------
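Scaling a VirtualMachineInstanceReplicaSet, as the "to five, to six and then to zero replicas" entry does, is just a write to spec.replicas, exactly as with a plain ReplicaSet. A sketch assuming current kubevirt.io/api core/v1 types:

    package main

    import (
            "fmt"

            v1 "kubevirt.io/api/core/v1"
    )

    func main() {
            // The replica-set controller reconciles running VMIs to match
            // whatever spec.replicas says.
            replicas := int32(5)
            rs := &v1.VirtualMachineInstanceReplicaSet{}
            rs.Spec.Replicas = &replicas
            *rs.Spec.Replicas = 6 // scale up
            *rs.Spec.Replicas = 0 // and back down to zero
            fmt.Println(*rs.Spec.Replicas)
    }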
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should remove VMIs once it is marked for deletion /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:169 ------------------------------ • ------------------------------ • [SLOW TEST:5.654 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should not scale when paused and scale when resume /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:223 ------------------------------ • [SLOW TEST:6.069 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should remove the finished VM /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:279 ------------------------------ • ------------------------------ • [SLOW TEST:44.676 seconds] CloudInit UserData 2018/08/06 09:11:49 read closing down: EOF /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81 should have cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:82 ------------------------------ • [SLOW TEST:101.628 seconds] CloudInit UserData 2018/08/06 09:13:30 read closing down: EOF /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81 with injected ssh-key /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:92 should have ssh-key under authorized keys /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:93 ------------------------------ 2018/08/06 09:14:16 read closing down: EOF • [SLOW TEST:56.114 seconds] 2018/08/06 09:14:26 read closing down: EOF CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userData source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:118 should process provided cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:119 ------------------------------ 2018/08/06 09:15:11 read closing down: EOF • [SLOW TEST:44.492 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 should take user-data from k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:162 ------------------------------ volumedisk0 compute • [SLOW TEST:38.519 seconds] Configurations 2018/08/06 09:15:49 read closing down: EOF /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with 3 CPU cores /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:56 should report 3 cpu cores under guest OS /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:62 ------------------------------ • ------------------------------ • [SLOW TEST:25.223 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:164 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-2Mi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ S [SKIPPING] [0.256 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:164 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-1Gi [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 No node with hugepages hugepages-1Gi capacity /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:216 ------------------------------ • ------------------------------ • [SLOW TEST:126.923 seconds] Configurations 2018/08/06 09:18:24 read closing down: EOF /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:294 when CPU model defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:340 should report defined CPU model /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:341 ------------------------------ • [SLOW TEST:118.431 seconds] 2018/08/06 09:20:23 read closing down: EOF Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:294 when CPU model equals to passthrough /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:368 should report exactly the same model as node CPU /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:369 ------------------------------ • [SLOW TEST:116.126 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:294 when CPU model not defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:392 should report CPU model from libvirt capabilities /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:393 2018/08/06 09:22:19 read closing down: EOF ------------------------------ • [SLOW TEST:49.903 seconds] 2018/08/06 09:23:09 read closing down: EOF Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 New VirtualMachineInstance with all supported drives /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:413 should have all the device nodes /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:436 ------------------------------ Waiting for namespace kubevirt-test-default to be removed, this can take a while ... Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ... 
Summarizing 14 Failures: [Fail] Networking VirtualMachineInstance with custom interface model [It] should expose the right device type to the guest /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:376 [Fail] Networking VirtualMachineInstance with default interface model [It] should expose the right device type to the guest /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:365 [Fail] Networking VirtualMachineInstance with custom MAC address [It] should configure custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:407 [Fail] Networking VirtualMachineInstance with custom MAC address in non-conventional format [It] should configure custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:420 [Fail] Networking VirtualMachineInstance with custom MAC address and slirp interface [It] should configure custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:433 [Fail] Networking VirtualMachineInstance with disabled automatic attachment of interfaces [It] should not configure any external interfaces /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:448 [Fail] Storage Starting a VirtualMachineInstance with Alpine PVC should be successfully started [It] with Disk PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:64 [Fail] Storage Starting a VirtualMachineInstance with Alpine PVC should be successfully started [It] with CDRom PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:64 [Fail] Storage Starting a VirtualMachineInstance with Alpine PVC should be successfully started and stopped multiple times [It] with Disk PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:64 [Fail] Storage Starting a VirtualMachineInstance with Alpine PVC should be successfully started and stopped multiple times [It] with CDRom PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:64 [Fail] Storage Starting a VirtualMachineInstance With an emptyDisk defined [It] should create a writeable emptyDisk with the right capacity /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:64 [Fail] Storage Starting a VirtualMachineInstance With an emptyDisk defined and a specified serial number [It] should create a writeable emptyDisk with the specified serial number /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:64 [Fail] Storage Starting a VirtualMachineInstance With ephemeral alpine PVC [It] should be successfully started /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:64 [Fail] Storage Starting a VirtualMachineInstance With ephemeral alpine PVC [It] should not persist data /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:64 Ran 135 of 148 Specs in 4613.953 seconds FAIL! -- 121 Passed | 14 Failed | 0 Pending | 13 Skipped --- FAIL: TestTests (4613.97s) FAIL make: *** [functest] Error 1 + make cluster-down ./cluster/down.sh