+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ [[ k8s-1.10.3-release =~ openshift-.* ]]
+ [[ k8s-1.10.3-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.11.0
+ KUBEVIRT_PROVIDER=k8s-1.11.0
+ echo 1
automation/test.sh: line 46: /proc/sys/net/bridge/bridge-nf-call-iptables: Permission denied
+ true
+ echo 1
automation/test.sh: line 47: /proc/sys/net/ipv4/ip_forward: Permission denied
+ true
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/24 13:45:09 Waiting for host: 192.168.66.101:22
2018/07/24 13:45:12 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/24 13:45:20 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/24 13:45:25 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0724 13:45:26.613037 1240 feature_gate.go:230] feature gates: &{map[]}
I0724 13:45:26.709902 1240 kernel_validator.go:81] Validating kernel version
I0724 13:45:26.710060 1240 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 53.528045 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation
[bootstraptoken] using token: abcdef.1234567890123456
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:f796eadf5b92e5c6c28777c237d9ce821703b6eb4abbcd85ef87e2167a222ec9

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node/node01 untainted
2018/07/24 13:46:36 Waiting for host: 192.168.66.102:22
2018/07/24 13:46:39 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/24 13:46:47 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/24 13:46:52 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
	[WARNING FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
I0724 13:46:52.843891 1081 kernel_validator.go:81] Validating kernel version
I0724 13:46:52.844197 1081 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Sending file modes: C0755 38739968 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e + kubectl get nodes --no-headers + cluster/kubectl.sh get nodes --no-headers node01 Ready master 53s v1.11.0 node02 Ready 21s v1.11.0 + kubectl_rc=0 + '[' 0 -ne 0 ']' ++ kubectl get nodes --no-headers ++ grep NotReady ++ cluster/kubectl.sh get nodes --no-headers + '[' -n '' ']' + set -e + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 54s v1.11.0 node02 Ready 22s v1.11.0 + make cluster-sync ./cluster/build.sh Building ... Untagged: localhost:34521/kubevirt/virt-controller:devel Untagged: localhost:34521/kubevirt/virt-controller@sha256:6c0733e474c5f4b018965fbf29aea3b3afb7711169d9a27d84aafd56b2f06802 Deleted: sha256:5317014f628828e51561f261012b309bc4d0e8254d584bda133b518c3d56bb30 Deleted: sha256:fd8f29e218af076f8fa9da79920ec433cfe8d5730e591fc670860a6afbbd9759 Deleted: sha256:5afdf7da05212dc9ad1c6ce93f4a43aabbf8d574d10f3a2b114b4462677391c0 Deleted: sha256:67d31e147cafdb66021550d94e6611e75104694e7d44aeb6ef58fb9b1be78059 Untagged: localhost:34521/kubevirt/virt-launcher:devel Untagged: localhost:34521/kubevirt/virt-launcher@sha256:4451326bef5bef0f9dd79b1324de6b46e2acc070298711733b00e153487f6d15 Deleted: sha256:cd6358b6e8acfdfc4ee82a92e16aa285e3a493a5817f456f901edd8bd420bd90 Deleted: sha256:52d8722acc0724541d502dabe24d253c87bc81109fa57a453efd149ec91828bf Deleted: sha256:a6442b91c8d58079b9c913889fb2694d88f35853d7f1a33d5570cd172146dfc9 Deleted: sha256:31bc296b656a021e1f1c04e9cb23ca234a5943e4d7a8997e807a7481a4386624 Deleted: sha256:ff8fc88cd7dd1b1dfe4c280b97670f6fe97b121853ec1cbf248de135f2bd7a82 Deleted: sha256:f1b9011cf22a8fe5a403decdb9a3df6877805356db9570d1823f91488e58f9ea Deleted: sha256:0f80d65383eceaf82378165ca89545146a2febc8e1f552155516224bb3dc1b2f Deleted: sha256:806150bf21b29811a268b0b8a546b56f6c84f30bebd3ac39bd9091fcd01e5008 Deleted: sha256:8a272b10dd0d8b6cbfe36645fdb0302bd3cd224d42c9ae2582c744a9c21e5560 Deleted: sha256:60c3fc4eee06e8ef519d0bea02a0cf921d5263946dec1f7392ada75940c71616 Deleted: sha256:c319c0b0198a2dc381b71c4d94ff2cec7a000d7f87c01aedeff3fcc4f56903ca Deleted: sha256:59513b219705c77145864f290779f308cfc3f1ad19bfeed4dea27a83a43f1105 Untagged: localhost:34521/kubevirt/virt-handler:devel Untagged: localhost:34521/kubevirt/virt-handler@sha256:5d51ecdf9c4a7957c9600a10c23f30b149ec729c32bfc934aa6aac0f1945ffbf Deleted: sha256:d524a5141709d1f31946a83817e0512ba55749f802fb9dc43bc4c816217d965c Deleted: sha256:50e458edcf5ba843bf1ed3e2d1c7ef7f4acf03b8b50cf62add79225e2abe5e5f Deleted: sha256:f2792dac4c5c85534bd659c4c8dc0647ae9e28ee9ab2d2b88a5c899c35c6241f Deleted: sha256:73192f9b6b2102abaf854b4e0a01fa4fadce45b7dfe3a7b73a4d5d0fc6066d21 Untagged: localhost:34521/kubevirt/virt-api:devel Untagged: localhost:34521/kubevirt/virt-api@sha256:f8bbb1bac034fe9ccb7a5821d85d76846386d8e89a10eb386cbd81833ed9e5f5 Deleted: sha256:43ea75390a58af3d3afa8619cd639d6729ae09b2bf7c4868b1705d21c075893a Deleted: sha256:c41cfed12721c00a2a979c94d8a2de12597cb8cf6fcce05968e93d04cf3dd4e3 Deleted: sha256:5027c627fc242396a2d274968e64a569ac3080f68e2c09bb6617b85ed5da4fd8 Deleted: sha256:0ec64af9ab7bd4fff2c89cecd00b93acc0e25108ec2c43ed1e2a6c30b27950b2 Untagged: localhost:34521/kubevirt/subresource-access-test:devel Untagged: localhost:34521/kubevirt/subresource-access-test@sha256:604e6262b8fac22f09a01c9845e1f8b61153729eeb9a2f33887519ee8a268b88 Deleted: sha256:ea0c092ad861d6799ad69fea2fc5f6722b1e45b8101b76d683021e6b91f730b6 Deleted: sha256:afc13b4cd43bae65493858e162fd1c146ea0363c0fb831a820175c2bb641768b Deleted: 
sha256:50a71e74db479718dffdc91fb49092d955bf25f286658a83e81166f5b4d221cd Deleted: sha256:dacb8afc79fd6d4c87181a7480374a292879c3ca92b4ce29062938ffc0d4afaf Untagged: localhost:34521/kubevirt/example-hook-sidecar:devel Untagged: localhost:34521/kubevirt/example-hook-sidecar@sha256:fbd0e9cfb9bbd857b11fef96ec30a3e40075512c57fe87c94d9147c809570f63 Deleted: sha256:ebfa85056bcb67ee30e46a30be8527c24337066cc315b1898f47f9c8abd3c12a Deleted: sha256:0f691707f1b4a2d7d9bb75613760cc90e3aebada13cff9eafb0a59c123ffac49 Deleted: sha256:8e57171f200ec53d686858679cb62b38a9949ac0047860faebe99bcc284de283 Deleted: sha256:3f5e2a70c5ef2f89c0708d0f23960d0e8c8e43bdb90d8c000b56aee14e8cbb3d sha256:7fb8539d32771bf74786d31102b8c102fc61586b172276b4710c6944077751f4 go version go1.10 linux/amd64 go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh sha256:7fb8539d32771bf74786d31102b8c102fc61586b172276b4710c6944077751f4 go version go1.10 linux/amd64 go version go1.10 linux/amd64 find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory Compiling tests... compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 40.35 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> e9589b9dbfb3 Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> 6526953b7273 Step 5/8 : USER 1001 ---> Using cache ---> 0da81e671cc6 Step 6/8 : COPY virt-controller /usr/bin/virt-controller ---> 61f6ad891632 Removing intermediate container c7b82ef61580 Step 7/8 : ENTRYPOINT /usr/bin/virt-controller ---> Running in 334c8b22daaf ---> 9f05141c87de Removing intermediate container 334c8b22daaf Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-controller" '' ---> Running in c36dfd384715 ---> c97075e9bdb5 Removing intermediate container c36dfd384715 Successfully built c97075e9bdb5 Sending build context to Docker daemon 42.63 MB Step 1/10 : FROM kubevirt/libvirt:4.2.0 ---> 5f0bfe81a3e0 Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 8826ac178c51 Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> 5eb474bfa821 Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher ---> a56916b8c671 Removing intermediate container baf6896221b3 Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> 3dc160e7546d Removing intermediate container fe363a69e774 Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64 ---> Running in eaf60c3cadc7  ---> edabe14e5259 Removing intermediate container eaf60c3cadc7 Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher ---> Running in b330d3e05e10  ---> 59925ae49858 Removing intermediate container b330d3e05e10 Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector /usr/share/kubevirt/virt-launcher/ ---> 1f012fe2ef27 Removing intermediate container b97cba207314 Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh ---> Running in dd197b0731e6 ---> 8b8a13885d22 Removing intermediate container dd197b0731e6 Step 10/10 : LABEL 
"kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-launcher" '' ---> Running in ae1876dfe75e ---> 6930c4ce6387 Removing intermediate container ae1876dfe75e Successfully built 6930c4ce6387 Sending build context to Docker daemon 41.65 MB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/5 : COPY virt-handler /usr/bin/virt-handler ---> 560e746b237b Removing intermediate container 825fa13bbfb6 Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in 4664df1a62cc ---> f13c85841e65 Removing intermediate container 4664df1a62cc Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-handler" '' ---> Running in c1c5f2ea111f ---> 6e57849c1fdc Removing intermediate container c1c5f2ea111f Successfully built 6e57849c1fdc Sending build context to Docker daemon 38.75 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 1a58ff1483fa Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> 87e30c5b4065 Step 5/8 : USER 1001 ---> Using cache ---> e889af541bd0 Step 6/8 : COPY virt-api /usr/bin/virt-api ---> 81563c31872d Removing intermediate container 9b3757fd10d7 Step 7/8 : ENTRYPOINT /usr/bin/virt-api ---> Running in 6066393c8b5c ---> bb89bc1b4135 Removing intermediate container 6066393c8b5c Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-api" '' ---> Running in b46e21d8ec61 ---> 5358bd9971b8 Removing intermediate container b46e21d8ec61 Successfully built 5358bd9971b8 Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:28 ---> cc510acfcd70 Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/7 : ENV container docker ---> Using cache ---> 6e6b2ef85e92 Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> 8e1d737ded1f Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> 104e48aa676f Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> 4ed9f69e6653 Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.10.3-release1" '' ---> Using cache ---> ed1b9e5567fa Successfully built ed1b9e5567fa Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/5 : ENV container docker ---> Using cache ---> 6e6b2ef85e92 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> d130857891a9 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "vm-killer" '' ---> Using cache ---> 8542ee073f34 Successfully built 8542ee073f34 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> 496290160351 Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> 3b36b527fef8 Step 3/7 : ENV container docker ---> Using cache ---> b3ada414d649 Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> 337be6171fcb Step 5/7 : ADD entry-point.sh / ---> Using cache ---> a98a961fa5a1 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 19baf5d1aab8 Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' 
"registry-disk-v1alpha" '' ---> Using cache ---> c5beb7cd5a8d Successfully built c5beb7cd5a8d Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:35107/kubevirt/registry-disk-v1alpha:devel ---> c5beb7cd5a8d Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> db7121890616 Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> bc61b0ba5de9 Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release1" '' ---> Using cache ---> 945efb2765fb Successfully built 945efb2765fb Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:35107/kubevirt/registry-disk-v1alpha:devel ---> c5beb7cd5a8d Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 8690e3a5f7c3 Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> 48f51a1ee0e4 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release1" '' ---> Using cache ---> 4fa62bd9d59b Successfully built 4fa62bd9d59b Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:35107/kubevirt/registry-disk-v1alpha:devel ---> c5beb7cd5a8d Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 8690e3a5f7c3 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> 836b07aa8726 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release1" '' ---> Using cache ---> cf2d0ee1d2d4 Successfully built cf2d0ee1d2d4 Sending build context to Docker daemon 35.56 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> f9cd90a6a0ef Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> df6f2d83c1d6 Step 5/8 : USER 1001 ---> Using cache ---> 56a7b7e6b8ff Step 6/8 : COPY subresource-access-test /subresource-access-test ---> 07f01db10e01 Removing intermediate container 876f1d63db39 Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in 140832ccd7cc ---> 560b6f85ae66 Removing intermediate container 140832ccd7cc Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "subresource-access-test" '' ---> Running in 759c8c56ec3f ---> 03b7da2e696f Removing intermediate container 759c8c56ec3f Successfully built 03b7da2e696f Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:28 ---> cc510acfcd70 Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 82fe13c41cb7 Step 3/9 : ENV container docker ---> Using cache ---> 6e6b2ef85e92 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> c1e9e769c4ba Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> 6729c465203a Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 2aee087083e8 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> e3795172dd73 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 0de2fc4b917f Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "winrmcli" '' ---> Using cache ---> 
2e42a0f7628f Successfully built 2e42a0f7628f Sending build context to Docker daemon 36.77 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> b730b4ed65df Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar ---> eec5d75a0cac Removing intermediate container 2b705d7ba970 Step 4/5 : ENTRYPOINT /example-hook-sidecar ---> Running in 5faf33c2de60 ---> f96738fcb41f Removing intermediate container 5faf33c2de60 Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-k8s-1.10.3-release1" '' ---> Running in 126d6daca804 ---> ff68ffa87a7a Removing intermediate container 126d6daca804 Successfully built ff68ffa87a7a hack/build-docker.sh push The push refers to a repository [localhost:35107/kubevirt/virt-controller] 6cecddb356f5: Preparing ff9b9e61b9df: Preparing 891e1e4ef82a: Preparing ff9b9e61b9df: Pushed 6cecddb356f5: Pushed 891e1e4ef82a: Pushed devel: digest: sha256:a5c0dbf1d12398c0e15fba785ff364c7426ff37f0e150bd036c9177ab1ee84da size: 949 The push refers to a repository [localhost:35107/kubevirt/virt-launcher] b14273a5a884: Preparing 87589ae34470: Preparing 87e606ea19e4: Preparing db6d7d16f318: Preparing 94b6f40eac4f: Preparing cfcba35fba84: Preparing da38cf808aa5: Preparing b83399358a92: Preparing 186d8b3e4fd8: Preparing fa6154170bf5: Preparing 5eefb9960a36: Preparing 891e1e4ef82a: Preparing 186d8b3e4fd8: Waiting b83399358a92: Waiting cfcba35fba84: Waiting 5eefb9960a36: Waiting db6d7d16f318: Pushed 87589ae34470: Pushed b14273a5a884: Pushed da38cf808aa5: Pushed b83399358a92: Pushed 186d8b3e4fd8: Pushed fa6154170bf5: Pushed 891e1e4ef82a: Mounted from kubevirt/virt-controller 87e606ea19e4: Pushed cfcba35fba84: Pushed 94b6f40eac4f: Pushed 5eefb9960a36: Pushed devel: digest: sha256:327f8cd0072c786560eefb5ace161b725ce6bb3d2a03335bab7705081ee575a0 size: 2828 The push refers to a repository [localhost:35107/kubevirt/virt-handler] 9685eb56ecd5: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-launcher 9685eb56ecd5: Pushed devel: digest: sha256:8d89679aa8716d967d4bcf1791f56e60068897d7e33f0b3d91313e9f706ac5d8 size: 741 The push refers to a repository [localhost:35107/kubevirt/virt-api] 810a6f76fa41: Preparing 5f1414e2d326: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-handler 5f1414e2d326: Pushed 810a6f76fa41: Pushed devel: digest: sha256:75183df58892571f09684076f8d0baca05ed091a07f17dd2de86a9d1dc373292 size: 948 The push refers to a repository [localhost:35107/kubevirt/disks-images-provider] 2e0da09ca39e: Preparing 4fe8becbb60f: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-api 2e0da09ca39e: Pushed 4fe8becbb60f: Pushed devel: digest: sha256:428f090d8d06e233f2edc2b6077dd369cae22d87e862812dc882926204603b3a size: 948 The push refers to a repository [localhost:35107/kubevirt/vm-killer] 7b031fa3032f: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/disks-images-provider 7b031fa3032f: Pushed devel: digest: sha256:fd46d6703d94df7ab3b04ef08aa58c8850da619f4ba12cfd5db19149994579a3 size: 740 The push refers to a repository [localhost:35107/kubevirt/registry-disk-v1alpha] bfd12fa374fa: Preparing 18ac8ad2aee9: Preparing 132d61a890c5: Preparing bfd12fa374fa: Pushed 18ac8ad2aee9: Pushed 132d61a890c5: Pushed devel: digest: sha256:aa11e872f32769b6a6ed5fab4277ee0852f7bcfaa44e1c1a28cc21f3dfcd3ea5 size: 948 The push refers to a repository [localhost:35107/kubevirt/cirros-registry-disk-demo] 56d0e79c7554: Preparing bfd12fa374fa: Preparing 
18ac8ad2aee9: Preparing 132d61a890c5: Preparing 18ac8ad2aee9: Waiting bfd12fa374fa: Waiting 132d61a890c5: Waiting bfd12fa374fa: Mounted from kubevirt/registry-disk-v1alpha 18ac8ad2aee9: Mounted from kubevirt/registry-disk-v1alpha 132d61a890c5: Mounted from kubevirt/registry-disk-v1alpha 56d0e79c7554: Pushed devel: digest: sha256:aea58cf02cbcc46523cccfb495a9f32c64fb41127db97c1d1fa62f646a0b1ac2 size: 1160 The push refers to a repository [localhost:35107/kubevirt/fedora-cloud-registry-disk-demo] ada5ea15f676: Preparing bfd12fa374fa: Preparing 18ac8ad2aee9: Preparing 132d61a890c5: Preparing 18ac8ad2aee9: Waiting 132d61a890c5: Waiting bfd12fa374fa: Mounted from kubevirt/cirros-registry-disk-demo 18ac8ad2aee9: Mounted from kubevirt/cirros-registry-disk-demo 132d61a890c5: Mounted from kubevirt/cirros-registry-disk-demo ada5ea15f676: Pushed devel: digest: sha256:3cdf3a8869341b4ee71f143694408db85f5fa8b052e4bec6984b37be9c186dd0 size: 1161 The push refers to a repository [localhost:35107/kubevirt/alpine-registry-disk-demo] 11d4dee517f7: Preparing bfd12fa374fa: Preparing 18ac8ad2aee9: Preparing 132d61a890c5: Preparing bfd12fa374fa: Mounted from kubevirt/fedora-cloud-registry-disk-demo 132d61a890c5: Mounted from kubevirt/fedora-cloud-registry-disk-demo 18ac8ad2aee9: Mounted from kubevirt/fedora-cloud-registry-disk-demo 11d4dee517f7: Pushed devel: digest: sha256:e9ab668c4068071f10b24b01b6bbf2be7107e9f93be09a8a40f23045b5c64bf6 size: 1160 The push refers to a repository [localhost:35107/kubevirt/subresource-access-test] 9fd5096e855d: Preparing 3c1237181850: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/vm-killer 3c1237181850: Pushed 9fd5096e855d: Pushed devel: digest: sha256:6ec8aadbf7025b38a01d09b1c2d2aa69a5307fede3f26c212be3efc0ce89612e size: 948 The push refers to a repository [localhost:35107/kubevirt/winrmcli] bf2bff760365: Preparing 589098974698: Preparing 6e22155a44ef: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/subresource-access-test bf2bff760365: Pushed 6e22155a44ef: Pushed 589098974698: Pushed devel: digest: sha256:0e30bc859a2102ebd62595bd8ce2fe07ef43d2d2b3f9a07bedadfd8cb24a7286 size: 1165 The push refers to a repository [localhost:35107/kubevirt/example-hook-sidecar] 98ae55e3b6d0: Preparing 39bae602f753: Preparing 98ae55e3b6d0: Pushed 39bae602f753: Pushed devel: digest: sha256:c9123551b8e5cc64082109a66f2a0b5b45408abea5a0017acda604829139b10a size: 740 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ 
MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release1 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-105-g9315d35 ++ KUBEVIRT_VERSION=v0.7.0-105-g9315d35 + source cluster/k8s-1.11.0/provider.sh ++ set -e ++ image=k8s-1.11.0@sha256:1e936a35d84102f96253002e463c6142c3422f4d6012ef4bdcc5e9cd6a63d359 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.11.0.sh ++ source hack/config-provider-k8s-1.11.0.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl +++ docker_prefix=localhost:35107/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... 
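The long cleanup trace that follows is mechanical. A minimal sketch of the loop ./cluster/clean.sh is running here, using the same kubeconfig, kubectl binary and kubevirt.io label selector that appear in the trace; the resource kinds are abridged to the ones actually deleted below, and "No resources found" is the expected answer on a freshly built cluster.

  #!/bin/bash
  # Sketch of the per-namespace KubeVirt cleanup traced below.
  export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
  namespaces=(default kube-system)
  kinds=(apiservices deployment rs services validatingwebhookconfiguration secrets
         pv pvc ds customresourcedefinitions pods clusterrolebinding rolebinding
         roles clusterroles serviceaccounts)
  for ns in "${namespaces[@]}"; do
    for kind in "${kinds[@]}"; do
      # Every object KubeVirt installs carries a kubevirt.io label, so one
      # selector covers them all.
      cluster/k8s-1.11.0/.kubectl -n "$ns" delete "$kind" -l kubevirt.io
    done
  done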
+ cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete pods -l kubevirt.io No resources found + 
_kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ cluster/k8s-1.11.0/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io No resources found. Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export 
KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig ++ cluster/k8s-1.11.0/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io No resources found. 
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release1 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-105-g9315d35 ++ KUBEVIRT_VERSION=v0.7.0-105-g9315d35 + source cluster/k8s-1.11.0/provider.sh ++ set -e ++ image=k8s-1.11.0@sha256:1e936a35d84102f96253002e463c6142c3422f4d6012ef4bdcc5e9cd6a63d359 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ KUBEVIRT_PROVIDER=k8s-1.11.0 ++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.11.0.sh ++ source hack/config-provider-k8s-1.11.0.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig +++ 
kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl +++ docker_prefix=localhost:35107/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... + [[ -z k8s-1.10.3-release ]] + [[ k8s-1.10.3-release =~ .*-dev ]] + [[ k8s-1.10.3-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole.rbac.authorization.k8s.io/kubevirt.io:admin created clusterrole.rbac.authorization.k8s.io/kubevirt.io:edit created clusterrole.rbac.authorization.k8s.io/kubevirt.io:view created serviceaccount/kubevirt-apiserver created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver-auth-delegator created rolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created role.rbac.authorization.k8s.io/kubevirt-apiserver created clusterrole.rbac.authorization.k8s.io/kubevirt-apiserver created clusterrole.rbac.authorization.k8s.io/kubevirt-controller created serviceaccount/kubevirt-controller created serviceaccount/kubevirt-privileged created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller-cluster-admin created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-privileged-cluster-admin created clusterrole.rbac.authorization.k8s.io/kubevirt.io:default created clusterrolebinding.rbac.authorization.k8s.io/kubevirt.io:default created service/virt-api created deployment.extensions/virt-api created deployment.extensions/virt-controller created daemonset.extensions/virt-handler created customresourcedefinition.apiextensions.k8s.io/virtualmachineinstances.kubevirt.io created customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancereplicasets.kubevirt.io created customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancepresets.kubevirt.io created customresourcedefinition.apiextensions.k8s.io/virtualmachines.kubevirt.io created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig + cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim/disk-alpine created persistentvolume/host-path-disk-alpine created persistentvolumeclaim/disk-custom created 
persistentvolume/host-path-disk-custom created daemonset.extensions/disks-images-provider created serviceaccount/kubevirt-testing created clusterrolebinding.rbac.authorization.k8s.io/kubevirt-testing-cluster-admin created + [[ k8s-1.11.0 =~ os-* ]] + echo Done Done + namespaces=(kube-system default) + [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]] + timeout=300 + sample=30 + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'virt-api-bcc6b587d-k7m6n 0/1 ContainerCreating 0 3s virt-api-bcc6b587d-rjspx 0/1 ContainerCreating 0 3s virt-controller-67dcdd8464-hcfft 0/1 ContainerCreating 0 3s virt-controller-67dcdd8464-zgqqp 0/1 ContainerCreating 0 3s virt-handler-6n5bx 0/1 ContainerCreating 0 3s virt-handler-75qw4 0/1 ContainerCreating 0 3s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running disks-images-provider-6dx2m 0/1 Pending 0 1s virt-api-bcc6b587d-k7m6n 0/1 ContainerCreating 0 4s virt-api-bcc6b587d-rjspx 0/1 ContainerCreating 0 4s virt-controller-67dcdd8464-hcfft 0/1 ContainerCreating 0 4s virt-controller-67dcdd8464-zgqqp 0/1 ContainerCreating 0 4s virt-handler-6n5bx 0/1 ContainerCreating 0 4s virt-handler-75qw4 0/1 ContainerCreating 0 4s + sleep 30 + current_time=30 + '[' 30 -gt 300 ']' ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n '' ']' + current_time=0 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n false ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... 
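The polling that starts here is a pair of simple wait loops. A condensed sketch, using the same timeout, sample interval and kubectl invocations visible in the trace: first wait until no pod in kube-system is outside the Running state, then until no container reports ready=false.

  #!/bin/bash
  # Condensed sketch of the readiness polling around this point in the trace.
  ns=kube-system
  timeout=300
  sample=30

  current=0
  while [ -n "$(cluster/kubectl.sh get pods -n "$ns" --no-headers | grep -v Running)" ]; do
    echo 'Waiting for kubevirt pods to enter the Running state ...'
    sleep "$sample"
    current=$((current + sample))
    [ "$current" -gt "$timeout" ] && exit 1
  done

  current=0
  while [ -n "$(cluster/kubectl.sh get pods -n "$ns" '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers | grep false)" ]; do
    echo 'Waiting for KubeVirt containers to become ready ...'
    sleep "$sample"
    current=$((current + sample))
    [ "$current" -gt "$timeout" ] && exit 1
  done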
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + grep false + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + true + sleep 30 + current_time=30 + '[' 30 -gt 300 ']' ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false + '[' -n '' ']' + kubectl get pods -n kube-system + cluster/kubectl.sh get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-78fcdf6894-92cs2 1/1 Running 0 14m coredns-78fcdf6894-n4q58 1/1 Running 0 14m disks-images-provider-6dx2m 1/1 Running 0 1m disks-images-provider-plvsg 1/1 Running 0 1m etcd-node01 1/1 Running 0 13m kube-apiserver-node01 1/1 Running 0 13m kube-controller-manager-node01 1/1 Running 0 13m kube-flannel-ds-gd88d 1/1 Running 0 14m kube-flannel-ds-nrsfd 1/1 Running 0 14m kube-proxy-2rzxl 1/1 Running 0 14m kube-proxy-5sqbk 1/1 Running 0 14m kube-scheduler-node01 1/1 Running 0 13m virt-api-bcc6b587d-k7m6n 1/1 Running 0 1m virt-api-bcc6b587d-rjspx 1/1 Running 0 1m virt-controller-67dcdd8464-hcfft 1/1 Running 0 1m virt-controller-67dcdd8464-zgqqp 1/1 Running 0 1m virt-handler-6n5bx 1/1 Running 0 1m virt-handler-75qw4 1/1 Running 0 1m + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n default --no-headers ++ grep -v Running ++ cluster/kubectl.sh get pods -n default --no-headers No resources found. + '[' -n '' ']' + current_time=0 ++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false + '[' -n '' ']' + kubectl get pods -n default + cluster/kubectl.sh get pods -n default No resources found. + kubectl version + cluster/kubectl.sh version Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"} + ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml' + [[ k8s-1.10.3-release =~ windows.* ]] + FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml' + make functest hack/dockerized "hack/build-func-tests.sh" sha256:7fb8539d32771bf74786d31102b8c102fc61586b172276b4710c6944077751f4 go version go1.10 linux/amd64 go version go1.10 linux/amd64 Compiling tests... 
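make functest compiles the Ginkgo suite and runs it with the arguments assembled above. A hedged example of a narrower re-run against the same cluster: --ginkgo.focus is a standard Ginkgo flag and, unlike --ginkgo.noColor and --junit-output, does not appear in this trace, so assuming the suite accepts it the same way is exactly that, an assumption.

  # Example: re-run only the Expose specs, assuming FUNC_TEST_ARGS is forwarded
  # to the test binary the same way the flags above are.
  export KUBEVIRT_PROVIDER=k8s-1.11.0
  FUNC_TEST_ARGS='--ginkgo.noColor --ginkgo.focus=Expose' make functest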
compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1532440890 Will run 144 of 144 specs ••••••••••• ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.008 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to start a vmi [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.048 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to stop a running vmi [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.016 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have correct UUID /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.015 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have pod IP /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.012 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to start a vmi /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.020 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to stop a vmi /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ •Service cluster-ip-vm successfully exposed for virtualmachineinstance testvmidthks ------------------------------ • [SLOW TEST:53.127 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:68 Should expose a Cluster IP service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:71 ------------------------------ Service node-port-vm successfully exposed for virtualmachineinstance testvmidthks • [SLOW TEST:9.347 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM 
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose NodePort service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:98 Should expose a NodePort service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:103 ------------------------------ Service cluster-ip-udp-vm successfully exposed for virtualmachineinstance testvmijdbbq • [SLOW TEST:54.668 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:140 Expose ClusterIP UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:147 Should expose a ClusterIP service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:151 ------------------------------ Service node-port-udp-vm successfully exposed for virtualmachineinstance testvmijdbbq • [SLOW TEST:10.358 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:140 Expose NodePort UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:179 Should expose a NodePort service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:184 ------------------------------ Service cluster-ip-vmrs successfully exposed for vmirs replicaset2djn4 • [SLOW TEST:61.719 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM replica set /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:227 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:260 Should create a ClusterIP service on VMRS and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:264 ------------------------------ Service cluster-ip-ovm successfully exposed for virtualmachine testvmih98hb VM testvmih98hb was scheduled to start • [SLOW TEST:48.255 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on an Offline VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:292 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:336 Connect to ClusterIP services that was set when VM was offline /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:337 ------------------------------ • [SLOW TEST:37.988 seconds] LeaderElection /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:43 Start a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:53 when the controller pod is not running /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:54 should success /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:55 ------------------------------ • [SLOW TEST:81.595 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting and stopping the same VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91 should success multiple times /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92 ------------------------------ • [SLOW TEST:17.901 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112 should 
not modify the spec on status update /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113 ------------------------------ • [SLOW TEST:28.551 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting multiple VMIs /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130 should success /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131 ------------------------------ • [SLOW TEST:18.944 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should successfully start with hook sidecar annotation /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:60 ------------------------------ • [SLOW TEST:19.030 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should call Collect and OnDefineDomain on the hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:67 ------------------------------ • [SLOW TEST:20.914 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should update domain XML with SM BIOS properties /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:83 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds] Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42 Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60 with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193 should succeed to generate a VM JSON file using oc-process command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:150 Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1383 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds] Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42 Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60 with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193 with given VM JSON from the Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152 should succeed to create a VM using oc-create command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:156 Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1383 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds] Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42 Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60 with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193 with given VM JSON from the Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152 with given VM from the VM JSON 
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158 should succeed to launch a VMI using oc-patch command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:161 Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1383 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds] Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42 Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60 with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193 with given VM JSON from the Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152 with given VM from the VM JSON /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158 with given VMI from the VM /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:163 should succeed to terminate the VMI using oc-patch command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:166 Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1383 ------------------------------ • [SLOW TEST:5.069 seconds] Subresource Api /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37 Rbac Authorization /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48 with correct permissions /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:51 should be allowed to access subresource endpoint /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:52 ------------------------------ •••• ------------------------------ • [SLOW TEST:7.044 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should scale /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 to five, to six and then to zero replicas /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •• ------------------------------ • [SLOW TEST:18.569 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should update readyReplicas once VMIs are up /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:157 ------------------------------ •• ------------------------------ • [SLOW TEST:5.545 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should not scale when paused and scale when resume /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:223 ------------------------------ • ------------------------------ • [SLOW TEST:17.864 seconds] VNC /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:54 with VNC connection /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:62 should allow accessing the VNC device /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:64 ------------------------------ ••• ------------------------------ • [SLOW TEST:17.225 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 should start it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:76 ------------------------------ • [SLOW TEST:17.507 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 should attach virt-launcher to it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:82 ------------------------------ •••• ------------------------------ • [SLOW TEST:38.006 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:170 should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 Alpine as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:29.794 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:170 should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 Cirros as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:15.364 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:201 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:202 should retry starting the VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:203 ------------------------------ • [SLOW TEST:18.899 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:201 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:202 should log warning and proceed once the secret is there /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:233 ------------------------------ • [SLOW TEST:39.465 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 when virt-launcher crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:281 should be stopped and have Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:282 ------------------------------ • [SLOW TEST:26.697 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 when virt-handler crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:304 should recover and continue management /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:305 ------------------------------ • [SLOW TEST:10.588 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 when virt-handler is responsive 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:335 should indicate that a node is ready for vmis /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:336 ------------------------------ • [SLOW TEST:85.790 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 when virt-handler is not responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:366 the node controller should react /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:405 ------------------------------ • [SLOW TEST:17.147 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with node tainted /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:458 the vmi with tolerations should be scheduled /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:480 ------------------------------ • ------------------------------ S [SKIPPING] [0.456 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:530 should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 kubevirt-test-default [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Skip log query tests for JENKINS ci test environment /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:535 ------------------------------ S [SKIPPING] [0.147 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:530 should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 kubevirt-test-alternative [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Skip log query tests for JENKINS ci test environment /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:535 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.216 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:591 should enable emulation in virt-launcher [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:603 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:599 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.254 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:591 should be reflected in domain 
XML [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:640 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:599 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.167 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:591 should request a TUN device but not KVM [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:684 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:599 ------------------------------ •••• ------------------------------ • [SLOW TEST:19.716 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Delete a VirtualMachineInstance's Pod /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:836 should result in the VirtualMachineInstance moving to a finalized state /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:837 ------------------------------ • [SLOW TEST:37.056 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:868 with an active pod. /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869 should result in pod being terminated /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:870 ------------------------------ • [SLOW TEST:24.003 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:868 with grace period greater than 0 /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:893 should run graceful shutdown /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:894 ------------------------------ Pod name: disks-images-provider-6dx2m Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-plvsg Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-k7m6n Pod phase: Running level=info timestamp=2018-07-24T14:16:53.584884Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/24 14:16:58 http: TLS handshake error from 10.244.0.1:34604: EOF level=info timestamp=2018-07-24T14:17:04.689133Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/24 14:17:08 http: TLS handshake error from 10.244.0.1:34664: EOF level=info timestamp=2018-07-24T14:17:12.603626Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/24 14:17:18 http: TLS handshake error from 10.244.0.1:34724: EOF level=info timestamp=2018-07-24T14:17:23.677943Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info 
timestamp=2018-07-24T14:17:23.708555Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/24 14:17:28 http: TLS handshake error from 10.244.0.1:34784: EOF level=info timestamp=2018-07-24T14:17:34.799051Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-24T14:17:36.701869Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-24T14:17:36.704702Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/24 14:17:38 http: TLS handshake error from 10.244.0.1:34846: EOF level=info timestamp=2018-07-24T14:17:42.776703Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/24 14:17:48 http: TLS handshake error from 10.244.0.1:34906: EOF Pod name: virt-api-bcc6b587d-rjspx Pod phase: Running 2018/07/24 14:16:12 http: TLS handshake error from 10.244.1.1:58320: EOF 2018/07/24 14:16:22 http: TLS handshake error from 10.244.1.1:58326: EOF level=info timestamp=2018-07-24T14:16:27.176235Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-24T14:16:27.199584Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-24T14:16:27.646411Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/24 14:16:32 http: TLS handshake error from 10.244.1.1:58332: EOF 2018/07/24 14:16:42 http: TLS handshake error from 10.244.1.1:58338: EOF 2018/07/24 14:16:52 http: TLS handshake error from 10.244.1.1:58344: EOF level=info timestamp=2018-07-24T14:16:57.451504Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/24 14:17:02 http: TLS handshake error from 10.244.1.1:58350: EOF 2018/07/24 14:17:12 http: TLS handshake error from 10.244.1.1:58356: EOF 2018/07/24 14:17:22 http: TLS handshake error from 10.244.1.1:58362: EOF level=info timestamp=2018-07-24T14:17:27.409216Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/24 14:17:32 http: TLS handshake error from 10.244.1.1:58368: EOF 2018/07/24 14:17:42 http: TLS handshake error from 10.244.1.1:58374: EOF Pod name: virt-controller-67dcdd8464-hcfft Pod phase: Running level=info timestamp=2018-07-24T14:16:12.774468Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmisgkc9 kind= uid=1e8b7d22-8f4c-11e8-845a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-24T14:16:12.777786Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmisgkc9 kind= uid=1e8b7d22-8f4c-11e8-845a-525500d15501 msg="Marking 
VirtualMachineInstance as initialized" level=info timestamp=2018-07-24T14:16:13.068208Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmisgkc9\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmisgkc9, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 1e8b7d22-8f4c-11e8-845a-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmisgkc9" level=info timestamp=2018-07-24T14:16:13.170600Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmidz98d kind= uid=1ec8b646-8f4c-11e8-845a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-24T14:16:13.170865Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmidz98d kind= uid=1ec8b646-8f4c-11e8-845a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-24T14:16:13.378839Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmidz98d\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmidz98d, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 1ec8b646-8f4c-11e8-845a-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmidz98d" level=info timestamp=2018-07-24T14:16:13.954493Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmisvdlq kind= uid=1f40536b-8f4c-11e8-845a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-24T14:16:13.955007Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmisvdlq kind= uid=1f40536b-8f4c-11e8-845a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-24T14:16:34.013926Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6lgf9 kind= uid=2b2eb7f7-8f4c-11e8-845a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-24T14:16:34.021621Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi6lgf9 kind= uid=2b2eb7f7-8f4c-11e8-845a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-24T14:16:34.604818Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi6lgf9\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi6lgf9" level=info timestamp=2018-07-24T14:17:10.969806Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmir9nwh kind= uid=4138c4e7-8f4c-11e8-845a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-24T14:17:10.971878Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmir9nwh kind= uid=4138c4e7-8f4c-11e8-845a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-24T14:17:34.988130Z 
pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigv7sz kind= uid=4f84e675-8f4c-11e8-845a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-24T14:17:34.997819Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmigv7sz kind= uid=4f84e675-8f4c-11e8-845a-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-67dcdd8464-r5jpw Pod phase: Running level=info timestamp=2018-07-24T14:05:45.473567Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-6bd9z Pod phase: Running level=info timestamp=2018-07-24T14:16:50.319357Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi6lgf9 kind= uid=2b2eb7f7-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:16:50.319516Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmi6lgf9 kind= uid=2b2eb7f7-8f4c-11e8-845a-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-24T14:16:50.319712Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi6lgf9 kind= uid=2b2eb7f7-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:16:51.031078Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmi6lgf9 kind= uid=2b2eb7f7-8f4c-11e8-845a-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-24T14:16:51.032219Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi6lgf9 kind= uid=2b2eb7f7-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:16:51.032570Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmi6lgf9 kind= uid=2b2eb7f7-8f4c-11e8-845a-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-24T14:16:51.032856Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi6lgf9 kind= uid=2b2eb7f7-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:17:09.439706Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmi6lgf9 kind= uid=2b2eb7f7-8f4c-11e8-845a-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-24T14:17:09.441428Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi6lgf9 kind= uid=2b2eb7f7-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:17:09.482436Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmi6lgf9 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-24T14:17:09.482795Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi6lgf9 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
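The "reenqueuing VirtualMachineInstance" entries above are virt-controller reacting to update conflicts ("the object has been modified; please apply your changes to the latest version and try again") and to the storage precondition error by putting the key back on its work queue; they are logged at info level rather than treated as failures. If needed, they can be pulled straight out of the controller log, e.g. (pod name and namespace taken from this run):

cluster/kubectl.sh logs -n kube-system virt-controller-67dcdd8464-hcfft | grep reenqueuing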
level=info timestamp=2018-07-24T14:17:50.555805Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmigv7sz kind= uid=4f84e675-8f4c-11e8-845a-525500d15501 msg="Processing vmi update" level=error timestamp=2018-07-24T14:17:50.578040Z pos=vm.go:397 component=virt-handler namespace=kubevirt-test-default name=testvmigv7sz kind= uid=4f84e675-8f4c-11e8-845a-525500d15501 reason="server error. command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')" msg="Synchronizing the VirtualMachineInstance failed." level=info timestamp=2018-07-24T14:17:50.672875Z pos=vm.go:251 component=virt-handler reason="server error. command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmigv7sz" level=info timestamp=2018-07-24T14:17:50.673437Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmigv7sz kind= uid=4f84e675-8f4c-11e8-845a-525500d15501 msg="Processing vmi update" Pod name: virt-handler-m2fl9 Pod phase: Running level=info timestamp=2018-07-24T14:17:34.461612Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind= uid=4138c4e7-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:17:34.462193Z pos=vm.go:342 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind= uid=4138c4e7-8f4c-11e8-845a-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp." level=info timestamp=2018-07-24T14:17:34.462292Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind= uid=4138c4e7-8f4c-11e8-845a-525500d15501 msg="Processing shutdown." level=info timestamp=2018-07-24T14:17:34.462732Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind= uid=4138c4e7-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:17:34.471977Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-24T14:17:34.473100Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-24T14:17:34.506545Z pos=vm.go:342 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind= uid=4138c4e7-8f4c-11e8-845a-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp." level=info timestamp=2018-07-24T14:17:34.506878Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind= uid=4138c4e7-8f4c-11e8-845a-525500d15501 msg="Processing shutdown." level=info timestamp=2018-07-24T14:17:34.564675Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind= uid=4138c4e7-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:17:34.565668Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind= uid=4138c4e7-8f4c-11e8-845a-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-24T14:17:34.565946Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind= uid=4138c4e7-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." 
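When virt-handler reports "command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')" and re-enqueues the VMI, the other half of the picture is in the matching virt-launcher pod (its log for testvmigv7sz appears further down in this dump). A typical way to pull both sides, using names from this run and assuming the handler runs in kube-system as the earlier pod listing shows, would be:

cluster/kubectl.sh logs -n kube-system virt-handler-6bd9z | grep testvmigv7sz
cluster/kubectl.sh logs -n kubevirt-test-default virt-launcher-testvmigv7sz-l6p4s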
level=info timestamp=2018-07-24T14:17:34.691823Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind= uid=4138c4e7-8f4c-11e8-845a-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-24T14:17:34.692178Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind= uid=4138c4e7-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:17:34.751346Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-24T14:17:34.751629Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-launcher-testvmigv7sz-l6p4s Pod phase: Running level=info timestamp=2018-07-24T14:17:38.381955Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets" level=info timestamp=2018-07-24T14:17:38.382285Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]" level=info timestamp=2018-07-24T14:17:38.384265Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system" level=info timestamp=2018-07-24T14:17:48.401021Z pos=libvirt.go:271 component=virt-launcher msg="Connected to libvirt daemon" level=info timestamp=2018-07-24T14:17:48.481457Z pos=virt-launcher.go:143 component=virt-launcher msg="Watchdog file created at /var/run/kubevirt/watchdog-files/kubevirt-test-default_testvmigv7sz" level=info timestamp=2018-07-24T14:17:48.484822Z pos=client.go:152 component=virt-launcher msg="Registered libvirt event notify callback" level=info timestamp=2018-07-24T14:17:48.485592Z pos=virt-launcher.go:60 component=virt-launcher msg="Marked as ready" level=error timestamp=2018-07-24T14:17:50.573490Z pos=manager.go:159 component=virt-launcher namespace=kubevirt-test-default name=testvmigv7sz kind= uid=4f84e675-8f4c-11e8-845a-525500d15501 reason="virError(Code=0, Domain=0, Message='Missing error')" msg="Getting the domain failed." level=error timestamp=2018-07-24T14:17:50.573717Z pos=server.go:68 component=virt-launcher namespace=kubevirt-test-default name=testvmigv7sz kind= uid=4f84e675-8f4c-11e8-845a-525500d15501 reason="virError(Code=0, Domain=0, Message='Missing error')" msg="Failed to sync vmi" level=error timestamp=2018-07-24T14:17:50.683920Z pos=common.go:126 component=virt-launcher msg="updated MAC for interface: eth0 - 0a:58:0a:95:62:91" level=info timestamp=2018-07-24T14:17:50.686337Z pos=converter.go:739 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n" level=info timestamp=2018-07-24T14:17:50.686418Z pos=converter.go:740 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local" level=info timestamp=2018-07-24T14:17:50.689854Z pos=dhcp.go:62 component=virt-launcher msg="Starting SingleClientDHCPServer" level=info timestamp=2018-07-24T14:17:50.875558Z pos=manager.go:157 component=virt-launcher namespace=kubevirt-test-default name=testvmigv7sz kind= uid=4f84e675-8f4c-11e8-845a-525500d15501 msg="Domain defined." 
level=info timestamp=2018-07-24T14:17:50.876537Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" • Failure [104.820 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Killed VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:945 should be in Failed phase [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:946 Unexpected Warning event received. Expected : Warning not to equal : Warning /root/go/src/kubevirt.io/kubevirt/tests/utils.go:245 ------------------------------ STEP: Starting a VirtualMachineInstance level=info timestamp=2018-07-24T14:17:34.992732Z pos=utils.go:243 component=tests msg="Created virtual machine pod virt-launcher-testvmigv7sz-l6p4s" level=info timestamp=2018-07-24T14:17:51.694034Z pos=utils.go:243 component=tests msg="Pod owner ship transferred to the node virt-launcher-testvmigv7sz-l6p4s" level=error timestamp=2018-07-24T14:17:51.782958Z pos=utils.go:241 component=tests reason="unexpected warning event received" msg="server error. command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')" STEP: Killing the VirtualMachineInstance level=info timestamp=2018-07-24T14:19:15.023947Z pos=utils.go:254 component=tests msg="Created virtual machine pod virt-launcher-testvmigv7sz-l6p4s" level=error timestamp=2018-07-24T14:19:15.024220Z pos=utils.go:252 component=tests reason="unexpected warning event received" msg="server error. command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')" level=info timestamp=2018-07-24T14:19:15.024512Z pos=utils.go:254 component=tests msg="Pod owner ship transferred to the node virt-launcher-testvmigv7sz-l6p4s" level=info timestamp=2018-07-24T14:19:15.024905Z pos=utils.go:254 component=tests msg="VirtualMachineInstance defined." level=info timestamp=2018-07-24T14:19:15.025129Z pos=utils.go:254 component=tests msg="VirtualMachineInstance started." level=error timestamp=2018-07-24T14:19:19.204297Z pos=utils.go:252 component=tests reason="unexpected warning event received" msg="The VirtualMachineInstance crashed." 
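The failure above is the test harness reacting to a Warning event on the VMI ("Unexpected Warning event received ... Expected : Warning not to equal : Warning"), not to the final phase check itself. The raw events the harness is watching can be listed with something like (namespace and VMI name taken from this run):

cluster/kubectl.sh get events -n kubevirt-test-default | grep testvmigv7sz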
STEP: Checking that the VirtualMachineInstance has 'Failed' phase • [SLOW TEST:25.935 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Killed VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:945 should be left alone by virt-handler /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:973 ------------------------------ • [SLOW TEST:96.059 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 should be able to reach /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 the Inbound VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ ••• ------------------------------ • [SLOW TEST:5.684 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on the same node from Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •••• ------------------------------ • [SLOW TEST:5.692 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 with a service matching the vmi exposed /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:283 should fail to reach the vmi if an invalid servicename is used /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:314 ------------------------------ • ------------------------------ • [SLOW TEST:39.591 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom interface model /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:379 should expose the right device type to the guest /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:380 ------------------------------ • Pod name: disks-images-provider-6dx2m Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-plvsg Pod phase: Running copy all images to host mount directory Pod name: virt-api-bcc6b587d-k7m6n Pod phase: Running 2018/07/24 14:23:28 http: TLS handshake error from 10.244.0.1:37070: EOF 2018/07/24 14:23:38 http: TLS handshake error from 10.244.0.1:37130: EOF 2018/07/24 14:23:48 http: TLS handshake error from 10.244.0.1:37190: EOF 2018/07/24 14:23:58 http: TLS handshake error from 10.244.0.1:37250: EOF 2018/07/24 14:24:08 http: TLS handshake error from 10.244.0.1:37310: EOF 2018/07/24 14:24:18 http: TLS handshake error from 10.244.0.1:37370: EOF 2018/07/24 14:24:28 http: TLS handshake error from 10.244.0.1:37430: EOF 2018/07/24 14:24:38 http: TLS handshake error from 10.244.0.1:37490: EOF 2018/07/24 14:24:48 http: TLS handshake error from 10.244.0.1:37550: EOF 2018/07/24 14:24:58 http: TLS handshake error from 10.244.0.1:37610: EOF 2018/07/24 14:25:08 http: TLS handshake error from 10.244.0.1:37670: EOF 2018/07/24 14:25:18 http: TLS handshake error from 10.244.0.1:37730: EOF 2018/07/24 14:25:28 http: TLS handshake error from 10.244.0.1:37790: EOF 2018/07/24 14:25:38 http: TLS handshake error from 10.244.0.1:37850: EOF 2018/07/24 14:25:48 http: TLS handshake error from 10.244.0.1:37910: EOF Pod name: virt-api-bcc6b587d-rjspx Pod phase: Running 2018/07/24 14:23:37 http: TLS handshake error from 10.244.1.1:33132: EOF 2018/07/24 14:23:47 http: TLS 
handshake error from 10.244.1.1:33136: EOF 2018/07/24 14:23:57 http: TLS handshake error from 10.244.1.1:33142: EOF 2018/07/24 14:24:07 http: TLS handshake error from 10.244.1.1:33148: EOF 2018/07/24 14:24:17 http: TLS handshake error from 10.244.1.1:33154: EOF 2018/07/24 14:24:27 http: TLS handshake error from 10.244.1.1:33160: EOF 2018/07/24 14:24:37 http: TLS handshake error from 10.244.1.1:33166: EOF 2018/07/24 14:24:47 http: TLS handshake error from 10.244.1.1:33172: EOF 2018/07/24 14:24:57 http: TLS handshake error from 10.244.1.1:33178: EOF 2018/07/24 14:25:07 http: TLS handshake error from 10.244.1.1:33184: EOF 2018/07/24 14:25:17 http: TLS handshake error from 10.244.1.1:33190: EOF 2018/07/24 14:25:27 http: TLS handshake error from 10.244.1.1:33196: EOF 2018/07/24 14:25:37 http: TLS handshake error from 10.244.1.1:33202: EOF 2018/07/24 14:25:47 http: TLS handshake error from 10.244.1.1:33208: EOF Pod name: virt-controller-67dcdd8464-hcfft Pod phase: Running level=info timestamp=2018-07-24T14:19:45.720707Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmipkxl5 kind= uid=9d755503-8f4c-11e8-845a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-24T14:19:45.725535Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmipkxl5 kind= uid=9d755503-8f4c-11e8-845a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-24T14:19:45.771557Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi2wvgj kind= uid=9d7ce78a-8f4c-11e8-845a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-24T14:19:45.772054Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi2wvgj kind= uid=9d7ce78a-8f4c-11e8-845a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-24T14:19:45.804522Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmipg7dl kind= uid=9d840159-8f4c-11e8-845a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-24T14:19:45.804725Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmipg7dl kind= uid=9d840159-8f4c-11e8-845a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-24T14:19:45.843157Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvminbd2z kind= uid=9d88a6b2-8f4c-11e8-845a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-24T14:19:45.843500Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvminbd2z kind= uid=9d88a6b2-8f4c-11e8-845a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-24T14:19:46.027913Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmipg7dl\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmipg7dl" level=info timestamp=2018-07-24T14:19:46.136684Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminbd2z\": the 
object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminbd2z" level=info timestamp=2018-07-24T14:19:46.740800Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminbd2z\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminbd2z" level=info timestamp=2018-07-24T14:21:58.497680Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi84jb2 kind= uid=ec94701e-8f4c-11e8-845a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-24T14:21:58.502149Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi84jb2 kind= uid=ec94701e-8f4c-11e8-845a-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-24T14:22:39.785087Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmicj8rc kind= uid=0534ad96-8f4d-11e8-845a-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-24T14:22:39.785811Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmicj8rc kind= uid=0534ad96-8f4d-11e8-845a-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-67dcdd8464-r5jpw Pod phase: Running level=info timestamp=2018-07-24T14:23:32.847711Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-6bd9z Pod phase: Running level=info timestamp=2018-07-24T14:23:32.845637Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmipkxl5 kind=VirtualMachineInstance uid=9d755503-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:23:32.846307Z pos=vm.go:392 component=virt-handler namespace=kubevirt-test-default name=testvmicj8rc kind=VirtualMachineInstance uid=0534ad96-8f4d-11e8-845a-525500d15501 msg="No update processing required" level=info timestamp=2018-07-24T14:23:33.091835Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi84jb2 kind=VirtualMachineInstance uid=ec94701e-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:23:33.092043Z pos=vm.go:392 component=virt-handler namespace=kubevirt-test-default name=testvmi2wvgj kind= uid=9d7ce78a-8f4c-11e8-845a-525500d15501 msg="No update processing required" level=info timestamp=2018-07-24T14:23:33.092101Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi2wvgj kind= uid=9d7ce78a-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:23:33.092274Z pos=vm.go:392 component=virt-handler namespace=kubevirt-test-default name=testvmipkxl5 kind= uid=9d755503-8f4c-11e8-845a-525500d15501 msg="No update processing required" level=info timestamp=2018-07-24T14:23:33.092354Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmipkxl5 kind= uid=9d755503-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." 
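The "Initializing VirtualMachineInstance" / "Marking VirtualMachineInstance as initialized" pairs above show virt-controller running each new test VMI through the preset initializer (pos=preset.go) before the handlers pick it up. The state of those objects can also be checked outside the test binary via the vmi short name (assuming the short name is registered, as it is in stock KubeVirt installs):

cluster/kubectl.sh get vmi -n kubevirt-test-default
cluster/kubectl.sh get vmi -n kubevirt-test-default testvmicj8rc -o yaml | grep phase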
level=info timestamp=2018-07-24T14:23:33.092703Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminbd2z kind=VirtualMachineInstance uid=9d88a6b2-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:23:33.093988Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmicj8rc kind=VirtualMachineInstance uid=0534ad96-8f4d-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:23:33.095316Z pos=vm.go:392 component=virt-handler namespace=kubevirt-test-default name=testvminbd2z kind= uid=9d88a6b2-8f4c-11e8-845a-525500d15501 msg="No update processing required" level=info timestamp=2018-07-24T14:23:33.095405Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminbd2z kind= uid=9d88a6b2-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:23:33.095496Z pos=vm.go:392 component=virt-handler namespace=kubevirt-test-default name=testvmi84jb2 kind= uid=ec94701e-8f4c-11e8-845a-525500d15501 msg="No update processing required" level=info timestamp=2018-07-24T14:23:33.095535Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi84jb2 kind= uid=ec94701e-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:23:33.095591Z pos=vm.go:392 component=virt-handler namespace=kubevirt-test-default name=testvmicj8rc kind= uid=0534ad96-8f4d-11e8-845a-525500d15501 msg="No update processing required" level=info timestamp=2018-07-24T14:23:33.095628Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmicj8rc kind= uid=0534ad96-8f4d-11e8-845a-525500d15501 msg="Synchronization loop succeeded." Pod name: virt-handler-m2fl9 Pod phase: Running level=info timestamp=2018-07-24T14:17:34.691823Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind= uid=4138c4e7-8f4c-11e8-845a-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-24T14:17:34.692178Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind= uid=4138c4e7-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-24T14:17:34.751346Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-24T14:17:34.751629Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmir9nwh kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
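The recurring "http: TLS handshake error from 10.244.x.x:...: EOF" lines in the virt-api sections of this dump indicate clients that opened a TCP connection to the API port and closed it before completing a TLS handshake; plain-TCP health checks are a common source of such noise, though that attribution is an assumption for this run. One way to see which probes are configured against virt-api (pod name from this run):

cluster/kubectl.sh describe pod -n kube-system virt-api-bcc6b587d-k7m6n | grep -i -A2 'Liveness\|Readiness'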
level=info timestamp=2018-07-24T14:20:02.071843Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmipg7dl kind= uid=9d840159-8f4c-11e8-845a-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-24T14:20:02.841459Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type ADDED"
level=info timestamp=2018-07-24T14:20:02.841820Z pos=vm.go:657 component=virt-handler namespace=kubevirt-test-default name=testvmipg7dl kind=Domain uid=9d840159-8f4c-11e8-845a-525500d15501 msg="Domain is in state Paused reason StartingUp"
level=info timestamp=2018-07-24T14:20:03.413626Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-24T14:20:03.415863Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvmipg7dl kind=Domain uid=9d840159-8f4c-11e8-845a-525500d15501 msg="Domain is in state Running reason Unknown"
level=info timestamp=2018-07-24T14:20:03.492124Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED"
level=info timestamp=2018-07-24T14:20:03.550967Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmipg7dl kind= uid=9d840159-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-24T14:20:03.551788Z pos=vm.go:392 component=virt-handler namespace=kubevirt-test-default name=testvmipg7dl kind= uid=9d840159-8f4c-11e8-845a-525500d15501 msg="No update processing required"
level=info timestamp=2018-07-24T14:20:03.572847Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmipg7dl kind= uid=9d840159-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-24T14:20:03.576415Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmipg7dl kind= uid=9d840159-8f4c-11e8-845a-525500d15501 msg="Processing vmi update"
level=info timestamp=2018-07-24T14:20:03.590620Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmipg7dl kind= uid=9d840159-8f4c-11e8-845a-525500d15501 msg="Synchronization loop succeeded."
Pod name: netcat4k4jw
Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.85 1500 -i 1 -w 1
Hello World!
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
succeeded
+ echo succeeded
+ exit 0
Pod name: netcat5ww8v
Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.85 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Pod name: netcat7qqfz
Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.85 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Pod name: netcat8vl4j
Pod phase: Failed
++ head -n 1
+++ nc wrongservice.kubevirt-test-default 1500 -i 1 -w 1
Ncat: Could not resolve hostname "wrongservice.kubevirt-test-default": Name or service not known. QUITTING.
failed
+ x=
+ echo ''
+ '[' '' = 'Hello World!' ']'
+ echo failed
+ exit 1
Pod name: netcatkj7j4
Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.85 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Pod name: netcatl6jcc
Pod phase: Succeeded
++ head -n 1
+++ nc my-subdomain.myvmi.kubevirt-test-default 1500 -i 1 -w 1
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Hello World!
succeeded
Pod name: netcatzxfvm
Pod phase: Succeeded
++ head -n 1
+++ nc myservice.kubevirt-test-default 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Pod name: virt-launcher-testvmi2wvgj-vgjdj
Pod phase: Failed
level=info timestamp=2018-07-24T14:20:02.721041Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-24T14:20:02.732460Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-24T14:20:02.755712Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID a6bdcf57-6d9b-431a-b39e-7dd643522da1"
level=info timestamp=2018-07-24T14:20:02.756780Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-24T14:20:03.263962Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-24T14:20:03.370127Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-24T14:20:03.370864Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmi2wvgj kind= uid=9d7ce78a-8f4c-11e8-845a-525500d15501 msg="Domain started."
level=info timestamp=2018-07-24T14:20:03.374924Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-24T14:20:03.375059Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-24T14:20:03.380247Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi2wvgj kind= uid=9d7ce78a-8f4c-11e8-845a-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-24T14:20:03.423057Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-24T14:20:03.427992Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-24T14:20:03.429442Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi2wvgj kind= uid=9d7ce78a-8f4c-11e8-845a-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-24T14:20:03.439859Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi2wvgj kind= uid=9d7ce78a-8f4c-11e8-845a-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-24T14:20:03.766931Z pos=monitor.go:222 component=virt-launcher msg="Found PID for a6bdcf57-6d9b-431a-b39e-7dd643522da1: 191"
Pod name: virt-launcher-testvmi84jb2-s79kb
Pod phase: Failed
level=info timestamp=2018-07-24T14:22:14.160912Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-24T14:22:14.839154Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-24T14:22:14.846939Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-24T14:22:15.048277Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID 3f282beb-1cb5-45fa-a952-1e6ea22b761c"
level=info timestamp=2018-07-24T14:22:15.049322Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-24T14:22:15.583091Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-24T14:22:15.615716Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-24T14:22:15.622824Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-24T14:22:15.634921Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-24T14:22:15.656251Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmi84jb2 kind= uid=ec94701e-8f4c-11e8-845a-525500d15501 msg="Domain started."
level=info timestamp=2018-07-24T14:22:15.664379Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi84jb2 kind= uid=ec94701e-8f4c-11e8-845a-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-24T14:22:15.667852Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-24T14:22:15.675668Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-24T14:22:15.762892Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi84jb2 kind= uid=ec94701e-8f4c-11e8-845a-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-24T14:22:16.054329Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 3f282beb-1cb5-45fa-a952-1e6ea22b761c: 187"
Pod name: virt-launcher-testvmicj8rc-9k48d
Pod phase: Running
level=info timestamp=2018-07-24T14:24:27.524472Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets"
level=info timestamp=2018-07-24T14:24:27.524628Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]"
level=info timestamp=2018-07-24T14:24:27.525884Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system"
level=info timestamp=2018-07-24T14:24:37.533471Z pos=libvirt.go:271 component=virt-launcher msg="Connected to libvirt daemon"
level=info timestamp=2018-07-24T14:24:37.599492Z pos=virt-launcher.go:143 component=virt-launcher msg="Watchdog file created at /var/run/kubevirt/watchdog-files/kubevirt-test-default_testvmicj8rc"
level=info timestamp=2018-07-24T14:24:37.602080Z pos=client.go:152 component=virt-launcher msg="Registered libvirt event notify callback"
level=info timestamp=2018-07-24T14:24:37.602458Z pos=virt-launcher.go:60 component=virt-launcher msg="Marked as ready"
Pod name: virt-launcher-testvminbd2z-nrmxk
Pod phase: Failed
level=info timestamp=2018-07-24T14:20:02.076393Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-24T14:20:02.946099Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-24T14:20:02.974148Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-24T14:20:03.014786Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID 39751d29-7a2c-474f-805a-54b704e3c963"
level=info timestamp=2018-07-24T14:20:03.015172Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-24T14:20:03.556809Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-24T14:20:03.633972Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-24T14:20:03.636076Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-24T14:20:03.636254Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-24T14:20:03.665944Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-24T14:20:03.671507Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvminbd2z kind= uid=9d88a6b2-8f4c-11e8-845a-525500d15501 msg="Domain started."
level=info timestamp=2018-07-24T14:20:03.693668Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-24T14:20:03.698909Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvminbd2z kind= uid=9d88a6b2-8f4c-11e8-845a-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-24T14:20:03.737433Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvminbd2z kind= uid=9d88a6b2-8f4c-11e8-845a-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-24T14:20:04.023114Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 39751d29-7a2c-474f-805a-54b704e3c963: 191"
Pod name: virt-launcher-testvmipg7dl-c2rnx
Pod phase: Running
level=info timestamp=2018-07-24T14:20:02.279985Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-24T14:20:02.823068Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-24T14:20:02.843851Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-24T14:20:02.880965Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID 00fd379e-7cbd-42d8-aa2a-3cb152a434b3"
level=info timestamp=2018-07-24T14:20:02.881443Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-24T14:20:03.380121Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-24T14:20:03.408553Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-24T14:20:03.414997Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-24T14:20:03.427484Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-24T14:20:03.450530Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmipg7dl kind= uid=9d840159-8f4c-11e8-845a-525500d15501 msg="Domain started."
level=info timestamp=2018-07-24T14:20:03.456043Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmipg7dl kind= uid=9d840159-8f4c-11e8-845a-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-24T14:20:03.458383Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-24T14:20:03.548651Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-24T14:20:03.587955Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmipg7dl kind= uid=9d840159-8f4c-11e8-845a-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-24T14:20:03.886932Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 00fd379e-7cbd-42d8-aa2a-3cb152a434b3: 190"
Pod name: virt-launcher-testvmipkxl5-s4dtg
Pod phase: Failed
level=info timestamp=2018-07-24T14:20:00.008350Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-24T14:20:00.560479Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-24T14:20:00.564511Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID 742ca460-d5d9-435c-aeaa-2cc034eb127b"
level=info timestamp=2018-07-24T14:20:00.564742Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-24T14:20:00.569822Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-24T14:20:01.172766Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-24T14:20:01.192126Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-24T14:20:01.194430Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-24T14:20:01.232522Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-24T14:20:01.257726Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmipkxl5 kind= uid=9d755503-8f4c-11e8-845a-525500d15501 msg="Domain started."
level=info timestamp=2018-07-24T14:20:01.262953Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmipkxl5 kind= uid=9d755503-8f4c-11e8-845a-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-24T14:20:01.264974Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-24T14:20:01.267259Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-24T14:20:01.360854Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmipkxl5 kind= uid=9d755503-8f4c-11e8-845a-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-24T14:20:01.624075Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 742ca460-d5d9-435c-aeaa-2cc034eb127b: 185"
------------------------------
• Failure [200.789 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom MAC address
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:413
    should configure custom MAC address [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:414

    Expected error:
        : 180000000000 expect: timer expired after 180 seconds
    not to have occurred
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:94
------------------------------
STEP: checking eth0 MAC address
level=info timestamp=2018-07-24T14:22:39.842624Z pos=utils.go:243 component=tests msg="Created virtual machine pod virt-launcher-testvmicj8rc-9k48d"
level=info timestamp=2018-07-24T14:22:56.340196Z pos=utils.go:243 component=tests msg="Pod owner ship transferred to the node virt-launcher-testvmicj8rc-9k48d"
level=info timestamp=2018-07-24T14:22:57.372800Z pos=utils.go:243 component=tests msg="VirtualMachineInstance defined."
level=info timestamp=2018-07-24T14:22:57.405349Z pos=utils.go:243 component=tests msg="VirtualMachineInstance started."
level=info timestamp=2018-07-24T14:25:57.642413Z pos=utils.go:1249 component=tests namespace=kubevirt-test-default name=testvmicj8rc kind=VirtualMachineInstance uid=0534ad96-8f4d-11e8-845a-525500d15501 msg="Login: [{2 \r\n\r\n\r\nISOLINUX 6.04 6.04-pre1 Copyright (C) 1994-2015 H. Peter Anvin et al\r\nboot: \u001b[?7h\r\n []}]"

panic: test timed out after 1h30m0s

goroutine 3080 [running]:
testing.(*M).startAlarm.func1()
	/gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1240 +0xfc
created by time.goFunc
	/gimme/.gimme/versions/go1.10.linux.amd64/src/time/sleep.go:172 +0x44

goroutine 1 [chan receive, 90 minutes]:
testing.(*T).Run(0xc4206330e0, 0x13857f2, 0x9, 0x14171b0, 0x4801e6)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:825 +0x301
testing.runTests.func1(0xc420632ff0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1063 +0x64
testing.tRunner(0xc420632ff0, 0xc4205efdf8)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:777 +0xd0
testing.runTests(0xc4208db1a0, 0x1d0ca20, 0x1, 0x1, 0x412009)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1061 +0x2c4
testing.(*M).Run(0xc4209b8e80, 0x0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:978 +0x171
main.main()
	_testmain.go:44 +0x151

goroutine 5 [chan receive, 2 minutes]:
kubevirt.io/kubevirt/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x1d381c0)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/golang/glog/glog.go:879 +0x8b
created by kubevirt.io/kubevirt/vendor/github.com/golang/glog.init.0
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/golang/glog/glog.go:410 +0x203

goroutine 6 [syscall, 90 minutes]:
os/signal.signal_recv(0x0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/sigqueue.go:139 +0xa6
os/signal.loop()
	/gimme/.gimme/versions/go1.10.linux.amd64/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.0
	/gimme/.gimme/versions/go1.10.linux.amd64/src/os/signal/signal_unix.go:28 +0x41

goroutine 11 [chan send, 66 minutes]:
kubevirt.io/kubevirt/vendor/github.com/google/goexpect.(*GExpect).Send(0xc4206322d0, 0x137edba, 0x1, 0x0, 0x0)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:1070 +0x8f
kubevirt.io/kubevirt/vendor/github.com/google/goexpect.(*GExpect).ExpectBatch(0xc4206322d0, 0xc4208274a0, 0x5, 0x5, 0x29e8d60800, 0x0, 0x0, 0x14b8640, 0xc4206322d0, 0xc420a52c00)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:574 +0x763
kubevirt.io/kubevirt/tests.LoggedInAlpineExpecter(0xc420b4d900, 0x14a9e40, 0x1d56860, 0x0, 0x0)
	/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1247 +0x31d
kubevirt.io/kubevirt/tests_test.glob..func18.3(0xc420b4d900, 0x1417180, 0x1d56860)
	/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:93 +0x183
kubevirt.io/kubevirt/tests_test.glob..func18.15.1()
	/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:433 +0x1e0
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc42071cc00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:109 +0x9c
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc42071cc00, 0x3, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:63 +0x13e
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc420370fc0, 0x149c040, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:25 +0x7f
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc4201531e0, 0x0, 0x149c040, 0xc4201e50b0)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:176 +0x5a6
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc4201531e0, 0x149c040, 0xc4201e50b0)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:127 +0xe3
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc420351040, 0xc4201531e0, 0x0)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:198 +0x10d
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc420351040, 0x1417e01)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:168 +0x32c
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc420351040, 0xb)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:64 +0xdc
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc4200ea9b0, 0x7f6d961c37c8, 0xc4206330e0, 0x1387dd5, 0xb, 0xc4208db1e0, 0x2, 0x2, 0x14b8680, 0xc4201e50b0, ...)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x27c
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x149d0a0, 0xc4206330e0, 0x1387dd5, 0xb, 0xc4208db1c0, 0x2, 0x2, 0x2)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:218 +0x258
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x149d0a0, 0xc4206330e0, 0x1387dd5, 0xb, 0xc420440300, 0x1, 0x1, 0x1)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:206 +0xab
kubevirt.io/kubevirt/tests_test.TestTests(0xc4206330e0)
	/root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:43 +0xaa
testing.tRunner(0xc4206330e0, 0x14171b0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
	/gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:824 +0x2e0

goroutine 12 [chan receive, 90 minutes]:
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).registerForInterrupts(0xc420351040)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:220 +0xc0
created by kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:59 +0x60

goroutine 50 [select, 90 minutes, locked to thread]:
runtime.gopark(0x1419320, 0x0, 0x138234f, 0x6, 0x18, 0x1)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/proc.go:291 +0x11a
runtime.selectgo(0xc4207e9750, 0xc420364060)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/select.go:392 +0xe50
runtime.ensureSigM.func1()
	/gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/signal_unix.go:549 +0x1f4
runtime.goexit()
	/gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/asm_amd64.s:2361 +0x1

goroutine 66 [IO wait, 66 minutes]:
internal/poll.runtime_pollWait(0x7f6d961a9f00, 0x72, 0xc420b87850)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc4208caa98, 0x72, 0xffffffffffffff00, 0x149e260, 0x1c237d0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_poll_runtime.go:85 +0x9b
internal/poll.(*pollDesc).waitRead(0xc4208caa98, 0xc420bb8000, 0x8000, 0x8000)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Read(0xc4208caa80, 0xc420bb8000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_unix.go:157 +0x17d
net.(*netFD).Read(0xc4208caa80, 0xc420bb8000, 0x8000, 0x8000, 0xc420b87aa8, 0x42694c, 0xc420b87a90)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc4209b65c0, 0xc420bb8000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/net/net.go:176 +0x6a
crypto/tls.(*block).readFromUntil(0xc42060d6e0, 0x7f6d961c3898, 0xc4209b65c0, 0x5, 0xc4209b65c0, 0x0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:493 +0x96
crypto/tls.(*Conn).readRecord(0xc420132700, 0x1419417, 0xc420132820, 0x20)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:595 +0xe0
crypto/tls.(*Conn).Read(0xc420132700, 0xc4205c1000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:1156 +0x100
bufio.(*Reader).Read(0xc42041cde0, 0xc42095c9d8, 0x9, 0x9, 0xc420b7cf58, 0xc4209969a0, 0xc420b87d10)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/bufio/bufio.go:216 +0x238
io.ReadAtLeast(0x149ae60, 0xc42041cde0, 0xc42095c9d8, 0x9, 0x9, 0x9, 0xc420b87ce0, 0xc420b87ce0, 0x406614)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:309 +0x86
io.ReadFull(0x149ae60, 0xc42041cde0, 0xc42095c9d8, 0x9, 0x9, 0xc420b7cf00, 0xc420b87d10, 0xc400001001)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:327 +0x58
kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.readFrameHeader(0xc42095c9d8, 0x9, 0x9, 0x149ae60, 0xc42041cde0, 0x0, 0xc400000000, 0x7ef8ed, 0xc420b87fb0)
	/root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/frame.go:237 +0x7b
kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc42095c9a0, 0xc420900000, 0x0, 0x0, 0x0)
	/root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/frame.go:492 +0xa4
kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc420b87fb0, 0x14180c8, 0xc4207ee7b0)
	/root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:1428 +0x8e
kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc4200f3d40)
	/root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:1354 +0x76
created by kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*Transport).newClientConn
	/root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:579 +0x651

goroutine 3071 [select, 66 minutes]:
io.(*pipe).Read(0xc420827450, 0xc420b1a000, 0x2000, 0x2000, 0x118b6e0, 0xc420469501, 0xc420b1a000)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/io/pipe.go:50 +0x115
io.(*PipeReader).Read(0xc420dd8ee0, 0xc420b1a000, 0x2000, 0x2000, 0x2000, 0x2000, 0xc4200a8040)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/io/pipe.go:127 +0x4c
kubevirt.io/kubevirt/vendor/github.com/google/goexpect.(*GExpect).waitForSession.func2(0x149b240, 0xc420dd8ee0)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:1023 +0xdb
created by kubevirt.io/kubevirt/vendor/github.com/google/goexpect.(*GExpect).waitForSession
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:1042 +0x154

goroutine 581 [chan send, 84 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc4208880f0)
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

goroutine 1518 [chan send, 81 minutes]:
kubevirt.io/kubevirt/tests_test.glob..func23.1.2.1.1(0x14d6200, 0xc4207f6240, 0xc4209b6010, 0xc420ec3140, 0xc4208e68b0, 0xc4208e68c0)
	/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:81 +0x138
created by kubevirt.io/kubevirt/tests_test.glob..func23.1.2.1
	/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:73 +0x386

goroutine 2606 [chan send, 73 minutes]:
kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc42055e0c0)
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:114 +0x114
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

goroutine 3070 [select, 66 minutes]:
io.(*pipe).Write(0xc420827400, 0xc420b0d148, 0x1, 0x8, 0x0, 0x0, 0x0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/io/pipe.go:87 +0x1e3
io.(*PipeWriter).Write(0xc420dd8ed8, 0xc420b0d148, 0x1, 0x8, 0x1, 0x8, 0x4060c0)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/io/pipe.go:153 +0x4c
kubevirt.io/kubevirt/vendor/github.com/google/goexpect.(*GExpect).waitForSession.func1(0xc4203e9240, 0xc4208e9020, 0xc4206322d0, 0x149fb20, 0xc420dd8ed8)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:1012 +0x187
created by kubevirt.io/kubevirt/vendor/github.com/google/goexpect.(*GExpect).waitForSession
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:1001 +0xc3

goroutine 3069 [semacquire, 66 minutes]:
sync.runtime_Semacquire(0xc4203e924c)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/sema.go:56 +0x39
sync.(*WaitGroup).Wait(0xc4203e9240)
	/gimme/.gimme/versions/go1.10.linux.amd64/src/sync/waitgroup.go:129 +0x72
kubevirt.io/kubevirt/vendor/github.com/google/goexpect.(*GExpect).waitForSession(0xc4206322d0, 0xc420a52c00, 0xc4206af500, 0x149fb20, 0xc420dd8ed8, 0x149b240, 0xc420dd8ee0, 0x0, 0x0)
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:1049 +0x19d
created by kubevirt.io/kubevirt/vendor/github.com/google/goexpect.SpawnGeneric
	/root/go/src/kubevirt.io/kubevirt/vendor/github.com/google/goexpect/expect.go:808 +0x299
make: *** [functest] Error 2
+ make cluster-down
./cluster/down.sh