+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release
+ [[ k8s-1.10.4-release =~ openshift-.* ]]
+ [[ k8s-1.10.4-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.4
+ KUBEVIRT_PROVIDER=k8s-1.10.4
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/27 10:29:18 Waiting for host: 192.168.66.101:22
2018/07/27 10:29:21 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/27 10:29:33 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 26.503500 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:2bd5e8dc98e0ccac0265f14a437c8f1ce2f62b0546416bb974c137aa3e23d9ad

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/27 10:30:18 Waiting for host: 192.168.66.102:22
2018/07/27 10:30:21 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/27 10:30:33 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39611920 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
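# Sketch (not part of the captured run): with both nodes joined, the trace below polls
# `kubectl get nodes` until nothing reports NotReady. On kubectl 1.11 or newer the same gate
# could be expressed as a single command (illustrative; this cluster runs kubectl 1.10.4):
#   kubectl wait node --all --for=condition=Ready --timeout=300s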
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    35s       v1.10.4
node02    NotReady  <none>    9s        v1.10.4
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ grep NotReady
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n 'node02    NotReady  <none>    9s        v1.10.4' ']'
+ echo 'Waiting for all nodes to become ready ...'
Waiting for all nodes to become ready ...
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    36s       v1.10.4
node02    NotReady  <none>    10s       v1.10.4
+ kubectl_rc=0
+ sleep 10
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ grep NotReady
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    46s       v1.10.4
node02    Ready     <none>    20s       v1.10.4
+ make cluster-sync
./cluster/build.sh
Building ...
sha256:559a45ac63f40982ccce3a1b80cb62788566f2032c847ad9c45ee993eb9c48d4
go version go1.10 linux/amd64
Waiting for rsyncd to be ready
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:559a45ac63f40982ccce3a1b80cb62788566f2032c847ad9c45ee993eb9c48d4
go version go1.10 linux/amd64
go version go1.10 linux/amd64
find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory
Compiling tests...
compiled tests.test
hack/build-docker.sh build
Sending build context to Docker daemon 40.37 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> d3c656a2b485
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller
 ---> Using cache
 ---> a776f834c795
Step 4/8 : WORKDIR /home/virt-controller
 ---> Using cache
 ---> 714b6ef15e78
Step 5/8 : USER 1001
 ---> Using cache
 ---> cadd485aa8f4
Step 6/8 : COPY virt-controller /usr/bin/virt-controller
 ---> f9820b75bfac
Removing intermediate container 880d971350c1
Step 7/8 : ENTRYPOINT /usr/bin/virt-controller
 ---> Running in 85f665233728
 ---> debab675302a
Removing intermediate container 85f665233728
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.4-release0" '' "virt-controller" ''
 ---> Running in f33d0d690074
 ---> 0de9482c8e9f
Removing intermediate container f33d0d690074
Successfully built 0de9482c8e9f
Sending build context to Docker daemon 43.3 MB
Step 1/10 : FROM kubevirt/libvirt:4.2.0
 ---> 5f0bfe81a3e0
Step 2/10 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 795ad92a5172
Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107
 ---> Using cache
 ---> 49e8a67155c8
Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher
 ---> 92e322e768a9
Removing intermediate container 5527d6fa3a87
Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt
 ---> 403044063a90
Removing intermediate container 9b6a288f4505
Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64
 ---> Running in d8b9101588a8
 ---> 99c89caf00ec
Removing intermediate container d8b9101588a8
Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher
 ---> Running in 36f49e39a6aa
 ---> a1f077a79a43
Removing intermediate container 36f49e39a6aa
Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector
/usr/share/kubevirt/virt-launcher/ ---> 1909ff63f6b4 Removing intermediate container 7e5c7da5d37c Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh ---> Running in f5e4a3d3761b ---> 76b7fd92e7e3 Removing intermediate container f5e4a3d3761b Step 10/10 : LABEL "kubevirt-functional-tests-k8s-1.10.4-release0" '' "virt-launcher" '' ---> Running in 64cc2085abb4 ---> 501de2e7ab90 Removing intermediate container 64cc2085abb4 Successfully built 501de2e7ab90 Sending build context to Docker daemon 41.67 MB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d3c656a2b485 Step 3/5 : COPY virt-handler /usr/bin/virt-handler ---> b1ece43d6336 Removing intermediate container a3b2e4611ed3 Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in 6f63f74eb687 ---> c59ad65cd836 Removing intermediate container 6f63f74eb687 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.4-release0" '' "virt-handler" '' ---> Running in 77003c119a07 ---> f470236fff05 Removing intermediate container 77003c119a07 Successfully built f470236fff05 Sending build context to Docker daemon 38.81 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d3c656a2b485 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 9bbbc9ec8ccc Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> 6ff95ae380a5 Step 5/8 : USER 1001 ---> Using cache ---> 0026fc44bed8 Step 6/8 : COPY virt-api /usr/bin/virt-api ---> b200fd783ce2 Removing intermediate container ae22593c97e1 Step 7/8 : ENTRYPOINT /usr/bin/virt-api ---> Running in 4a217138955a ---> 3b0d9e3b1d19 Removing intermediate container 4a217138955a Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.4-release0" '' "virt-api" '' ---> Running in 7f0aa86cbdac ---> e9e611514e01 Removing intermediate container 7f0aa86cbdac Successfully built e9e611514e01 Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:28 ---> cc510acfcd70 Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d3c656a2b485 Step 3/7 : ENV container docker ---> Using cache ---> d7ee9dd5410a Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> 0b64ac188f84 Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> c9569040fd52 Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> b0887fd36d1c Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.10.4-release0" '' ---> Using cache ---> 5e827c2df99f Successfully built 5e827c2df99f Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d3c656a2b485 Step 3/5 : ENV container docker ---> Using cache ---> d7ee9dd5410a Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> e96d3e3c109a Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.4-release0" '' "vm-killer" '' ---> Using cache ---> b82659934a82 Successfully built b82659934a82 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> 68f33cf86aab Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> b7f20b0c4c41 Step 3/7 : ENV container docker ---> Using cache ---> 83fc28f38982 Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p 
/disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> 604b0b292d97 Step 5/7 : ADD entry-point.sh / ---> Using cache ---> 78792d6f56cd Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 7f24cc15e083 Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.4-release0" '' "registry-disk-v1alpha" '' ---> Using cache ---> 9f185ea05af0 Successfully built 9f185ea05af0 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32974/kubevirt/registry-disk-v1alpha:devel ---> 9f185ea05af0 Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> 144098c857f6 Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> 470c8c941f3b Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.4-release0" '' ---> Using cache ---> 5c3573aee555 Successfully built 5c3573aee555 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32974/kubevirt/registry-disk-v1alpha:devel ---> 9f185ea05af0 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 163d18ada1f5 Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> df03150f97f7 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.4-release0" '' ---> Using cache ---> d2698a39e323 Successfully built d2698a39e323 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32974/kubevirt/registry-disk-v1alpha:devel ---> 9f185ea05af0 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 163d18ada1f5 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> d98139a655a4 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.4-release0" '' ---> Using cache ---> 9e041444a39a Successfully built 9e041444a39a Sending build context to Docker daemon 35.59 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d3c656a2b485 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> 5704030d2070 Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 624a72b3ef33 Step 5/8 : USER 1001 ---> Using cache ---> 74157fb56326 Step 6/8 : COPY subresource-access-test /subresource-access-test ---> d2ddd6c4a353 Removing intermediate container 0db1c12a705b Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in 112357f962aa ---> a42c98994e79 Removing intermediate container 112357f962aa Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.4-release0" '' "subresource-access-test" '' ---> Running in 6b5d67ffa931 ---> 870fa393d21b Removing intermediate container 6b5d67ffa931 Successfully built 870fa393d21b Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:28 ---> cc510acfcd70 Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d3c656a2b485 Step 3/9 : ENV container docker ---> Using cache ---> d7ee9dd5410a Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> e4ae555b2a96 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> 4805ef8280c3 Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 7c1f17e56984 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> 
Using cache ---> c388427c6a76 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 5da240e34c8d Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.4-release0" '' "winrmcli" '' ---> Using cache ---> fc4af18e41a0 Successfully built fc4af18e41a0 Sending build context to Docker daemon 36.79 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 58c7014d7bc4 Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar ---> 8dc2dc8f410d Removing intermediate container c676d4b1f7c8 Step 4/5 : ENTRYPOINT /example-hook-sidecar ---> Running in 6c920f340b01 ---> d0955f9afbff Removing intermediate container 6c920f340b01 Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-k8s-1.10.4-release0" '' ---> Running in 20b21abe8026 ---> c5cdd0f395db Removing intermediate container 20b21abe8026 Successfully built c5cdd0f395db hack/build-docker.sh push The push refers to a repository [localhost:32974/kubevirt/virt-controller] 84de7e76368c: Preparing efce1557ba86: Preparing 891e1e4ef82a: Preparing 84de7e76368c: Pushed efce1557ba86: Pushed 891e1e4ef82a: Pushed devel: digest: sha256:1baea68f27973c13329cb326aa367f747bb9e0e1d7891a3f7652546074d9c0da size: 949 The push refers to a repository [localhost:32974/kubevirt/virt-launcher] 993c226c41ff: Preparing 9230a896aae4: Preparing ce45d34558de: Preparing e73dd349e4e6: Preparing b5bb2e9ea240: Preparing 779823b58976: Preparing da38cf808aa5: Preparing b83399358a92: Preparing 186d8b3e4fd8: Preparing fa6154170bf5: Preparing 5eefb9960a36: Preparing 891e1e4ef82a: Preparing 779823b58976: Waiting da38cf808aa5: Waiting b83399358a92: Waiting 186d8b3e4fd8: Waiting fa6154170bf5: Waiting 5eefb9960a36: Waiting 891e1e4ef82a: Waiting e73dd349e4e6: Pushed 9230a896aae4: Pushed 993c226c41ff: Pushed b83399358a92: Pushed da38cf808aa5: Pushed ce45d34558de: Pushed 186d8b3e4fd8: Pushed 891e1e4ef82a: Mounted from kubevirt/virt-controller fa6154170bf5: Pushed b5bb2e9ea240: Pushed 779823b58976: Pushed 5eefb9960a36: Pushed devel: digest: sha256:43d2af022680fb0f399eea6b48d2f08f6a5bf8e74ed006d737a79ad86abe3f9f size: 2828 The push refers to a repository [localhost:32974/kubevirt/virt-handler] b17a4aab9006: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-launcher b17a4aab9006: Pushed devel: digest: sha256:5e010c3533ae2f6a6600714bf322e251a259c2a5372b4091c09b42a5ab1fb296 size: 741 The push refers to a repository [localhost:32974/kubevirt/virt-api] 1f2527d8f10d: Preparing 1cd776a5872d: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-handler 1cd776a5872d: Pushed 1f2527d8f10d: Pushed devel: digest: sha256:ec38a08db5db130595abd383ad1bc306d74133daa49287fcf768dc53fd232c71 size: 948 The push refers to a repository [localhost:32974/kubevirt/disks-images-provider] 031ac8f2509a: Preparing df0d85013ae0: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-api 031ac8f2509a: Pushed df0d85013ae0: Pushed devel: digest: sha256:0a781ba0f345d564653bd766261d224da653d3eb7df0f5abeb67f1fcb1226455 size: 948 The push refers to a repository [localhost:32974/kubevirt/vm-killer] c6d1250c13a6: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/disks-images-provider c6d1250c13a6: Pushed devel: digest: sha256:5680325ca88ab683e8ee0ce34f458871a0ea66b9031da653a8255dfdea55ffa2 size: 740 The push refers to a repository [localhost:32974/kubevirt/registry-disk-v1alpha] 3e288742e937: Preparing 
7c38bbdf0880: Preparing 25edbec0eaea: Preparing 3e288742e937: Pushed 7c38bbdf0880: Pushed 25edbec0eaea: Pushed devel: digest: sha256:2c4bce549c7130c9b25183e6b8ff2d59d86b0e679a57b41b0efa5bebf9dee583 size: 948 The push refers to a repository [localhost:32974/kubevirt/cirros-registry-disk-demo] a899196b92d1: Preparing 3e288742e937: Preparing 7c38bbdf0880: Preparing 25edbec0eaea: Preparing 7c38bbdf0880: Mounted from kubevirt/registry-disk-v1alpha 25edbec0eaea: Mounted from kubevirt/registry-disk-v1alpha 3e288742e937: Mounted from kubevirt/registry-disk-v1alpha a899196b92d1: Pushed devel: digest: sha256:5454dcadb097cd68f295984545c12abb43aeeacde79b1e0e8a64a55119f1bf11 size: 1160 The push refers to a repository [localhost:32974/kubevirt/fedora-cloud-registry-disk-demo] aac41f162526: Preparing 3e288742e937: Preparing 7c38bbdf0880: Preparing 25edbec0eaea: Preparing 3e288742e937: Mounted from kubevirt/cirros-registry-disk-demo 25edbec0eaea: Mounted from kubevirt/cirros-registry-disk-demo 7c38bbdf0880: Mounted from kubevirt/cirros-registry-disk-demo aac41f162526: Pushed devel: digest: sha256:b0168476647c9b25e598d6123cd4b3e0b4797127716e28b6f0acd0304d343c3f size: 1161 The push refers to a repository [localhost:32974/kubevirt/alpine-registry-disk-demo] 92fe70a24761: Preparing 3e288742e937: Preparing 7c38bbdf0880: Preparing 25edbec0eaea: Preparing 25edbec0eaea: Mounted from kubevirt/fedora-cloud-registry-disk-demo 7c38bbdf0880: Mounted from kubevirt/fedora-cloud-registry-disk-demo 3e288742e937: Mounted from kubevirt/fedora-cloud-registry-disk-demo 92fe70a24761: Pushed devel: digest: sha256:14e0b91736ca44747541e9799c0909b4ad13e9eed7036941119c6f8cf63ee57e size: 1160 The push refers to a repository [localhost:32974/kubevirt/subresource-access-test] e5e580d4ea9c: Preparing c3b63a8b92e2: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/vm-killer c3b63a8b92e2: Pushed e5e580d4ea9c: Pushed devel: digest: sha256:e0fb3df30631d8102a822591c6c50fafd23d5f713fa016c8367e25b3080803eb size: 948 The push refers to a repository [localhost:32974/kubevirt/winrmcli] 03859482cdc2: Preparing a0f8b95b0bdd: Preparing 2aa87109f2ed: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/subresource-access-test 03859482cdc2: Pushed 2aa87109f2ed: Pushed a0f8b95b0bdd: Pushed devel: digest: sha256:d75c7d87431edda3eeae4a8a02a774789bb14105c171bc6ed0141bb778390775 size: 1165 The push refers to a repository [localhost:32974/kubevirt/example-hook-sidecar] e361c02ae0df: Preparing 39bae602f753: Preparing e361c02ae0df: Pushed 39bae602f753: Pushed devel: digest: sha256:31e2133f7faf56dc6723d42daa0e7d5b9a4d9d31e8d6696f4f2b6b5688588bbf size: 740 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/tests ++ 
APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.4 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.4-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.4-release0 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.4-release0 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-139-gc1e442c ++ KUBEVIRT_VERSION=v0.7.0-139-gc1e442c + source cluster/k8s-1.10.4/provider.sh ++ set -e ++ image=k8s-1.10.4@sha256:09ac918cc16f13a5d0af51d4c98e3e25cbf4f97b7b32fe18ec61b32f04ca1009 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.4 ++ KUBEVIRT_PROVIDER=k8s-1.10.4 ++ source hack/config-default.sh source hack/config-k8s-1.10.4.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.4.sh ++ source hack/config-provider-k8s-1.10.4.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.4/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.4/.kubectl +++ docker_prefix=localhost:32974/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... 
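# Sketch (not part of the captured run): clean.sh removes everything carrying the kubevirt.io
# label before redeploying. The per-namespace deletes traced below boil down to a loop of this
# shape (resource list abbreviated; the real script also checks the offlinevirtualmachines CRD):
#   for ns in default kube-system; do
#     for kind in apiservices deployment rs services secrets pv pvc ds pods clusterroles serviceaccounts; do
#       kubectl -n "$ns" delete "$kind" -l kubevirt.io
#     done
#   done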
+ cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete pods -l kubevirt.io No resources found + 
_kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig ++ cluster/k8s-1.10.4/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + 
KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig ++ cluster/k8s-1.10.4/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/tests ++ 
APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.4 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.4-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.4-release0 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.4-release0 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-139-gc1e442c ++ KUBEVIRT_VERSION=v0.7.0-139-gc1e442c + source cluster/k8s-1.10.4/provider.sh ++ set -e ++ image=k8s-1.10.4@sha256:09ac918cc16f13a5d0af51d4c98e3e25cbf4f97b7b32fe18ec61b32f04ca1009 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.4 ++ KUBEVIRT_PROVIDER=k8s-1.10.4 ++ source hack/config-default.sh source hack/config-k8s-1.10.4.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.4.sh ++ source hack/config-provider-k8s-1.10.4.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.4/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.4/.kubectl +++ docker_prefix=localhost:32974/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... 
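# Sketch (not part of the captured run): the deploy step below creates every non-demo manifest
# from the release output directory, then the testing manifests, roughly:
#   for manifest in "${MANIFESTS_OUT_DIR}"/release/*; do
#     [[ $manifest =~ .*demo.* ]] && continue
#     _kubectl create -f "$manifest"
#   done
#   _kubectl create -f "${MANIFESTS_OUT_DIR}/testing" -R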
+ [[ -z k8s-1.10.4-release ]] + [[ k8s-1.10.4-release =~ .*-dev ]] + [[ k8s-1.10.4-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole.rbac.authorization.k8s.io "kubevirt.io:admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:edit" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:view" created serviceaccount "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver-auth-delegator" created rolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created role.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller-cluster-admin" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-privileged-cluster-admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:default" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt.io:default" created service "virt-api" created deployment.extensions "virt-api" created deployment.extensions "virt-controller" created daemonset.extensions "virt-handler" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstances.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancereplicasets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancepresets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachines.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.4/.kubeconfig + cluster/k8s-1.10.4/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "host-path-disk-alpine" created persistentvolumeclaim "disk-custom" created persistentvolume "host-path-disk-custom" created daemonset.extensions "disks-images-provider" created serviceaccount "kubevirt-testing" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-testing-cluster-admin" created + [[ k8s-1.10.4 =~ os-* ]] + echo Done Done + namespaces=(kube-system default) + [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]] + 
timeout=300 + sample=30 + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'virt-api-7d79764579-d4h62 0/1 ContainerCreating 0 12s virt-api-7d79764579-dpnls 0/1 ContainerCreating 0 12s virt-controller-7d57d96b65-48mp6 0/1 ContainerCreating 0 12s virt-handler-4fdwh 0/1 ContainerCreating 0 11s virt-handler-8nbpf 0/1 ContainerCreating 0 12s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running disks-images-provider-8746n 0/1 ContainerCreating 0 4s disks-images-provider-mbzhm 0/1 ContainerCreating 0 4s virt-api-7d79764579-d4h62 0/1 ContainerCreating 0 13s virt-api-7d79764579-dpnls 0/1 ContainerCreating 0 13s virt-controller-7d57d96b65-48mp6 0/1 ContainerCreating 0 13s virt-handler-4fdwh 0/1 ContainerCreating 0 12s virt-handler-8nbpf 0/1 ContainerCreating 0 13s + sleep 30 + current_time=30 + '[' 30 -gt 300 ']' ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'virt-api-7d79764579-dpnls 0/1 ContainerCreating 0 48s virt-handler-4fdwh 0/1 ContainerCreating 0 47s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running + true + sleep 30 + current_time=60 + '[' 60 -gt 300 ']' ++ kubectl get pods -n kube-system --no-headers ++ grep -v Running ++ cluster/kubectl.sh get pods -n kube-system --no-headers + '[' -n '' ']' + current_time=0 ++ grep false ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n '' ']' + kubectl get pods -n kube-system + cluster/kubectl.sh get pods -n kube-system NAME READY STATUS RESTARTS AGE disks-images-provider-8746n 1/1 Running 0 1m disks-images-provider-mbzhm 1/1 Running 0 1m etcd-node01 1/1 Running 0 10m kube-apiserver-node01 1/1 Running 0 9m kube-controller-manager-node01 1/1 Running 0 9m kube-dns-86f4d74b45-qmwwn 3/3 Running 0 10m kube-flannel-ds-275ct 1/1 Running 0 10m kube-flannel-ds-kmkqb 1/1 Running 0 10m kube-proxy-pqfsw 1/1 Running 0 10m kube-proxy-vkhwl 1/1 Running 0 10m kube-scheduler-node01 1/1 Running 0 10m virt-api-7d79764579-d4h62 1/1 Running 0 1m virt-api-7d79764579-dpnls 1/1 Running 0 1m virt-controller-7d57d96b65-48mp6 1/1 Running 0 1m virt-controller-7d57d96b65-p4lx8 1/1 Running 0 1m virt-handler-4fdwh 1/1 Running 0 1m virt-handler-8nbpf 1/1 Running 0 1m + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n default --no-headers ++ cluster/kubectl.sh get pods -n default --no-headers ++ grep -v Running No resources found. + '[' -n '' ']' + current_time=0 ++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false ++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n '' ']' + kubectl get pods -n default + cluster/kubectl.sh get pods -n default No resources found. 
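# Sketch (not part of the captured run): the readiness gate above repeats this pattern per
# namespace, with timeout=300 and sample=30 as in the trace: keep polling until no pod is
# outside Running, failing once the timeout is exceeded.
#   current_time=0
#   while cluster/kubectl.sh get pods -n kube-system --no-headers | grep -v Running; do
#     sleep 30
#     current_time=$((current_time + 30))
#     [ "$current_time" -gt 300 ] && exit 1
#   done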
+ kubectl version + cluster/kubectl.sh version Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:13:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} + ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/junit.xml' + [[ k8s-1.10.4-release =~ windows.* ]] + FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.4-release/junit.xml' + make functest hack/dockerized "hack/build-func-tests.sh" sha256:559a45ac63f40982ccce3a1b80cb62788566f2032c847ad9c45ee993eb9c48d4 go version go1.10 linux/amd64 Waiting for rsyncd to be ready go version go1.10 linux/amd64 Compiling tests... compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1532688265 Will run 148 of 148 specs • ------------------------------ • [SLOW TEST:6.078 seconds] VMIPreset /root/go/src/kubevirt.io/kubevirt/tests/vmipreset_test.go:42 CRD Validation /root/go/src/kubevirt.io/kubevirt/tests/vmipreset_test.go:90 should reject POST if validation webhoook deems the spec is invalid /root/go/src/kubevirt.io/kubevirt/tests/vmipreset_test.go:103 ------------------------------ ••• ------------------------------ • [SLOW TEST:8.891 seconds] VMIPreset /root/go/src/kubevirt.io/kubevirt/tests/vmipreset_test.go:42 Preset Matching /root/go/src/kubevirt.io/kubevirt/tests/vmipreset_test.go:135 Should reject presets that conflict with VirtualMachineInstance settings /root/go/src/kubevirt.io/kubevirt/tests/vmipreset_test.go:167 ------------------------------ ••••• ------------------------------ • [SLOW TEST:32.869 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:8.320 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given an vm /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:10.032 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi preset /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:6.114 seconds] User Access 
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi replica set /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ ••• ------------------------------ • [SLOW TEST:29.203 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should update VirtualMachine once VMIs are up /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:195 ------------------------------ • [SLOW TEST:6.389 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should remove VirtualMachineInstance once the VMI is marked for deletion /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:204 ------------------------------ • ------------------------------ • [SLOW TEST:86.665 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should recreate VirtualMachineInstance if it gets deleted /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:245 ------------------------------ • [SLOW TEST:75.602 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should recreate VirtualMachineInstance if the VirtualMachineInstance's pod gets deleted /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:265 ------------------------------ • [SLOW TEST:44.480 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should stop VirtualMachineInstance if running set to false /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:325 ------------------------------ • [SLOW TEST:299.871 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should start and stop VirtualMachineInstance multiple times /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:333 ------------------------------ • [SLOW TEST:75.979 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should not update the VirtualMachineInstance spec if Running /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:346 ------------------------------ • [SLOW TEST:283.807 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should survive guest shutdown, multiple times /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:387 ------------------------------ 2018/07/27 07:00:50 read closing down: EOF 2018/07/27 07:00:50 read closing down: EOF 2018/07/27 07:00:50 read closing down: EOF VM testvmi8jpxv was scheduled to start • [SLOW TEST:16.365 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 Using virtctl interface 
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435 should start a VirtualMachineInstance once /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:436 ------------------------------ VM testvmi254m6 was scheduled to stop • [SLOW TEST:43.441 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435 should stop a VirtualMachineInstance once /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:467 ------------------------------ •volumedisk0 compute ------------------------------ • [SLOW TEST:48.798 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 2018/07/27 07:02:39 read closing down: EOF with 3 CPU cores /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:56 should report 3 cpu cores under guest OS /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:62 ------------------------------ • [SLOW TEST:20.274 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:108 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-2Mi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ S [SKIPPING] [0.217 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:108 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-1Gi [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 No node with hugepages hugepages-1Gi capacity /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:160 ------------------------------ • ------------------------------ • [SLOW TEST:110.826 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238 when CPU model defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:284 should report defined CPU model /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:285 ------------------------------ 2018/07/27 07:04:51 read closing down: EOF 2018/07/27 07:06:56 read closing down: EOF • [SLOW TEST:125.157 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238 when CPU model equals to passthrough /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:312 should report exactly the same model as node CPU /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:313 ------------------------------ • [SLOW TEST:111.774 seconds] 2018/07/27 07:08:48 read closing down: EOF Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 
with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238 when CPU model not defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:336 should report CPU model from libvirt capabilities /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:337 ------------------------------ • [SLOW TEST:51.622 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 New VirtualMachineInstance with all supported drives /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:357 should have all the device nodes /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:380 ------------------------------ 2018/07/27 07:09:39 read closing down: EOF • [SLOW TEST:6.309 seconds] Subresource Api /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37 Rbac Authorization /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48 with correct permissions /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:51 should be allowed to access subresource endpoint /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:52 ------------------------------ ••• ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42 Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60 with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193 should succeed to generate a VM JSON file using oc-process command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:150 Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1390 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42 Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60 with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193 with given VM JSON from the Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152 should succeed to create a VM using oc-create command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:156 Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1390 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42 Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60 with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193 with given VM JSON from the Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152 with given VM from the VM JSON /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158 should succeed to launch a VMI using oc-patch command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:161 Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1390 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42 Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60 with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193 with given VM JSON from the Template 
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152 with given VM from the VM JSON /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158 with given VMI from the VM /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:163 should succeed to terminate the VMI using oc-patch command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:166 Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1390 ------------------------------ • [SLOW TEST:59.807 seconds] Health Monitoring /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:37 2018/07/27 07:10:57 read closing down: EOF A VirtualMachineInstance with a watchdog device /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:56 should be shut down when the watchdog expires /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:57 ------------------------------ • [SLOW TEST:45.655 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81 should have cloud-init data 2018/07/27 07:11:43 read closing down: EOF /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:82 ------------------------------ • [SLOW TEST:159.467 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81 with injected ssh-key /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:92 should have ssh-key under authorized keys /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:93 ------------------------------ 2018/07/27 07:14:23 read closing down: EOF 2018/07/27 07:15:09 read closing down: EOF • [SLOW TEST:57.096 seconds] 2018/07/27 07:15:20 read closing down: EOF CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userData source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:118 should process provided cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:119 ------------------------------ 2018/07/27 07:16:06 read closing down: EOF • [SLOW TEST:46.176 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 should take user-data from k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:162 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to start a vmi [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1352 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to stop a running vmi [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139 Skip Windows tests that requires PVC disk-windows 
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1352 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have correct UUID /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1352 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have pod IP /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1352 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to start a vmi /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1352 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to stop a vmi /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1352 ------------------------------ • [SLOW TEST:35.627 seconds] LeaderElection /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:43 Start a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:53 when the controller pod is not running /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:54 should success /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:55 ------------------------------ • [SLOW TEST:48.434 seconds] 2018/07/27 07:17:30 read closing down: EOF Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:48.062 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with CDRom PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 2018/07/27 07:18:18 read closing down: EOF 
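The two Storage specs just above boot a VirtualMachineInstance from a pre-provisioned Alpine PersistentVolumeClaim, once attached as a plain disk and once as a CDRom. A minimal sketch of the kind of manifest such a spec creates follows; the VMI name, claim name and memory request are illustrative assumptions rather than values taken from this run, and the field layout follows the kubevirt.io/v1alpha2 examples of that era:

# Illustrative only: a PVC-backed VMI similar to what the "with Alpine PVC" specs start.
cat <<'EOF' | kubectl apply -f -
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
  name: vmi-alpine-pvc          # assumed name
spec:
  domain:
    resources:
      requests:
        memory: 64Mi
    devices:
      disks:
      - name: pvcdisk
        volumeName: pvcvolume   # v1alpha2 disks reference their volume by volumeName
        disk:
          bus: virtio
  volumes:
  - name: pvcvolume
    persistentVolumeClaim:
      claimName: disk-alpine    # assumed claim name
EOF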
2018/07/27 07:20:28 read closing down: EOF • [SLOW TEST:134.400 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 2018/07/27 07:22:25 read closing down: EOF • [SLOW TEST:116.237 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with CDRom PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 2018/07/27 07:23:19 read closing down: EOF • [SLOW TEST:50.678 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With an emptyDisk defined /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:113 should create a writeable emptyDisk with the right capacity /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:115 ------------------------------ 2018/07/27 07:24:10 read closing down: EOF • [SLOW TEST:50.806 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With an emptyDisk defined and a specified serial number /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:163 should create a writeable emptyDisk with the specified serial number /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:165 ------------------------------ 2018/07/27 07:25:00 read closing down: EOF • [SLOW TEST:50.242 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205 should be successfully started /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:207 ------------------------------ • [SLOW TEST:107.874 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205 should not persist data /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:218 ------------------------------ 2018/07/27 07:26:48 read closing down: EOF 2018/07/27 07:26:48 read closing down: EOF 2018/07/27 07:29:00 read closing down: EOF • [SLOW TEST:131.310 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With VirtualMachineInstance with two PVCs /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:266 should start vmi multiple times /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:278 
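The emptyDisk specs above attach an extra scratch disk that KubeVirt allocates on the node, optionally tagged with a caller-supplied serial number that becomes visible inside the guest. A rough sketch of the extra disk/volume pair involved is below; the names, the 2Gi capacity and the serial string are assumptions for illustration, and the boot disk uses the registryDisk source that the RegistryDisk specs further down exercise:

# Illustrative only: a VMI with a registryDisk boot disk plus an emptyDisk scratch volume.
cat <<'EOF' | kubectl apply -f -
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
  name: vmi-emptydisk           # assumed name
spec:
  domain:
    resources:
      requests:
        memory: 64Mi
    devices:
      disks:
      - name: bootdisk
        volumeName: bootvolume
        disk:
          bus: virtio
      - name: emptydisk
        volumeName: emptyvolume
        serial: sn-1234          # reported to the guest as the disk serial number
        disk:
          bus: virtio
  volumes:
  - name: bootvolume
    registryDisk:
      image: kubevirt/cirros-registry-disk-demo   # assumed demo image
  - name: emptyvolume
    emptyDisk:
      capacity: 2Gi
EOF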
------------------------------ • [SLOW TEST:70.226 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting and stopping the same VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91 should success multiple times /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92 ------------------------------ • [SLOW TEST:15.041 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112 should not modify the spec on status update /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113 ------------------------------ • [SLOW TEST:29.536 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting multiple VMIs /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130 should success /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131 ------------------------------ • ------------------------------ • [SLOW TEST:16.238 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 should start it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:76 ------------------------------ • [SLOW TEST:17.151 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 should attach virt-launcher to it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:82 ------------------------------ ••••2018/07/27 07:32:20 read closing down: EOF ------------------------------ • [SLOW TEST:50.514 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:170 should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 Alpine as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 2018/07/27 07:32:46 read closing down: EOF • [SLOW TEST:26.345 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:170 should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 Cirros as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:15.148 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with user-data 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:201 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:202 should retry starting the VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:203 ------------------------------ • [SLOW TEST:15.879 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:201 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:202 should log warning and proceed once the secret is there /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:233 ------------------------------ • [SLOW TEST:35.312 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 when virt-launcher crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:281 should be stopped and have Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:282 ------------------------------ • [SLOW TEST:23.490 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 when virt-handler crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:304 should recover and continue management /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:305 ------------------------------ • [SLOW TEST:82.529 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 when virt-handler is responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:335 should indicate that a node is ready for vmis /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:336 ------------------------------ • [SLOW TEST:96.822 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 when virt-handler is not responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:366 the node controller should react /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:405 ------------------------------ • [SLOW TEST:16.571 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with node tainted /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:458 the vmi with tolerations should be scheduled /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:480 ------------------------------ • ------------------------------ S [SKIPPING] [0.229 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:530 should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 kubevirt-test-default [It] 
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Skip log query tests for JENKINS ci test environment /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:535 ------------------------------ S [SKIPPING] [0.055 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:530 should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 kubevirt-test-alternative [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Skip log query tests for JENKINS ci test environment /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:535 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.066 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:591 should enable emulation in virt-launcher [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:603 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:599 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.066 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:591 should be reflected in domain XML [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:640 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:599 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.072 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:591 should request a TUN device but not KVM [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:684 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:599 ------------------------------ •••• ------------------------------ • [SLOW TEST:17.569 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Delete a VirtualMachineInstance's Pod /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:836 should result in the VirtualMachineInstance moving to a finalized state /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:837 ------------------------------ • [SLOW TEST:34.523 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:868 with an active pod. 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869 should result in pod being terminated /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:870 ------------------------------ • [SLOW TEST:21.692 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:868 with grace period greater than 0 /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:893 should run graceful shutdown /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:894 ------------------------------ • [SLOW TEST:29.859 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Killed VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:945 should be in Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:946 ------------------------------ • [SLOW TEST:25.648 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Killed VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:945 should be left alone by virt-handler /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:973 ------------------------------ • [SLOW TEST:18.482 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should successfully start with hook sidecar annotation /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:60 ------------------------------ • [SLOW TEST:19.064 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should call Collect and OnDefineDomain on the hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:67 ------------------------------ • [SLOW TEST:17.587 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should update domain XML with SM BIOS properties /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:83 ------------------------------ 2018/07/27 07:41:29 read closing down: EOF • [SLOW TEST:50.101 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 with a cirros image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67 should return that we are running cirros /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:68 ------------------------------ • [SLOW TEST:53.098 seconds] 2018/07/27 07:42:22 read closing down: EOF Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 with a fedora image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:77 should return that we are running 
fedora /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:78 ------------------------------ 2018/07/27 07:43:13 read closing down: EOF 2018/07/27 07:43:15 read closing down: EOF 2018/07/27 07:43:15 read closing down: EOF 2018/07/27 07:43:16 read closing down: EOF • [SLOW TEST:53.770 seconds] 2018/07/27 07:43:16 read closing down: EOF Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 should be able to reconnect to console multiple times /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:87 ------------------------------ • [SLOW TEST:16.058 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 should wait until the virtual machine is in running state and return a stream interface /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:103 ------------------------------ • [SLOW TEST:30.218 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 should fail waiting for the virtual machine instance to be running /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:111 ------------------------------ • [SLOW TEST:30.222 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:37 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 should fail waiting for the expecter /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:134 ------------------------------ 2018/07/27 07:45:59 read closing down: EOF 2018/07/27 07:46:09 read closing down: EOF 2018/07/27 07:46:20 read closing down: EOF 2018/07/27 07:46:30 read closing down: EOF 2018/07/27 07:46:31 read closing down: EOF 2018/07/27 07:46:32 read closing down: EOF • [SLOW TEST:119.895 seconds] 2018/07/27 07:46:32 read closing down: EOF 2018/07/27 07:46:32 read closing down: EOF Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 should be able to reach /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 the Inbound VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 2018/07/27 07:46:34 read closing down: EOF 2018/07/27 07:46:34 read closing down: EOF 2018/07/27 07:46:34 read closing down: EOF •2018/07/27 07:46:35 read closing down: EOF •2018/07/27 07:46:36 read closing down: EOF 2018/07/27 07:46:36 read closing down: EOF 2018/07/27 07:46:37 read closing down: EOF 2018/07/27 07:46:38 read closing down: EOF •2018/07/27 07:46:38 read closing down: EOF ------------------------------ • [SLOW TEST:5.071 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on the same node from Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ 
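The reachability specs above look up the IP address the VMI reports in its status and then probe it from a short-lived client pod, which is the same pattern behind the netcat pods dumped later in this log. A hand-run equivalent might look roughly like this, assuming the VMI status exposes interface addresses; the VMI name, probe pod name and busybox image are assumptions, while port 1500 mirrors the port the netcat pods target:

# Illustrative only: probe a running VMI from a throwaway pod, as these networking specs do.
VMI_IP=$(kubectl get vmi testvmi -o jsonpath='{.status.interfaces[0].ipAddress}')   # assumed VMI name
kubectl run netcat-probe --restart=Never --image=busybox -- \
  sh -c "nc -w 5 ${VMI_IP} 1500 && echo succeeded"
kubectl logs netcat-probe   # inspect once the pod completes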
••••••2018/07/27 07:47:57 read closing down: EOF ------------------------------ • [SLOW TEST:54.654 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom interface model /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:383 should expose the right device type to the guest /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:384 ------------------------------ 2018/07/27 07:47:57 read closing down: EOF 2018/07/27 07:47:57 read closing down: EOF 2018/07/27 07:47:58 read closing down: EOF •2018/07/27 07:48:51 read closing down: EOF 2018/07/27 07:48:52 read closing down: EOF ------------------------------ • [SLOW TEST:53.853 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:417 should configure custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:418 ------------------------------ 2018/07/27 07:49:46 read closing down: EOF 2018/07/27 07:49:47 read closing down: EOF • [SLOW TEST:54.845 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom MAC address in non-conventional format /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:429 should configure custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:430 ------------------------------ 2018/07/27 07:50:43 read closing down: EOF • [SLOW TEST:56.562 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 2018/07/27 07:50:43 read closing down: EOF VirtualMachineInstance with custom MAC address and slirp interface /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:442 should configure custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:443 ------------------------------ 2018/07/27 07:51:38 read closing down: EOF • [SLOW TEST:55.766 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with disabled automatic attachment of interfaces /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:455 should not configure any external interfaces /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:456 ------------------------------ 2018/07/27 07:51:39 read closing down: EOF 2018/07/27 07:52:35 read closing down: EOF Service cluster-ip-vmi successfully exposed for virtualmachineinstance testvminpq7m • [SLOW TEST:61.302 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:68 Should expose a Cluster IP service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:71 ------------------------------ Service cluster-ip-target-vmi successfully exposed for virtualmachineinstance testvminpq7m •Service node-port-vmi successfully exposed for virtualmachineinstance testvminpq7m ------------------------------ • [SLOW TEST:8.152 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose NodePort service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:124 Should expose a NodePort service on a VMI and connect to it 
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:129 ------------------------------ 2018/07/27 07:53:47 read closing down: EOF Service cluster-ip-udp-vmi successfully exposed for virtualmachineinstance testvmidv46q • [SLOW TEST:63.296 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VMI /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:166 Expose ClusterIP UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:173 Should expose a ClusterIP service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:177 ------------------------------ Service node-port-udp-vmi successfully exposed for virtualmachineinstance testvmidv46q Pod name: disks-images-provider-8746n Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-mbzhm Pod phase: Running copy all images to host mount directory Pod name: virt-api-7d79764579-d4h62 Pod phase: Running 2018/07/27 11:53:40 http: TLS handshake error from 10.244.0.1:52174: EOF level=info timestamp=2018-07-27T11:53:42.244627Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 11:53:50 http: TLS handshake error from 10.244.0.1:52198: EOF level=info timestamp=2018-07-27T11:53:53.488173Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T11:53:53.575473Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 11:54:00 http: TLS handshake error from 10.244.0.1:52222: EOF 2018/07/27 11:54:10 http: TLS handshake error from 10.244.0.1:52246: EOF level=info timestamp=2018-07-27T11:54:12.290553Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 11:54:20 http: TLS handshake error from 10.244.0.1:52270: EOF level=info timestamp=2018-07-27T11:54:23.515140Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-27T11:54:23.619136Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/27 11:54:30 http: TLS handshake error from 10.244.0.1:52294: EOF 2018/07/27 11:54:40 http: TLS handshake error from 10.244.0.1:52318: EOF level=info timestamp=2018-07-27T11:54:42.243491Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/27 11:54:50 http: TLS handshake error from 10.244.0.1:52342: EOF Pod name: virt-api-7d79764579-dpnls Pod phase: Running 2018/07/27 11:52:27 http: TLS handshake error from 10.244.1.1:55986: EOF 2018/07/27 11:52:37 http: TLS handshake error from 10.244.1.1:55992: EOF 2018/07/27 11:52:47 http: TLS handshake error from 10.244.1.1:56002: EOF 2018/07/27 11:52:57 http: TLS handshake error from 10.244.1.1:56008: EOF 2018/07/27 11:53:07 http: TLS handshake error from 10.244.1.1:56014: EOF 2018/07/27 11:53:17 http: TLS handshake error from 10.244.1.1:56020: EOF 2018/07/27 11:53:27 http: TLS handshake error 
from 10.244.1.1:56026: EOF 2018/07/27 11:53:37 http: TLS handshake error from 10.244.1.1:56032: EOF 2018/07/27 11:53:47 http: TLS handshake error from 10.244.1.1:56038: EOF 2018/07/27 11:53:57 http: TLS handshake error from 10.244.1.1:56044: EOF 2018/07/27 11:54:07 http: TLS handshake error from 10.244.1.1:56050: EOF 2018/07/27 11:54:17 http: TLS handshake error from 10.244.1.1:56056: EOF 2018/07/27 11:54:27 http: TLS handshake error from 10.244.1.1:56062: EOF 2018/07/27 11:54:37 http: TLS handshake error from 10.244.1.1:56068: EOF 2018/07/27 11:54:47 http: TLS handshake error from 10.244.1.1:56074: EOF Pod name: virt-controller-7d57d96b65-48mp6 Pod phase: Running level=info timestamp=2018-07-27T11:47:57.825986Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv5x28 kind= uid=e7f5983b-9192-11e8-bb36-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T11:47:57.827210Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiv5x28 kind= uid=e7f5983b-9192-11e8-bb36-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T11:48:51.679190Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi7p24w kind= uid=080f0d82-9193-11e8-bb36-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T11:48:51.680419Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi7p24w kind= uid=080f0d82-9193-11e8-bb36-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T11:48:51.757117Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi7p24w\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi7p24w" level=info timestamp=2018-07-27T11:49:46.527918Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmizmjvq kind= uid=28bfbfed-9193-11e8-bb36-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T11:49:46.529938Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmizmjvq kind= uid=28bfbfed-9193-11e8-bb36-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T11:50:43.090204Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirzbzm kind= uid=4a766d26-9193-11e8-bb36-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T11:50:43.091409Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirzbzm kind= uid=4a766d26-9193-11e8-bb36-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T11:51:38.854948Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvminpq7m kind= uid=6bb3cdde-9193-11e8-bb36-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T11:51:38.856185Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvminpq7m kind= uid=6bb3cdde-9193-11e8-bb36-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info 
timestamp=2018-07-27T11:51:38.973006Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvminpq7m\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvminpq7m" level=info timestamp=2018-07-27T11:52:48.340571Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmidv46q kind= uid=951d9d2a-9193-11e8-bb36-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-27T11:52:48.341647Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmidv46q kind= uid=951d9d2a-9193-11e8-bb36-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-27T11:52:48.436999Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmidv46q\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmidv46q" Pod name: virt-controller-7d57d96b65-pr4h2 Pod phase: Running level=info timestamp=2018-07-27T11:16:09.204121Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-2pblj Pod phase: Running level=info timestamp=2018-07-27T11:49:07.074911Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-27T11:49:07.100581Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi7p24w kind= uid=080f0d82-9193-11e8-bb36-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-27T11:49:07.100722Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmi7p24w kind= uid=080f0d82-9193-11e8-bb36-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-27T11:49:07.104233Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi7p24w kind= uid=080f0d82-9193-11e8-bb36-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-27T11:50:58.198150Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmirzbzm kind= uid=4a766d26-9193-11e8-bb36-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-27T11:50:59.155322Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type ADDED" level=info timestamp=2018-07-27T11:50:59.155502Z pos=vm.go:657 component=virt-handler namespace=kubevirt-test-default name=testvmirzbzm kind=Domain uid=4a766d26-9193-11e8-bb36-525500d15501 msg="Domain is in state Paused reason StartingUp" level=info timestamp=2018-07-27T11:50:59.525764Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-27T11:50:59.525927Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvmirzbzm kind=Domain uid=4a766d26-9193-11e8-bb36-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-27T11:50:59.548378Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-27T11:50:59.549947Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmirzbzm kind= uid=4a766d26-9193-11e8-bb36-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-27T11:50:59.550008Z pos=vm.go:392 component=virt-handler namespace=kubevirt-test-default name=testvmirzbzm kind= uid=4a766d26-9193-11e8-bb36-525500d15501 msg="No update processing required" level=info timestamp=2018-07-27T11:50:59.567674Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmirzbzm kind= uid=4a766d26-9193-11e8-bb36-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-27T11:50:59.567760Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmirzbzm kind= uid=4a766d26-9193-11e8-bb36-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-27T11:50:59.571182Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmirzbzm kind= uid=4a766d26-9193-11e8-bb36-525500d15501 msg="Synchronization loop succeeded." Pod name: virt-handler-rbbg2 Pod phase: Running level=info timestamp=2018-07-27T11:51:54.536721Z pos=vm.go:392 component=virt-handler namespace=kubevirt-test-default name=testvminpq7m kind= uid=6bb3cdde-9193-11e8-bb36-525500d15501 msg="No update processing required" level=info timestamp=2018-07-27T11:51:54.557423Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminpq7m kind= uid=6bb3cdde-9193-11e8-bb36-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-27T11:51:54.557560Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvminpq7m kind= uid=6bb3cdde-9193-11e8-bb36-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-27T11:51:54.561830Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminpq7m kind= uid=6bb3cdde-9193-11e8-bb36-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-27T11:53:02.619690Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmidv46q kind= uid=951d9d2a-9193-11e8-bb36-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-27T11:53:03.573201Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type ADDED" level=info timestamp=2018-07-27T11:53:03.573958Z pos=vm.go:657 component=virt-handler namespace=kubevirt-test-default name=testvmidv46q kind=Domain uid=951d9d2a-9193-11e8-bb36-525500d15501 msg="Domain is in state Paused reason StartingUp" level=info timestamp=2018-07-27T11:53:03.930970Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-27T11:53:03.931728Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvmidv46q kind=Domain uid=951d9d2a-9193-11e8-bb36-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-27T11:53:03.953865Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmidv46q kind= uid=951d9d2a-9193-11e8-bb36-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-27T11:53:03.953971Z pos=vm.go:392 component=virt-handler namespace=kubevirt-test-default name=testvmidv46q kind= uid=951d9d2a-9193-11e8-bb36-525500d15501 msg="No update processing required" level=info timestamp=2018-07-27T11:53:03.965113Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-27T11:53:03.985939Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmidv46q kind= uid=951d9d2a-9193-11e8-bb36-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-27T11:53:03.986077Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmidv46q kind= uid=951d9d2a-9193-11e8-bb36-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-27T11:53:04.018984Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmidv46q kind= uid=951d9d2a-9193-11e8-bb36-525500d15501 msg="Synchronization loop succeeded." Pod name: netcat2t6j6 Pod phase: Succeeded ++ head -n 1 +++ nc 192.168.66.102 30017 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcat7h974 Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.85 1500 -i 1 -w 1 Hello World! + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 succeeded Pod name: netcat9ghmp Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.85 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcat9wq6m Pod phase: Succeeded ++ head -n 1 +++ nc -ul 28016 +++ echo +++ nc -up 28016 10.106.171.65 28017 -i 1 -w 1 Hello UDP World! succeeded + x='Hello UDP World!' + echo 'Hello UDP World!' + '[' 'Hello UDP World!' = 'Hello UDP World!' ']' + echo succeeded + exit 0 Pod name: netcatb75t4 Pod phase: Failed ++ head -n 1 +++ nc wrongservice.kubevirt-test-default 1500 -i 1 -w 1 Ncat: Could not resolve hostname "wrongservice.kubevirt-test-default": Name or service not known. QUITTING. + x= + echo '' + '[' '' = 'Hello World!' ']' + echo failed + exit 1 failed Pod name: netcatbmq59 Pod phase: Succeeded ++ head -n 1 +++ nc my-subdomain.myvmi.kubevirt-test-default 1500 -i 1 -w 1 Hello World! 
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0

Pod name: netcatdxhp2
Pod phase: Succeeded
++ head -n 1
+++ nc 10.97.52.172 27017 -i 1 -w 1
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Hello World!
succeeded

Pod name: netcatn6z9c
Pod phase: Succeeded
++ head -n 1
+++ echo
+++ nc -ul 29016
+++ nc -up 29016 10.97.172.120 29017 -i 1 -w 1
+ x='Hello UDP World!'
+ echo 'Hello UDP World!'
+ '[' 'Hello UDP World!' = 'Hello UDP World!' ']'
+ echo succeeded
+ exit 0
Hello UDP World!
succeeded

Pod name: netcatn7cbq
Pod phase: Running
++ head -n 1
+++ echo
+++ nc -ul 31016
+++ nc -up 31016 192.168.66.101 31017 -i 1 -w 1

Pod name: netcatr944c
Pod phase: Succeeded
++ head -n 1
+++ nc 192.168.66.101 30017 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0

Pod name: netcatw6gnm
Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.85 1500 -i 1 -w 1
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Hello World!
succeeded

Pod name: netcatwt879
Pod phase: Succeeded
++ head -n 1
+++ nc 10.244.1.85 1500 -i 1 -w 1
Hello World!
succeeded
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0

Pod name: netcatz6mpm
Pod phase: Succeeded
++ head -n 1
+++ nc myservice.kubevirt-test-default 1500 -i 1 -w 1
+ x='Hello World!'
+ echo 'Hello World!'
+ '[' 'Hello World!' = 'Hello World!' ']'
+ echo succeeded
+ exit 0
Hello World!
succeeded

Pod name: virt-launcher-testvmi7nw7d-ll8fp
Pod phase: Running
level=info timestamp=2018-07-27T11:44:47.112649Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-27T11:44:47.672130Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-27T11:44:47.676663Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:44:47.926590Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-27T11:44:47.951651Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:44:47.964051Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:44:47.964203Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-27T11:44:47.970305Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmi7nw7d kind= uid=6db7e6a2-9192-11e8-bb36-525500d15501 msg="Domain started."
level=info timestamp=2018-07-27T11:44:47.987133Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi7nw7d kind= uid=6db7e6a2-9192-11e8-bb36-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-27T11:44:47.988618Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-27T11:44:47.991727Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-27T11:44:48.019014Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi7nw7d kind= uid=6db7e6a2-9192-11e8-bb36-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-27T11:44:48.104577Z pos=virt-launcher.go:214 component=virt-launcher msg="Detected domain with UUID e7aef3a5-9d1c-4488-bdc9-74d0c4d4d2ca" level=info timestamp=2018-07-27T11:44:48.104776Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-27T11:44:49.108346Z pos=monitor.go:222 component=virt-launcher msg="Found PID for e7aef3a5-9d1c-4488-bdc9-74d0c4d4d2ca: 181" Pod name: virt-launcher-testvmi7p24w-7l2pm Pod phase: Running level=info timestamp=2018-07-27T11:49:06.105487Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-07-27T11:49:06.765767Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-27T11:49:06.769998Z pos=virt-launcher.go:214 component=virt-launcher msg="Detected domain with UUID b997111c-c05e-463f-889f-3053d795ac0a" level=info timestamp=2018-07-27T11:49:06.770192Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-27T11:49:06.776313Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-27T11:49:07.013658Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-27T11:49:07.036664Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-27T11:49:07.038681Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-27T11:49:07.038736Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-27T11:49:07.059791Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmi7p24w kind= uid=080f0d82-9193-11e8-bb36-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-27T11:49:07.062718Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi7p24w kind= uid=080f0d82-9193-11e8-bb36-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-27T11:49:07.070094Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-27T11:49:07.075280Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-27T11:49:07.103552Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi7p24w kind= uid=080f0d82-9193-11e8-bb36-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-27T11:49:07.775882Z pos=monitor.go:222 component=virt-launcher msg="Found PID for b997111c-c05e-463f-889f-3053d795ac0a: 174" Pod name: virt-launcher-testvmidt75d-gq9bv Pod phase: Running level=info timestamp=2018-07-27T11:44:54.086318Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-07-27T11:44:55.442120Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-27T11:44:55.462278Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-27T11:44:55.494793Z pos=virt-launcher.go:214 component=virt-launcher msg="Detected domain with UUID 649543f3-0c58-4941-9376-7685f2c0d477" level=info timestamp=2018-07-27T11:44:55.495230Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-27T11:44:56.159297Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-27T11:44:56.205336Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-27T11:44:56.215650Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-27T11:44:56.276180Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-27T11:44:56.303340Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmidt75d kind= uid=6db6026c-9192-11e8-bb36-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-27T11:44:56.306131Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmidt75d kind= uid=6db6026c-9192-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:44:56.306841Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:44:56.315558Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:44:56.327763Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmidt75d kind= uid=6db6026c-9192-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:44:56.503821Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 649543f3-0c58-4941-9376-7685f2c0d477: 198"
Pod name: virt-launcher-testvmidv46q-bgjfl
Pod phase: Running
level=info timestamp=2018-07-27T11:53:02.866100Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-27T11:53:03.564086Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-27T11:53:03.574677Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:53:03.586911Z pos=virt-launcher.go:214 component=virt-launcher msg="Detected domain with UUID fb719bb8-3e43-48d7-a8a6-e42b72179e11"
level=info timestamp=2018-07-27T11:53:03.587124Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-27T11:53:03.880895Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-27T11:53:03.924655Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:53:03.931970Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:53:03.935276Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-27T11:53:03.937129Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmidv46q kind= uid=951d9d2a-9193-11e8-bb36-525500d15501 msg="Domain started."
level=info timestamp=2018-07-27T11:53:03.939020Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmidv46q kind= uid=951d9d2a-9193-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:53:03.960252Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:53:03.968610Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:53:04.016729Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmidv46q kind= uid=951d9d2a-9193-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:53:04.595102Z pos=monitor.go:222 component=virt-launcher msg="Found PID for fb719bb8-3e43-48d7-a8a6-e42b72179e11: 182"
Pod name: virt-launcher-testvmigcdgc-bwl8b
Pod phase: Running
level=info timestamp=2018-07-27T11:47:18.037480Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-27T11:47:18.043408Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:47:18.244860Z pos=virt-launcher.go:214 component=virt-launcher msg="Detected domain with UUID 75a0b6f4-0732-4723-a401-bb11c6d37ffb"
level=info timestamp=2018-07-27T11:47:18.245298Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-27T11:47:18.334293Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-27T11:47:18.361842Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:47:18.363185Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmigcdgc kind= uid=c6e5d4b8-9192-11e8-bb36-525500d15501 msg="Domain started."
level=info timestamp=2018-07-27T11:47:18.364455Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmigcdgc kind= uid=c6e5d4b8-9192-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:47:18.365221Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:47:18.365283Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-27T11:47:18.407515Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:47:18.413734Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:47:18.419629Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmigcdgc kind= uid=c6e5d4b8-9192-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:47:18.426430Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmigcdgc kind= uid=c6e5d4b8-9192-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:47:19.249037Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 75a0b6f4-0732-4723-a401-bb11c6d37ffb: 181"
Pod name: virt-launcher-testvminpq7m-cfr7r
Pod phase: Running
level=info timestamp=2018-07-27T11:51:53.604120Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-27T11:51:54.123489Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-27T11:51:54.127864Z pos=virt-launcher.go:214 component=virt-launcher msg="Detected domain with UUID 716fa63d-d371-4268-a4dc-123f06a9b6f8"
level=info timestamp=2018-07-27T11:51:54.130344Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:51:54.130656Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-27T11:51:54.484720Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-27T11:51:54.510534Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:51:54.511099Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvminpq7m kind= uid=6bb3cdde-9193-11e8-bb36-525500d15501 msg="Domain started."
level=info timestamp=2018-07-27T11:51:54.512539Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:51:54.512606Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-27T11:51:54.516298Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvminpq7m kind= uid=6bb3cdde-9193-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:51:54.532632Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:51:54.534851Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:51:54.561623Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvminpq7m kind= uid=6bb3cdde-9193-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:51:55.140275Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 716fa63d-d371-4268-a4dc-123f06a9b6f8: 185"
Pod name: virt-launcher-testvmirkd8m-qxzwv
Pod phase: Running
level=info timestamp=2018-07-27T11:44:51.554741Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-27T11:44:52.225623Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-27T11:44:52.248471Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:44:52.271278Z pos=virt-launcher.go:214 component=virt-launcher msg="Detected domain with UUID 687a42c9-8646-478a-b595-8a548d612a4c"
level=info timestamp=2018-07-27T11:44:52.271612Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-27T11:44:52.588610Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-27T11:44:52.642153Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:44:52.645570Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:44:52.673374Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-27T11:44:52.689558Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmirkd8m kind= uid=6db959a0-9192-11e8-bb36-525500d15501 msg="Domain started."
level=info timestamp=2018-07-27T11:44:52.691104Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmirkd8m kind= uid=6db959a0-9192-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:44:52.708347Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:44:52.712559Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:44:52.722399Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmirkd8m kind= uid=6db959a0-9192-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:44:53.281564Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 687a42c9-8646-478a-b595-8a548d612a4c: 191"
Pod name: virt-launcher-testvmirzbzm-x2twb
Pod phase: Running
level=info timestamp=2018-07-27T11:50:58.456690Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-27T11:50:59.143651Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-27T11:50:59.150355Z pos=virt-launcher.go:214 component=virt-launcher msg="Detected domain with UUID 514a5779-1130-497e-99ad-2df256ad2707"
level=info timestamp=2018-07-27T11:50:59.151643Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-27T11:50:59.159923Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:50:59.504160Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-27T11:50:59.524472Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:50:59.526668Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:50:59.530930Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-27T11:50:59.544299Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmirzbzm kind= uid=4a766d26-9193-11e8-bb36-525500d15501 msg="Domain started."
level=info timestamp=2018-07-27T11:50:59.546843Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmirzbzm kind= uid=4a766d26-9193-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:50:59.547536Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:50:59.551886Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:50:59.570517Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmirzbzm kind= uid=4a766d26-9193-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:51:00.158441Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 514a5779-1130-497e-99ad-2df256ad2707: 176"
Pod name: virt-launcher-testvmisjv9c-dh6kq
Pod phase: Running
level=info timestamp=2018-07-27T11:44:52.145291Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-27T11:44:52.147946Z pos=virt-launcher.go:214 component=virt-launcher msg="Detected domain with UUID 299a53c3-702c-4ed9-908d-0bd61e64fd08"
level=info timestamp=2018-07-27T11:44:52.151677Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-27T11:44:52.152760Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:44:52.464240Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-27T11:44:52.567960Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmisjv9c kind= uid=6db4a417-9192-11e8-bb36-525500d15501 msg="Domain started."
level=info timestamp=2018-07-27T11:44:52.582399Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:44:52.590302Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmisjv9c kind= uid=6db4a417-9192-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:44:52.591135Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:44:52.591187Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-27T11:44:52.642180Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:44:52.644980Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:44:52.657724Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmisjv9c kind= uid=6db4a417-9192-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:44:52.664234Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmisjv9c kind= uid=6db4a417-9192-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:44:53.164647Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 299a53c3-702c-4ed9-908d-0bd61e64fd08: 185"
Pod name: virt-launcher-testvmiv5x28-tms8z
Pod phase: Running
level=info timestamp=2018-07-27T11:48:12.546688Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
level=info timestamp=2018-07-27T11:48:13.113232Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11"
level=info timestamp=2018-07-27T11:48:13.122458Z pos=virt-launcher.go:214 component=virt-launcher msg="Detected domain with UUID 32ada9d7-032e-45d2-a9e1-8e8a1807009c"
level=info timestamp=2018-07-27T11:48:13.122665Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-27T11:48:13.122770Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:48:13.471454Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-27T11:48:13.496804Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmiv5x28 kind= uid=e7f5983b-9192-11e8-bb36-525500d15501 msg="Domain started."
level=info timestamp=2018-07-27T11:48:13.496962Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:48:13.498907Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:48:13.498989Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-27T11:48:13.500505Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmiv5x28 kind= uid=e7f5983b-9192-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:48:13.519439Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:48:13.521856Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:48:13.525020Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmiv5x28 kind= uid=e7f5983b-9192-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:48:14.126386Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 32ada9d7-032e-45d2-a9e1-8e8a1807009c: 174"
Pod name: virt-launcher-testvmizmjvq-2jf5d
Pod phase: Running
level=info timestamp=2018-07-27T11:50:01.229479Z pos=virt-launcher.go:214 component=virt-launcher msg="Detected domain with UUID ea9f98cb-dfe7-4f8c-aa14-446f31e82f0a"
level=info timestamp=2018-07-27T11:50:01.230027Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s"
level=info timestamp=2018-07-27T11:50:01.999988Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received"
level=info timestamp=2018-07-27T11:50:02.034494Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:50:02.038483Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:50:02.043024Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received"
level=info timestamp=2018-07-27T11:50:02.058794Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmizmjvq kind= uid=28bfbfed-9193-11e8-bb36-525500d15501 msg="Domain started."
level=info timestamp=2018-07-27T11:50:02.062085Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmizmjvq kind= uid=28bfbfed-9193-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:50:02.063139Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1"
level=info timestamp=2018-07-27T11:50:02.066644Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-27T11:50:02.077748Z pos=converter.go:535 component=virt-launcher msg="The network interface type of default was changed to e1000 due to unsupported interface type by qemu slirp network"
level=info timestamp=2018-07-27T11:50:02.078014Z pos=converter.go:751 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n"
level=info timestamp=2018-07-27T11:50:02.078033Z pos=converter.go:752 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local"
level=info timestamp=2018-07-27T11:50:02.083696Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmizmjvq kind= uid=28bfbfed-9193-11e8-bb36-525500d15501 msg="Synced vmi"
level=info timestamp=2018-07-27T11:50:02.235018Z pos=monitor.go:222 component=virt-launcher msg="Found PID for ea9f98cb-dfe7-4f8c-aa14-446f31e82f0a: 176"

• Failure [64.906 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose UDP service on a VMI
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:166
    Expose NodePort UDP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:205
      Should expose a NodePort service on a VMI and connect to it [It]
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:210

      Timed out after 60.006s.
      Expected
          : Running
      to equal
          : Succeeded

      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:247
------------------------------
STEP: Exposing the service via virtctl command
STEP: Getting back the cluster IP given for the service
STEP: Starting a pod which tries to reach the VMI via ClusterIP
STEP: Getting the node IP from all nodes
STEP: Starting a pod which tries to reach the VMI via NodePort
STEP: Waiting for the pod to report a successful connection attempt
2018/07/27 07:55:58 read closing down: EOF
2018/07/27 07:56:09 read closing down: EOF
Service cluster-ip-vmirs successfully exposed for vmirs replicasetrm5lc
• [SLOW TEST:76.452 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on a VMI replica set
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:253
    Expose ClusterIP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:286
      Should create a ClusterIP service on VMRS and connect to it
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:290
------------------------------
Service cluster-ip-vm successfully exposed for virtualmachine testvmigwm5c
VM testvmigwm5c was scheduled to start
2018/07/27 07:57:14 read closing down: EOF
• [SLOW TEST:66.459 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
  Expose service on an VM
  /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:318
    Expose ClusterIP service
    /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:362
      Connect to ClusterIP services that was set when VM was offline
      /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:363
------------------------------
• [SLOW TEST:8.799 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should scale
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    to three, to two and then to zero replicas
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:8.308 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should scale
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    to five, to six and then to zero replicas
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
••
------------------------------
• [SLOW TEST:18.200 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should update readyReplicas once VMIs are up
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:157
------------------------------
• [SLOW TEST:5.535 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should remove VMIs once it is marked for deletion
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:169
------------------------------
•
------------------------------
• [SLOW TEST:5.458 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should not scale when paused and scale when resume
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:223
------------------------------
• [SLOW TEST:10.572 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should remove the finished VM
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:279
------------------------------
• [SLOW TEST:15.551 seconds]
VNC
/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:54
    with VNC connection
    /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:62
      should allow accessing the VNC device
      /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:64
------------------------------
••2018/07/27 07:59:26 read closing down: EOF
2018/07/27 08:00:16 read closing down: EOF
2018/07/27 08:00:18 read closing down: EOF
------------------------------
• [SLOW TEST:102.205 seconds]
Slirp
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:39
  should be able to
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    VirtualMachineInstance with slirp interface
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
2018/07/27 08:00:19 read closing down: EOF
•
Waiting for namespace kubevirt-test-default to be removed, this can take a while ...
Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ...

Summarizing 1 Failure:

[Fail] Expose Expose UDP service on a VMI Expose NodePort UDP service [It] Should expose a NodePort service on a VMI and connect to it
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:247

Ran 132 of 148 Specs in 4611.593 seconds
FAIL! -- 131 Passed | 1 Failed | 0 Pending | 16 Skipped
--- FAIL: TestTests (4611.61s)
FAIL
make: *** [functest] Error 1
+ make cluster-down
./cluster/down.sh
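
Editor's note on the single failure above: the failing spec exposes a NodePort UDP service on a VMI with virtctl, starts a helper pod that probes the service via ClusterIP and then via NodePort, and expects that helper pod to reach phase Succeeded within 60s; it was still Running when the timeout hit. A minimal sketch for reproducing the same flow by hand against a cluster with KubeVirt installed is below. The service name, VMI name and port numbers are placeholders of mine, not the values used by expose_test.go, and the exact flags the test passes are in that file.

    # Sketch only: placeholder names and ports, not the exact values from expose_test.go.
    # Expose the VMI's UDP port through a NodePort service.
    virtctl expose virtualmachineinstance testvmi-example \
        --name udp-nodeport-example --type NodePort \
        --port 29017 --target-port 28017 --protocol UDP

    # Look up the allocated NodePort and the node addresses, then probe the
    # service from a helper pod (the test drives this step from a short-lived pod
    # and checks that pod's phase).
    kubectl get service udp-nodeport-example -o jsonpath='{.spec.ports[0].nodePort}'
    kubectl get nodes -o wide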