+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release + WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release + [[ k8s-1.10.3-release =~ openshift-.* ]] + [[ k8s-1.10.3-release =~ .*-1.9.3-.* ]] + export KUBEVIRT_PROVIDER=k8s-1.10.3 + KUBEVIRT_PROVIDER=k8s-1.10.3 + export KUBEVIRT_NUM_NODES=2 + KUBEVIRT_NUM_NODES=2 + export NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + export NAMESPACE=kube-system + NAMESPACE=kube-system + trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP + make cluster-down ./cluster/down.sh + make cluster-up ./cluster/up.sh Downloading ....... Downloading ....... 2018/07/25 16:45:46 Waiting for host: 192.168.66.101:22 2018/07/25 16:45:49 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/07/25 16:46:01 Connected to tcp://192.168.66.101:22 + kubeadm init --config /etc/kubernetes/kubeadm.conf [init] Using Kubernetes version: v1.10.3 [init] Using Authorization modes: [Node RBAC] [preflight] Running pre-flight checks. [WARNING FileExisting-crictl]: crictl not found in system path Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version. [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated etcd/ca certificate and key. [certificates] Generated etcd/server certificate and key. [certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1] [certificates] Generated etcd/peer certificate and key. [certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101] [certificates] Generated etcd/healthcheck-client certificate and key. [certificates] Generated apiserver-etcd-client certificate and key. [certificates] Generated sa key and public key. [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf" [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests". [init] This might take a minute or longer if the control plane images have to be pulled. 
[apiclient] All control plane components are healthy after 32.506177 seconds [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [markmaster] Will mark node node01 as master by adding a label and a taint [markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master="" [bootstraptoken] Using token: abcdef.1234567890123456 [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: kube-dns [addons] Applied essential addon: kube-proxy Your Kubernetes master has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:ab8f4a6b76769110f73263fd3dab3888bd99e9597f4fdc81dab28b9277b05417 + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole.rbac.authorization.k8s.io "flannel" created clusterrolebinding.rbac.authorization.k8s.io "flannel" created serviceaccount "flannel" created configmap "kube-flannel-cfg" created daemonset.extensions "kube-flannel-ds" created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node "node01" untainted 2018/07/25 16:46:48 Waiting for host: 192.168.66.102:22 2018/07/25 16:46:51 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/07/25 16:47:03 Connected to tcp://192.168.66.102:22 + kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true [preflight] Running pre-flight checks. [WARNING FileExisting-crictl]: crictl not found in system path Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl [discovery] Trying to connect to API Server "192.168.66.101:6443" [discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443" [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443" [discovery] Successfully established connection with API Server "192.168.66.101:6443" This node has joined the cluster: * Certificate signing request was sent to master and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the master to see this node join the cluster. Sending file modes: C0755 39588992 kubectl Sending file modes: C0600 5450 admin.conf Cluster "kubernetes" set. Cluster "kubernetes" set. 
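(For reference, the cluster bring-up traced above boils down to the following sequence. This is a sketch assembled from the trace, not an additional step in the run; the contents of /etc/kubernetes/kubeadm.conf are not shown in this log, and the node IPs, bootstrap token, and flannel manifest URL are simply the values used in this particular run.)

# On the first node (192.168.66.101): initialize the control plane from the
# pre-provisioned kubeadm config (file contents not shown in this log).
kubeadm init --config /etc/kubernetes/kubeadm.conf

# Deploy the flannel pod network and allow workloads on the master,
# exactly as cluster/up.sh does above.
kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f \
    https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 \
    node-role.kubernetes.io/master:NoSchedule-

# On the second node (192.168.66.102): join with the bootstrap token printed
# by kubeadm init; CA pinning is skipped because this is a throwaway CI cluster.
kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 \
    --ignore-preflight-errors=all \
    --discovery-token-unsafe-skip-ca-verification=true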
+ set +e + kubectl get nodes --no-headers + cluster/kubectl.sh get nodes --no-headers node01 Ready master 43s v1.10.3 node02 Ready 16s v1.10.3 + kubectl_rc=0 + '[' 0 -ne 0 ']' ++ kubectl get nodes --no-headers ++ grep NotReady ++ cluster/kubectl.sh get nodes --no-headers + '[' -n '' ']' + set -e + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 44s v1.10.3 node02 Ready 17s v1.10.3 + make cluster-sync ./cluster/build.sh Building ... Untagged: localhost:33231/kubevirt/virt-controller:devel Untagged: localhost:33231/kubevirt/virt-controller@sha256:1f4b381ed919b2ad5555e744c3f94fb32c665901f628009fef259877fb9f4510 Deleted: sha256:68a6d7d6806e9ccc5e0d2547f421ad165ce9101b585f7634d44e98042db67f71 Deleted: sha256:d84da4fd3b423175205aaaf1d3ba75c297ef0b136935287d11d4040a522d9292 Deleted: sha256:cb14be1dffc9637419bfd9c3e29deebe3c668d85adb8deb7bc4d729d1c3cf059 Deleted: sha256:a84f8a503592ad5bd70dfe3f75d9c35e9756bb1fd7cb5a1645097e24f52d95b9 Untagged: localhost:33231/kubevirt/virt-launcher:devel Untagged: localhost:33231/kubevirt/virt-launcher@sha256:674f7dc1765cdf69433a9e6214bb3d27bbd8e8da9bdc3ac69a45ee4fde7af056 Deleted: sha256:6ba5f5ccdda80214c00317e156304046b09eee40d2430bea017d47d93d4d534a Deleted: sha256:8fe008b98ba696cac052f20a4504321548bedd5711d682e773429bc93ede4c11 Deleted: sha256:ba199b9e52809b3fa6c7966dc014ea89ca6ef03ec2846780e43cbf77162a7f4c Deleted: sha256:95a43a585e6234de0752ac7bbbc8b05db560c2002d713fef5d1ab27294f316de Deleted: sha256:588c03d7b5888a7c38bd749f10e4f18fad4ca14b4b974d1a795f6f97086a385d Deleted: sha256:bfa6128188d9fbe78d9c2d5333b251208dd169a22bef5fe23011715f7fb8c5aa Deleted: sha256:4cf5b62db1c6c2852067e0e736c467a118b26cd27c86e92ba78e5fa4ee7e61bd Deleted: sha256:853721267665ab01c845150cdf4a603a8d5a6e11a20fd1540858f250de99471a Deleted: sha256:15d0a958d1f6ee66f77341fc7d143e7b9ef1ea4bf8b6934377d7db5db35c82c8 Deleted: sha256:dc9cd2670c6a27580b30083c65284da73617483c264510b72a2f734e3a6c4497 Deleted: sha256:405135d77242a045c05d18fc24b7238c12461baa455bddf5a1516ba93e338d4a Deleted: sha256:9ef232180d9bfad5971773a0a6b330d374db1f2873ee4a154c4223703c8dace1 Untagged: localhost:33231/kubevirt/virt-handler:devel Untagged: localhost:33231/kubevirt/virt-handler@sha256:cad3a744b7335d310b002074a325424203ebaa1eca4d37ecdcd66495629f127a Deleted: sha256:8f798ad5b8c95cf78f719ceef1291a7529b89343c75e0955e9e197678261520c Deleted: sha256:a7e84e6d6347159abc350eb1c550f9c34dae0a64069c284dae29c58e958e2082 Deleted: sha256:19cfc9232cbdaf26651eebf3e527766fad93e7c829b408b7b009fb355c752b77 Deleted: sha256:b5bd50136e21fd91cf8f8feb1a46a1de5c4934f7655a8ba07e53e5d446e88529 Untagged: localhost:33231/kubevirt/virt-api:devel Untagged: localhost:33231/kubevirt/virt-api@sha256:196e641f35cab820704946dc38f59956489e7ae21be2a3188b1395015d43c35e Deleted: sha256:09f9e9d3050a0ebb7b57bb35fa77ad464077a85906c314d8329b988f79efe3d4 Deleted: sha256:febcced5a5be3e787405b9e4f9504118bd82fd21a6897e2c41cc21670511121a Deleted: sha256:decef7de890bee54942cb54b82a42a6c3aa2faeef7d7188de1234acfc3d0f021 Deleted: sha256:f778b6e89eed3125a459c7a39085c908fe1bab33b9754844f590bc6177bd3f4c Untagged: localhost:33231/kubevirt/subresource-access-test:devel Untagged: localhost:33231/kubevirt/subresource-access-test@sha256:1c1a2846c4c735828b7b849b649a79c60bc368fbda04a4b58cf4d03e2dd0a93b Deleted: sha256:bd7e94bed758b6e10501f88a269e33accf64616033cefcc8af0b363baf5d8593 Deleted: sha256:c7e8b8679e92b4317aae5105667d5359980d342332a182eb5d8488d3c47662a9 Deleted: 
sha256:a5419ccf1ca1db8de1fff43df90a56ee9734a8e8430b1ebfd00e64465aadb425 Deleted: sha256:180ca5a05728db6d59f20b3973e7004413ba0fe9306a97ca68adaef6ceaf3498 Untagged: localhost:33231/kubevirt/example-hook-sidecar:devel Untagged: localhost:33231/kubevirt/example-hook-sidecar@sha256:cc0d7098c503bfc23322559ca2cda7ab12ec11471e34c7e0c35ae527cbdcce45 Deleted: sha256:9f41700a30cfc438c1dbfd70b41fb4dfa5a619f36df7974fe6dc0c6048defe71 Deleted: sha256:d98a212c283704c8a639367c9ec5653c1ac947f5b5212aee1a37a60f11dbf40d Deleted: sha256:abfcc4a15f65b67f88f57e9c51e4397f275f6fae057123f12eee3160a63d4cc9 Deleted: sha256:5138e199e8f9354528c0563207d68927b88fab0e0b0c4918a7b0d05779ead476 sha256:8314c812ee3200233db076e79036b39759d30fc3dd7fe921b323b19b14306fd6 go version go1.10 linux/amd64 go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh sha256:8314c812ee3200233db076e79036b39759d30fc3dd7fe921b323b19b14306fd6 go version go1.10 linux/amd64 go version go1.10 linux/amd64 find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory Compiling tests... compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 40.35 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2405aa62579a Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 1ac62e99a9e7 Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> c7b69424a0c5 Step 5/8 : USER 1001 ---> Using cache ---> e60ed5d8e78a Step 6/8 : COPY virt-controller /usr/bin/virt-controller ---> 4953e2645bb2 Removing intermediate container 898d628edc8c Step 7/8 : ENTRYPOINT /usr/bin/virt-controller ---> Running in 4c3b102f5b3d ---> 6a861a5597ab Removing intermediate container 4c3b102f5b3d Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-controller" '' ---> Running in 1a0acec1fe63 ---> 94af18c7d03c Removing intermediate container 1a0acec1fe63 Successfully built 94af18c7d03c Sending build context to Docker daemon 42.63 MB Step 1/10 : FROM kubevirt/libvirt:4.2.0 ---> 5f0bfe81a3e0 Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 65f548d54a2e Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> 04ae26de19c4 Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher ---> b82744621194 Removing intermediate container 169390b8bc54 Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> 316326cd55dd Removing intermediate container 439c6154a534 Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64 ---> Running in 8dcc61234b3b  ---> 046a325dce34 Removing intermediate container 8dcc61234b3b Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher ---> Running in b6eb07b12d9b  ---> 1d9f92ade184 Removing intermediate container b6eb07b12d9b Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector /usr/share/kubevirt/virt-launcher/ ---> 4587cdc1fcfe Removing intermediate container 6fae5905fbcf Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh ---> Running in e039bd7c564f ---> 172795e19c62 Removing intermediate container e039bd7c564f Step 10/10 : LABEL 
"kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-launcher" '' ---> Running in 48f603bd76ac ---> 6e6f3c1a885e Removing intermediate container 48f603bd76ac Successfully built 6e6f3c1a885e Sending build context to Docker daemon 41.65 MB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2405aa62579a Step 3/5 : COPY virt-handler /usr/bin/virt-handler ---> b4c934ab204f Removing intermediate container 6d1e8bdfe251 Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in 51c936e30be6 ---> 11fc41776707 Removing intermediate container 51c936e30be6 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-handler" '' ---> Running in f4e7f0f369b7 ---> f248b67213e3 Removing intermediate container f4e7f0f369b7 Successfully built f248b67213e3 Sending build context to Docker daemon 38.76 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2405aa62579a Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 830d77e8a3bb Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> 7075b0c3cdfd Step 5/8 : USER 1001 ---> Using cache ---> 4e21374fdc1d Step 6/8 : COPY virt-api /usr/bin/virt-api ---> 195121a3793e Removing intermediate container a318e665d154 Step 7/8 : ENTRYPOINT /usr/bin/virt-api ---> Running in 27c2fff51c38 ---> abae49a15765 Removing intermediate container 27c2fff51c38 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-api" '' ---> Running in e2bfd47fe739 ---> df7b04d3a74f Removing intermediate container e2bfd47fe739 Successfully built df7b04d3a74f Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:28 ---> cc510acfcd70 Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2405aa62579a Step 3/7 : ENV container docker ---> Using cache ---> 3370e25ee81a Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> 3f571283fdaa Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> 2722b024d103 Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> 8458081a089b Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Using cache ---> 95c52cb94d0f Successfully built 95c52cb94d0f Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2405aa62579a Step 3/5 : ENV container docker ---> Using cache ---> 3370e25ee81a Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> 006e94a74def Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "vm-killer" '' ---> Using cache ---> b96459304131 Successfully built b96459304131 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> 496290160351 Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> 081acc82039b Step 3/7 : ENV container docker ---> Using cache ---> 87a43203841c Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> bbc83781e0a9 Step 5/7 : ADD entry-point.sh / ---> Using cache ---> c588d7a778a6 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> e28b44b64988 Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' 
"registry-disk-v1alpha" '' ---> Using cache ---> 15dee9c3f228 Successfully built 15dee9c3f228 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32821/kubevirt/registry-disk-v1alpha:devel ---> 15dee9c3f228 Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> 59e724975b36 Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> 5aab327c7d42 Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Using cache ---> 6267f6181ea0 Successfully built 6267f6181ea0 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32821/kubevirt/registry-disk-v1alpha:devel ---> 15dee9c3f228 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 7226abe32103 Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> e77a7d24125c Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Using cache ---> 1f65ea7e845f Successfully built 1f65ea7e845f Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32821/kubevirt/registry-disk-v1alpha:devel ---> 15dee9c3f228 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 7226abe32103 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> 69497b9af146 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Using cache ---> 696b2b381ecc Successfully built 696b2b381ecc Sending build context to Docker daemon 35.56 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2405aa62579a Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> 939ec18dc9a4 Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 52b6bf037d32 Step 5/8 : USER 1001 ---> Using cache ---> 1e1560e0af32 Step 6/8 : COPY subresource-access-test /subresource-access-test ---> 00538b565a86 Removing intermediate container fe9323003449 Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in 14eea1fc98a5 ---> 15e9f1ffb58d Removing intermediate container 14eea1fc98a5 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "subresource-access-test" '' ---> Running in 8fbecaf8d4dc ---> 9f75f87ad16c Removing intermediate container 8fbecaf8d4dc Successfully built 9f75f87ad16c Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:28 ---> cc510acfcd70 Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2405aa62579a Step 3/9 : ENV container docker ---> Using cache ---> 3370e25ee81a Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> 3129352c97b1 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> fbcd5a15f974 Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 6e560dc836a0 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 8a916bbc2352 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 72d00ac082db Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "winrmcli" '' ---> Using cache ---> 
a78ab99f56bf Successfully built a78ab99f56bf Sending build context to Docker daemon 36.77 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0ae71e3c9e56 Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar ---> 8f3eff2874cc Removing intermediate container f179667746bd Step 4/5 : ENTRYPOINT /example-hook-sidecar ---> Running in 8ce925547f23 ---> e144e108a2f2 Removing intermediate container 8ce925547f23 Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Running in f027665fedad ---> a99928306357 Removing intermediate container f027665fedad Successfully built a99928306357 hack/build-docker.sh push The push refers to a repository [localhost:32821/kubevirt/virt-controller] 75c89209645f: Preparing d07058c760ad: Preparing 891e1e4ef82a: Preparing d07058c760ad: Pushed 75c89209645f: Pushed 891e1e4ef82a: Pushed devel: digest: sha256:ba2c766c92b8e236b242d91feaaed25bb2bf2b12cc795940a37c7fb438be64cb size: 949 The push refers to a repository [localhost:32821/kubevirt/virt-launcher] 7ac5d5eea8e9: Preparing 2a0720d3298a: Preparing 11ed4da7b36d: Preparing 18a82621fc68: Preparing 53992c109e91: Preparing 53f12636d41e: Preparing da38cf808aa5: Preparing b83399358a92: Preparing 186d8b3e4fd8: Preparing fa6154170bf5: Preparing 5eefb9960a36: Preparing 891e1e4ef82a: Preparing 53992c109e91: Waiting 53f12636d41e: Waiting b83399358a92: Waiting da38cf808aa5: Waiting 186d8b3e4fd8: Waiting 891e1e4ef82a: Waiting 2a0720d3298a: Pushed 18a82621fc68: Pushed 7ac5d5eea8e9: Pushed da38cf808aa5: Pushed b83399358a92: Pushed 186d8b3e4fd8: Pushed fa6154170bf5: Pushed 11ed4da7b36d: Pushed 891e1e4ef82a: Mounted from kubevirt/virt-controller 53f12636d41e: Pushed 53992c109e91: Pushed 5eefb9960a36: Pushed devel: digest: sha256:dbb6a3b45cb89e55f819be8d6b4a9fd2125dc0aba87ee904ce64955e251672f9 size: 2828 The push refers to a repository [localhost:32821/kubevirt/virt-handler] bf0469e6b7c6: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-launcher bf0469e6b7c6: Pushed devel: digest: sha256:75fa683f9d3463c8d5e8c4b5cca28077ce19cde1d9805a3034358437ffbdf826 size: 741 The push refers to a repository [localhost:32821/kubevirt/virt-api] aa1db069d514: Preparing 25755ffecaf3: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-handler 25755ffecaf3: Pushed aa1db069d514: Pushed devel: digest: sha256:5f1a54ac6785dcfae996fbee274b744ff9df6c86d07862591e5ca75e3e117e88 size: 948 The push refers to a repository [localhost:32821/kubevirt/disks-images-provider] 5ffe52947a94: Preparing a1bc751fc8a2: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-api 5ffe52947a94: Pushed a1bc751fc8a2: Pushed devel: digest: sha256:50586299b3a16885ba03d9bc2e7507a938cfffd1b7ac3b88d1a2391952d375e3 size: 948 The push refers to a repository [localhost:32821/kubevirt/vm-killer] 3a82b543c335: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/disks-images-provider 3a82b543c335: Pushed devel: digest: sha256:64a5fcfe7ef267c040173e7b886fa57506e89c93a3b104b77c105b456d5bf0b9 size: 740 The push refers to a repository [localhost:32821/kubevirt/registry-disk-v1alpha] cb3d1019d03e: Preparing 626899eeec02: Preparing 132d61a890c5: Preparing cb3d1019d03e: Pushed 626899eeec02: Pushed 132d61a890c5: Pushed devel: digest: sha256:a5bc478c406e6c5d542085f492938b5b148204595a77ad4d6a92403c8189b4e9 size: 948 The push refers to a repository [localhost:32821/kubevirt/cirros-registry-disk-demo] 
64f73894f0f5: Preparing cb3d1019d03e: Preparing 626899eeec02: Preparing 132d61a890c5: Preparing 132d61a890c5: Waiting cb3d1019d03e: Mounted from kubevirt/registry-disk-v1alpha 626899eeec02: Mounted from kubevirt/registry-disk-v1alpha 132d61a890c5: Mounted from kubevirt/registry-disk-v1alpha 64f73894f0f5: Pushed devel: digest: sha256:b258371daec3009f54b5def2587c2d368694eb3031103d6f050b9017499fdf95 size: 1160 The push refers to a repository [localhost:32821/kubevirt/fedora-cloud-registry-disk-demo] 007095a9be7a: Preparing cb3d1019d03e: Preparing 626899eeec02: Preparing 132d61a890c5: Preparing cb3d1019d03e: Mounted from kubevirt/cirros-registry-disk-demo 132d61a890c5: Mounted from kubevirt/cirros-registry-disk-demo 626899eeec02: Mounted from kubevirt/cirros-registry-disk-demo 007095a9be7a: Pushed devel: digest: sha256:b7bef9e164b128d63cd22e2dea2ff8896203e55affd9c385ce740bd13265fe33 size: 1161 The push refers to a repository [localhost:32821/kubevirt/alpine-registry-disk-demo] caaecc003aa5: Preparing cb3d1019d03e: Preparing 626899eeec02: Preparing 132d61a890c5: Preparing 626899eeec02: Mounted from kubevirt/fedora-cloud-registry-disk-demo 132d61a890c5: Mounted from kubevirt/fedora-cloud-registry-disk-demo cb3d1019d03e: Mounted from kubevirt/fedora-cloud-registry-disk-demo caaecc003aa5: Pushed devel: digest: sha256:c0bde2a98e583ab7bd5114474d35e23be5e27032d0d0cc57b19f0c96a79f6a32 size: 1160 The push refers to a repository [localhost:32821/kubevirt/subresource-access-test] dd82391d0a29: Preparing 5c35b999e0e4: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/vm-killer 5c35b999e0e4: Pushed dd82391d0a29: Pushed devel: digest: sha256:851f91d1d7466d725da949203b4cf54a18647946de96aa07a7ee4042d23ee9c6 size: 948 The push refers to a repository [localhost:32821/kubevirt/winrmcli] d8f4160f7568: Preparing b34315236250: Preparing b4a3c0429828: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/subresource-access-test d8f4160f7568: Pushed b4a3c0429828: Pushed b34315236250: Pushed devel: digest: sha256:a78c39eb2015f025a4fb40135cab58ab9abe8d1527ce6536dc2836f455d93fda size: 1165 The push refers to a repository [localhost:32821/kubevirt/example-hook-sidecar] 4cfe73da2afa: Preparing 39bae602f753: Preparing 4cfe73da2afa: Pushed 39bae602f753: Pushed devel: digest: sha256:3c1ce18784ce2086af613f4cc18d27e2a7e7d439fcc55fc9a7ccaba8a3226a08 size: 740 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ 
MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-110-g1ff6886 ++ KUBEVIRT_VERSION=v0.7.0-110-g1ff6886 + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:32821/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... 
+ grep foregroundDeleteVirtualMachine + cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + read p error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pods -l kubevirt.io No resources found + 
_kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ wc -l ++ cluster/k8s-1.10.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + 
KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ 
APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-110-g1ff6886 ++ KUBEVIRT_VERSION=v0.7.0-110-g1ff6886 + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:32821/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... 
+ [[ -z k8s-1.10.3-release ]] + [[ k8s-1.10.3-release =~ .*-dev ]] + [[ k8s-1.10.3-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole.rbac.authorization.k8s.io "kubevirt.io:admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:edit" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:view" created serviceaccount "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver-auth-delegator" created rolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created role.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller-cluster-admin" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-privileged-cluster-admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:default" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt.io:default" created service "virt-api" created deployment.extensions "virt-api" created deployment.extensions "virt-controller" created daemonset.extensions "virt-handler" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstances.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancereplicasets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancepresets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachines.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "host-path-disk-alpine" created persistentvolumeclaim "disk-custom" created persistentvolume "host-path-disk-custom" created daemonset.extensions "disks-images-provider" created serviceaccount "kubevirt-testing" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-testing-cluster-admin" created + [[ k8s-1.10.3 =~ os-* ]] + echo Done Done + namespaces=(kube-system default) + [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]] + 
timeout=300
+ sample=30
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n 'virt-api-7d79764579-hgtkx 0/1 ContainerCreating 0 3s
virt-api-7d79764579-j44zj 0/1 ContainerCreating 0 3s
virt-controller-7d57d96b65-b4lkv 0/1 ContainerCreating 0 3s
virt-controller-7d57d96b65-qhtmz 0/1 ContainerCreating 0 3s
virt-handler-9sbp7 0/1 ContainerCreating 0 3s
virt-handler-wv92d 0/1 ContainerCreating 0 3s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ cluster/kubectl.sh get pods -n kube-system --no-headers
+ grep -v Running
disks-images-provider-j6cwx 0/1 ContainerCreating 0 1s
disks-images-provider-mclvr 0/1 ContainerCreating 0 1s
virt-api-7d79764579-hgtkx 0/1 ContainerCreating 0 4s
virt-api-7d79764579-j44zj 0/1 ContainerCreating 0 4s
virt-controller-7d57d96b65-b4lkv 0/1 ContainerCreating 0 4s
virt-controller-7d57d96b65-qhtmz 0/1 ContainerCreating 0 4s
virt-handler-9sbp7 0/1 ContainerCreating 0 4s
virt-handler-wv92d 0/1 ContainerCreating 0 4s
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n false ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ grep false
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
false
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ grep false
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME READY STATUS RESTARTS AGE
disks-images-provider-j6cwx 1/1 Running 0 1m
disks-images-provider-mclvr 1/1 Running 0 1m
etcd-node01 1/1 Running 0 13m
kube-apiserver-node01 1/1 Running 0 14m
kube-controller-manager-node01 1/1 Running 0 13m
kube-dns-86f4d74b45-q496g 3/3 Running 0 14m
kube-flannel-ds-5gqdm 1/1 Running 0 14m
kube-flannel-ds-gxk55 1/1 Running 0 14m
kube-proxy-2fvmm 1/1 Running 0 14m
kube-proxy-flcs7 1/1 Running 0 14m
kube-scheduler-node01 1/1 Running 0 13m
virt-api-7d79764579-hgtkx 1/1 Running 0 1m
virt-api-7d79764579-j44zj 1/1 Running 0 1m
virt-controller-7d57d96b65-b4lkv 1/1 Running 0 1m
virt-controller-7d57d96b65-qhtmz 1/1 Running 0 1m
virt-handler-9sbp7 1/1 Running 0 1m
virt-handler-wv92d 1/1 Running 0 1m
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n default --no-headers
++ cluster/kubectl.sh get pods -n default --no-headers
++ grep -v Running
No resources found.
+ '[' -n '' ']' + current_time=0 ++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false + '[' -n '' ']' + kubectl get pods -n default + cluster/kubectl.sh get pods -n default No resources found. + kubectl version + cluster/kubectl.sh version Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} + ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml' + [[ k8s-1.10.3-release =~ windows.* ]] + FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml' + make functest hack/dockerized "hack/build-func-tests.sh" sha256:8314c812ee3200233db076e79036b39759d30fc3dd7fe921b323b19b14306fd6 go version go1.10 linux/amd64 go version go1.10 linux/amd64 Compiling tests... compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1532538136 Will run 145 of 145 specs • [SLOW TEST:20.577 seconds] VNC /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:54 with VNC connection /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:62 should allow accessing the VNC device /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:64 ------------------------------ •• ------------------------------ • [SLOW TEST:33.467 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:34.201 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with CDRom PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:123.055 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC 
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:112.022 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with CDRom PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:46.361 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With an emptyDisk defined /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:113 should create a writeable emptyDisk with the right capacity /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:115 ------------------------------ • [SLOW TEST:45.939 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With an emptyDisk defined and a specified serial number /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:163 should create a writeable emptyDisk with the specified serial number /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:165 ------------------------------ • [SLOW TEST:33.591 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205 should be successfully started /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:207 ------------------------------ • [SLOW TEST:79.784 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205 should not persist data /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:218 ------------------------------ • [SLOW TEST:121.370 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With VirtualMachineInstance with two PVCs /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:266 should start vmi multiple times /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:278 ------------------------------ • [SLOW TEST:15.362 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:14.381 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin 
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given an vm /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:14.486 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi preset /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:14.490 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi replica set /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •• ------------------------------ • [SLOW TEST:17.393 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should update VirtualMachine once VMIs are up /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:195 ------------------------------ • [SLOW TEST:6.589 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should remove VirtualMachineInstance once the VMI is marked for deletion /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:204 ------------------------------ • ------------------------------ • [SLOW TEST:48.246 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should recreate VirtualMachineInstance if it gets deleted /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:245 ------------------------------ • [SLOW TEST:37.887 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should recreate VirtualMachineInstance if the VirtualMachineInstance's pod gets deleted /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:265 ------------------------------ • [SLOW TEST:81.595 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should stop VirtualMachineInstance if running set to false /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:325 ------------------------------ • [SLOW TEST:240.027 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should start and stop VirtualMachineInstance multiple times /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:333 ------------------------------ • [SLOW TEST:77.748 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should not update the VirtualMachineInstance spec if Running 
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:346 ------------------------------ • [SLOW TEST:222.492 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should survive guest shutdown, multiple times /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:387 ------------------------------ VM testvmi9fszl was scheduled to start • [SLOW TEST:17.545 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435 should start a VirtualMachineInstance once /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:436 ------------------------------ VM testvmiphb5k was scheduled to stop • [SLOW TEST:101.897 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435 should stop a VirtualMachineInstance once /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:467 ------------------------------ • [SLOW TEST:36.824 seconds] LeaderElection /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:43 Start a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:53 when the controller pod is not running /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:54 should success /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:55 ------------------------------ • ------------------------------ • [SLOW TEST:43.755 seconds] Health Monitoring /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:37 A VirtualMachineInstance with a watchdog device /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:56 should be shut down when the watchdog expires /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:57 ------------------------------ • [SLOW TEST:42.323 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81 should have cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:82 ------------------------------ • [SLOW TEST:102.999 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81 with injected ssh-key /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:92 should have ssh-key under authorized keys /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:93 ------------------------------ • [SLOW TEST:53.947 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userData source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:118 should process provided cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:119 ------------------------------ • 
[SLOW TEST:43.519 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 should take user-data from k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:162 ------------------------------ • ------------------------------ • [SLOW TEST:17.984 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 should start it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:76 ------------------------------ • [SLOW TEST:18.131 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 should attach virt-launcher to it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:82 ------------------------------ •••• ------------------------------ • [SLOW TEST:34.475 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:170 should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 Alpine as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:26.179 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:170 should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 Cirros as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:15.574 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:201 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:202 should retry starting the VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:203 ------------------------------ • [SLOW TEST:16.779 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:201 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:202 should log warning and proceed once the secret is there /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:233 ------------------------------ • [SLOW TEST:38.686 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 when virt-launcher crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:281 
should be stopped and have Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:282 ------------------------------ • [SLOW TEST:25.928 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 when virt-handler crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:305 should recover and continue management /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:306 ------------------------------ • [SLOW TEST:46.917 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 when virt-handler is responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:336 should indicate that a node is ready for vmis /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:337 ------------------------------ • [SLOW TEST:126.385 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 when virt-handler is not responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:367 the node controller should react /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:406 ------------------------------ • [SLOW TEST:18.139 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with node tainted /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:459 the vmi with tolerations should be scheduled /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:481 ------------------------------ • ------------------------------ S [SKIPPING] [0.292 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:531 should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 kubevirt-test-default [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Skip log query tests for JENKINS ci test environment /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:536 ------------------------------ S [SKIPPING] [0.101 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:531 should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 kubevirt-test-alternative [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Skip log query tests for JENKINS ci test environment /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:536 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.126 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a 
VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:592 should enable emulation in virt-launcher [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:604 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:600 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.140 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:592 should be reflected in domain XML [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:641 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:600 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.117 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:592 should request a TUN device but not KVM [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:685 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:600 ------------------------------ •••• ------------------------------ • [SLOW TEST:18.081 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Delete a VirtualMachineInstance's Pod /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:837 should result in the VirtualMachineInstance moving to a finalized state /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:838 ------------------------------ • [SLOW TEST:34.433 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869 with an active pod. 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:870 should result in pod being terminated /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:871 ------------------------------ Pod name: disks-images-provider-j6cwx Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-mclvr Pod phase: Running copy all images to host mount directory Pod name: virt-api-7d79764579-hgtkx Pod phase: Running 2018/07/25 17:38:59 http: TLS handshake error from 10.244.1.1:56922: EOF 2018/07/25 17:39:09 http: TLS handshake error from 10.244.1.1:56928: EOF 2018/07/25 17:39:19 http: TLS handshake error from 10.244.1.1:56934: EOF 2018/07/25 17:39:29 http: TLS handshake error from 10.244.1.1:56940: EOF 2018/07/25 17:39:39 http: TLS handshake error from 10.244.1.1:56948: EOF 2018/07/25 17:39:49 http: TLS handshake error from 10.244.1.1:56954: EOF 2018/07/25 17:39:59 http: TLS handshake error from 10.244.1.1:56960: EOF 2018/07/25 17:40:09 http: TLS handshake error from 10.244.1.1:56966: EOF 2018/07/25 17:40:19 http: TLS handshake error from 10.244.1.1:56972: EOF 2018/07/25 17:40:29 http: TLS handshake error from 10.244.1.1:56978: EOF 2018/07/25 17:40:39 http: TLS handshake error from 10.244.1.1:56984: EOF 2018/07/25 17:40:49 http: TLS handshake error from 10.244.1.1:56990: EOF 2018/07/25 17:40:59 http: TLS handshake error from 10.244.1.1:56996: EOF 2018/07/25 17:41:09 http: TLS handshake error from 10.244.1.1:57002: EOF 2018/07/25 17:41:19 http: TLS handshake error from 10.244.1.1:57008: EOF Pod name: virt-api-7d79764579-j44zj Pod phase: Running level=info timestamp=2018-07-25T17:40:13.124411Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:40:15 http: TLS handshake error from 10.244.0.1:45042: EOF level=info timestamp=2018-07-25T17:40:18.447125Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:40:25 http: TLS handshake error from 10.244.0.1:45066: EOF 2018/07/25 17:40:35 http: TLS handshake error from 10.244.0.1:45090: EOF level=info timestamp=2018-07-25T17:40:41.816969Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T17:40:43.221622Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:40:45 http: TLS handshake error from 10.244.0.1:45114: EOF level=info timestamp=2018-07-25T17:40:48.511509Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:40:55 http: TLS handshake error from 10.244.0.1:45138: EOF 2018/07/25 17:41:05 http: TLS handshake error from 10.244.0.1:45162: EOF level=info timestamp=2018-07-25T17:41:11.506191Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T17:41:13.299972Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 17:41:15 http: 
TLS handshake error from 10.244.0.1:45186: EOF level=info timestamp=2018-07-25T17:41:18.560497Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-controller-7d57d96b65-9zfx6 Pod phase: Running level=info timestamp=2018-07-25T17:28:24.225987Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-b4lkv Pod phase: Running level=info timestamp=2018-07-25T17:40:11.294552Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmizzwzv kind= uid=c7ae5d4f-9031-11e8-b46b-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T17:40:11.295329Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmizzwzv kind= uid=c7ae5d4f-9031-11e8-b46b-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T17:40:13.479566Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmik2ll9 kind= uid=c9062efc-9031-11e8-b46b-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T17:40:13.479857Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmik2ll9 kind= uid=c9062efc-9031-11e8-b46b-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T17:40:13.791980Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4nfhw kind= uid=c93606f0-9031-11e8-b46b-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T17:40:13.792270Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4nfhw kind= uid=c93606f0-9031-11e8-b46b-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T17:40:13.893547Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi4nfhw\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi4nfhw" level=info timestamp=2018-07-25T17:40:14.428425Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwvkj6 kind= uid=c9972335-9031-11e8-b46b-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T17:40:14.428736Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmiwvkj6 kind= uid=c9972335-9031-11e8-b46b-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T17:40:14.538686Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmiwvkj6\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmiwvkj6" level=info timestamp=2018-07-25T17:40:32.706676Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixwnnm kind= uid=d4767512-9031-11e8-b46b-525500d15501 msg="Initializing VirtualMachineInstance" level=info 
timestamp=2018-07-25T17:40:32.707265Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmixwnnm kind= uid=d4767512-9031-11e8-b46b-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T17:40:32.922124Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmixwnnm\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmixwnnm" level=info timestamp=2018-07-25T17:41:07.154352Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmilh5d7 kind= uid=e900a60b-9031-11e8-b46b-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T17:41:07.157683Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmilh5d7 kind= uid=e900a60b-9031-11e8-b46b-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-handler-2jfl2 Pod phase: Running level=info timestamp=2018-07-25T17:40:49.935975Z pos=vm.go:342 component=virt-handler namespace=kubevirt-test-default name=testvmixwnnm kind= uid=d4767512-9031-11e8-b46b-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp." level=info timestamp=2018-07-25T17:40:49.936043Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmixwnnm kind= uid=d4767512-9031-11e8-b46b-525500d15501 msg="Processing shutdown." level=info timestamp=2018-07-25T17:40:49.936789Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmixwnnm kind= uid=d4767512-9031-11e8-b46b-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T17:40:49.939085Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-25T17:40:49.939383Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmixwnnm kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-25T17:40:49.939550Z pos=vm.go:342 component=virt-handler namespace=kubevirt-test-default name=testvmixwnnm kind= uid=d4767512-9031-11e8-b46b-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp." level=info timestamp=2018-07-25T17:40:49.939621Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmixwnnm kind= uid=d4767512-9031-11e8-b46b-525500d15501 msg="Processing shutdown." level=info timestamp=2018-07-25T17:40:49.945641Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-25T17:40:49.957672Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmixwnnm kind= uid=d4767512-9031-11e8-b46b-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T17:40:49.957899Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmixwnnm kind= uid=d4767512-9031-11e8-b46b-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-25T17:40:49.958122Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmixwnnm kind= uid=d4767512-9031-11e8-b46b-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-25T17:41:06.918074Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmixwnnm kind= uid=d4767512-9031-11e8-b46b-525500d15501 msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-25T17:41:06.918806Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmixwnnm kind= uid=d4767512-9031-11e8-b46b-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T17:41:06.953518Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmixwnnm kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-25T17:41:06.953778Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmixwnnm kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-6mhzm Pod phase: Running level=info timestamp=2018-07-25T17:40:11.354657Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmi9xjv9 kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object." level=info timestamp=2018-07-25T17:40:11.354697Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmi9xjv9 kind=VirtualMachineInstance uid= msg="Processing shutdown." level=info timestamp=2018-07-25T17:40:11.354978Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi9xjv9 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T17:40:11.356742Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmi9xjv9 kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object." level=info timestamp=2018-07-25T17:40:11.356807Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmi9xjv9 kind=VirtualMachineInstance uid= msg="Processing shutdown." level=info timestamp=2018-07-25T17:40:11.356952Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi9xjv9 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T17:40:11.364843Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-25T17:40:11.364991Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmi9xjv9 kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-25T17:40:11.365116Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmi9xjv9 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-25T17:40:11.365530Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi9xjv9 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T17:40:11.423478Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-25T17:41:22.559395Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmilh5d7 kind= uid=e900a60b-9031-11e8-b46b-525500d15501 msg="Processing vmi update" level=error timestamp=2018-07-25T17:41:22.595118Z pos=vm.go:397 component=virt-handler namespace=kubevirt-test-default name=testvmilh5d7 kind= uid=e900a60b-9031-11e8-b46b-525500d15501 reason="server error. 
command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')" msg="Synchronizing the VirtualMachineInstance failed." level=info timestamp=2018-07-25T17:41:22.623505Z pos=vm.go:251 component=virt-handler reason="server error. command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmilh5d7" level=info timestamp=2018-07-25T17:41:22.628914Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmilh5d7 kind= uid=e900a60b-9031-11e8-b46b-525500d15501 msg="Processing vmi update" Pod name: virt-launcher-testvmilh5d7-62nqf Pod phase: Running level=info timestamp=2018-07-25T17:41:11.852702Z pos=manager.go:69 component=virt-launcher msg="Collected all requested hook sidecar sockets" level=info timestamp=2018-07-25T17:41:11.852942Z pos=manager.go:72 component=virt-launcher msg="Sorted all collected sidecar sockets per hook point based on their priority and name: map[]" level=info timestamp=2018-07-25T17:41:11.855363Z pos=libvirt.go:256 component=virt-launcher msg="Connecting to libvirt daemon: qemu:///system" level=info timestamp=2018-07-25T17:41:21.862506Z pos=libvirt.go:271 component=virt-launcher msg="Connected to libvirt daemon" level=info timestamp=2018-07-25T17:41:21.934626Z pos=virt-launcher.go:143 component=virt-launcher msg="Watchdog file created at /var/run/kubevirt/watchdog-files/kubevirt-test-default_testvmilh5d7" level=info timestamp=2018-07-25T17:41:21.938460Z pos=client.go:152 component=virt-launcher msg="Registered libvirt event notify callback" level=info timestamp=2018-07-25T17:41:21.938858Z pos=virt-launcher.go:60 component=virt-launcher msg="Marked as ready" level=error timestamp=2018-07-25T17:41:22.586370Z pos=manager.go:159 component=virt-launcher namespace=kubevirt-test-default name=testvmilh5d7 kind= uid=e900a60b-9031-11e8-b46b-525500d15501 reason="virError(Code=0, Domain=0, Message='Missing error')" msg="Getting the domain failed." level=error timestamp=2018-07-25T17:41:22.586641Z pos=server.go:68 component=virt-launcher namespace=kubevirt-test-default name=testvmilh5d7 kind= uid=e900a60b-9031-11e8-b46b-525500d15501 reason="virError(Code=0, Domain=0, Message='Missing error')" msg="Failed to sync vmi" level=error timestamp=2018-07-25T17:41:22.637108Z pos=common.go:126 component=virt-launcher msg="updated MAC for interface: eth0 - 0a:58:0a:86:aa:5f" level=info timestamp=2018-07-25T17:41:22.640118Z pos=converter.go:739 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n" level=info timestamp=2018-07-25T17:41:22.640830Z pos=converter.go:740 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local" level=info timestamp=2018-07-25T17:41:22.641475Z pos=dhcp.go:62 component=virt-launcher msg="Starting SingleClientDHCPServer" level=info timestamp=2018-07-25T17:41:22.763786Z pos=manager.go:157 component=virt-launcher namespace=kubevirt-test-default name=testvmilh5d7 kind= uid=e900a60b-9031-11e8-b46b-525500d15501 msg="Domain defined." 
level=info timestamp=2018-07-25T17:41:22.764105Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received"
• Failure [96.319 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
Delete a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869
with grace period greater than 0
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:894
should run graceful shutdown [It]
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:895
Unexpected Warning event received: testvmilh5d7,e900a60b-9031-11e8-b46b-525500d15501: server error. command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')
Expected
: Warning
not to equal
: Warning
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:245
------------------------------
STEP: Setting a VirtualMachineInstance termination grace period to 5
STEP: Creating the VirtualMachineInstance
level=info timestamp=2018-07-25T17:41:07.863089Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmilh5d7 kind=VirtualMachineInstance uid=e900a60b-9031-11e8-b46b-525500d15501 msg="Created virtual machine pod virt-launcher-testvmilh5d7-62nqf"
level=info timestamp=2018-07-25T17:41:22.972553Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmilh5d7 kind=VirtualMachineInstance uid=e900a60b-9031-11e8-b46b-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmilh5d7-62nqf"
level=error timestamp=2018-07-25T17:41:23.083792Z pos=utils.go:241 component=tests namespace=kubevirt-test-default name=testvmilh5d7 kind=VirtualMachineInstance uid=e900a60b-9031-11e8-b46b-525500d15501 reason="unexpected warning event received" msg="server error. command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')"
STEP: Deleting the VirtualMachineInstance
level=info timestamp=2018-07-25T17:42:38.058756Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmilh5d7 kind=VirtualMachineInstance uid=e900a60b-9031-11e8-b46b-525500d15501 msg="Created virtual machine pod virt-launcher-testvmilh5d7-62nqf"
level=info timestamp=2018-07-25T17:42:38.059000Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmilh5d7 kind=VirtualMachineInstance uid=e900a60b-9031-11e8-b46b-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmilh5d7-62nqf"
level=error timestamp=2018-07-25T17:42:38.059780Z pos=utils.go:252 component=tests namespace=kubevirt-test-default name=testvmilh5d7 kind=VirtualMachineInstance uid=e900a60b-9031-11e8-b46b-525500d15501 reason="Warning event received" msg="server error. command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')"
level=info timestamp=2018-07-25T17:42:38.060041Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmilh5d7 kind=VirtualMachineInstance uid=e900a60b-9031-11e8-b46b-525500d15501 msg="VirtualMachineInstance defined."
level=info timestamp=2018-07-25T17:42:38.060509Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmilh5d7 kind=VirtualMachineInstance uid=e900a60b-9031-11e8-b46b-525500d15501 msg="VirtualMachineInstance started."
level=info timestamp=2018-07-25T17:42:38.115006Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmilh5d7 kind=VirtualMachineInstance uid=e900a60b-9031-11e8-b46b-525500d15501 msg="Deleted virtual machine pod virt-launcher-testvmilh5d7-62nqf" level=info timestamp=2018-07-25T17:42:38.115161Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmilh5d7 kind=VirtualMachineInstance uid=e900a60b-9031-11e8-b46b-525500d15501 msg="Signaled Graceful Shutdown" level=info timestamp=2018-07-25T17:42:38.152048Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmilh5d7 kind=VirtualMachineInstance uid=e900a60b-9031-11e8-b46b-525500d15501 msg="Signaled Graceful Shutdown" level=info timestamp=2018-07-25T17:42:38.188788Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmilh5d7 kind=VirtualMachineInstance uid=e900a60b-9031-11e8-b46b-525500d15501 msg="Signaled Graceful Shutdown" level=info timestamp=2018-07-25T17:42:38.203321Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmilh5d7 kind=VirtualMachineInstance uid=e900a60b-9031-11e8-b46b-525500d15501 msg="Signaled Graceful Shutdown" level=info timestamp=2018-07-25T17:42:38.362950Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmilh5d7 kind=VirtualMachineInstance uid=e900a60b-9031-11e8-b46b-525500d15501 msg="Signaled Graceful Shutdown" level=info timestamp=2018-07-25T17:42:43.537380Z pos=utils.go:254 component=tests namespace=kubevirt-test-default name=testvmilh5d7 kind=VirtualMachineInstance uid=e900a60b-9031-11e8-b46b-525500d15501 msg="VirtualMachineInstance stopping" STEP: Checking that virt-handler logs VirtualMachineInstance graceful shutdown STEP: Checking that the VirtualMachineInstance does not exist after grace period • [SLOW TEST:30.736 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Killed VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:946 should be in Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:947 ------------------------------ • [SLOW TEST:25.915 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48 Killed VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:946 should be left alone by virt-handler /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:974 ------------------------------ • [SLOW TEST:44.503 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a cirros image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 should return that we are running cirros /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67 ------------------------------ • [SLOW TEST:53.880 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a fedora image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:76 should return that we are running fedora /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:77 ------------------------------ • [SLOW TEST:38.825 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35 A new 
VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 should be able to reconnect to console multiple times /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:86 ------------------------------ volumedisk0 compute • [SLOW TEST:41.576 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with 3 CPU cores /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:56 should report 3 cpu cores under guest OS /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:62 ------------------------------ • [SLOW TEST:17.477 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:108 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-2Mi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ S [SKIPPING] [0.226 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:108 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-1Gi [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 No node with hugepages hugepages-1Gi capacity /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:160 ------------------------------ • ------------------------------ • [SLOW TEST:99.953 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238 when CPU model defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:284 should report defined CPU model /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:285 ------------------------------ • [SLOW TEST:100.395 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238 when CPU model equals to passthrough /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:312 should report exactly the same model as node CPU /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:313 ------------------------------ • [SLOW TEST:100.855 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238 when CPU model not defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:336 should report CPU model from libvirt capabilities /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:337 ------------------------------ • [SLOW TEST:43.905 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 New VirtualMachineInstance with all supported drives 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:357 should have all the device nodes /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:380 ------------------------------ • [SLOW TEST:98.557 seconds] Slirp /root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:39 should be able to /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 VirtualMachineInstance with slirp interface /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • ------------------------------ • [SLOW TEST:116.048 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting and stopping the same VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91 should success multiple times /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92 ------------------------------ • [SLOW TEST:17.115 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112 should not modify the spec on status update /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113 ------------------------------ • [SLOW TEST:28.620 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting multiple VMIs /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130 should success /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42 Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60 with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193 should succeed to generate a VM JSON file using oc-process command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:150 Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1386 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42 Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60 with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193 with given VM JSON from the Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152 should succeed to create a VM using oc-create command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:156 Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1386 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42 Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60 with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193 with given VM JSON from the Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152 
with given VM from the VM JSON /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158 should succeed to launch a VMI using oc-patch command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:161 Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1386 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42 Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60 with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193 with given VM JSON from the Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152 with given VM from the VM JSON /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158 with given VMI from the VM /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:163 should succeed to terminate the VMI using oc-patch command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:166 Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1386 ------------------------------ •• ------------------------------ • [SLOW TEST:8.395 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should scale /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 to five, to six and then to zero replicas /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •• ------------------------------ • [SLOW TEST:21.486 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should update readyReplicas once VMIs are up /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:157 ------------------------------ • [SLOW TEST:5.668 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should remove VMIs once it is marked for deletion /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:169 ------------------------------ • ------------------------------ • [SLOW TEST:5.687 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should not scale when paused and scale when resume /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:223 ------------------------------ • [SLOW TEST:14.779 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should remove the finished VM /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:279 ------------------------------ ••••••••••• ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.017 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to start a vmi [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1345 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.014 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to stop a running vmi [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1345 ------------------------------ S [SKIPPING] in Spec Setup 
(BeforeEach) [0.010 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have correct UUID /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1345 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.008 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have pod IP /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1345 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.009 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to start a vmi /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1345 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.008 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to stop a vmi /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1345 ------------------------------ Service cluster-ip-vmi successfully exposed for virtualmachineinstance testvmivqsmc • [SLOW TEST:51.179 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:68 Should expose a Cluster IP service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:71 ------------------------------ Service cluster-ip-target-vmi successfully exposed for virtualmachineinstance testvmivqsmc •Service node-port-vmi successfully exposed for virtualmachineinstance testvmivqsmc ------------------------------ • [SLOW TEST:8.457 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose NodePort service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:124 Should expose a NodePort service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:129 ------------------------------ Service cluster-ip-udp-vmi successfully exposed for virtualmachineinstance testvmi22n2c • [SLOW TEST:49.117 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VMI /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:166 Expose ClusterIP UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:173 Should expose a ClusterIP service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:177 ------------------------------ Service node-port-udp-vmi successfully exposed for virtualmachineinstance testvmi22n2c • [SLOW TEST:8.420 seconds] Expose 
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VMI /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:166 Expose NodePort UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:205 Should expose a NodePort service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:210 ------------------------------ Service cluster-ip-vmirs successfully exposed for vmirs replicasetgqxbr • [SLOW TEST:58.690 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VMI replica set /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:253 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:286 Should create a ClusterIP service on VMRS and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:290 ------------------------------ Service cluster-ip-vm successfully exposed for virtualmachine testvmin4fhj VM testvmin4fhj was scheduled to start • [SLOW TEST:51.322 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on an VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:318 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:362 Connect to ClusterIP services that was set when VM was offline /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:363 ------------------------------ • [SLOW TEST:91.993 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 should be able to reach /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 the Inbound VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ ••• ------------------------------ • [SLOW TEST:5.271 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on the same node from Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •••• ------------------------------ • [SLOW TEST:5.301 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 with a service matching the vmi exposed /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:283 should fail to reach the vmi if an invalid servicename is used /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:314 ------------------------------ • ------------------------------ • [SLOW TEST:36.452 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom interface model /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:379 should expose the right device type to the guest /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:380 ------------------------------ • ------------------------------ • [SLOW TEST:34.969 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:413 should configure custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:414 ------------------------------ • [SLOW TEST:31.151 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 
VirtualMachineInstance with custom MAC address in non-conventional format /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:425 should configure custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:426 ------------------------------ • [SLOW TEST:37.533 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom MAC address and slirp interface /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:438 should configure custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:439 ------------------------------ Pod name: disks-images-provider-j6cwx Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-mclvr Pod phase: Running copy all images to host mount directory Pod name: virt-api-7d79764579-hgtkx Pod phase: Running 2018/07/25 18:05:29 http: TLS handshake error from 10.244.1.1:57944: EOF 2018/07/25 18:05:39 http: TLS handshake error from 10.244.1.1:57950: EOF level=info timestamp=2018-07-25T18:05:41.727260Z pos=subresource.go:78 component=virt-api msg="Websocket connection upgraded" 2018/07/25 18:05:49 http: TLS handshake error from 10.244.1.1:57958: EOF level=error timestamp=2018-07-25T18:05:57.036717Z pos=subresource.go:88 component=virt-api msg="connection failed: command terminated with exit code 143" 2018/07/25 18:05:57 http: response.WriteHeader on hijacked connection level=error timestamp=2018-07-25T18:05:57.037980Z pos=subresource.go:100 component=virt-api reason="read tcp 10.244.1.2:8443->10.244.0.0:47286: use of closed network connection" msg="error ecountered reading from websocket stream" level=info timestamp=2018-07-25T18:05:57.038538Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2/namespaces/kubevirt-test-default/virtualmachineinstances/testvmihv8xg/console proto=HTTP/1.1 statusCode=500 contentLength=0 2018/07/25 18:05:59 http: TLS handshake error from 10.244.1.1:57964: EOF 2018/07/25 18:06:09 http: TLS handshake error from 10.244.1.1:57970: EOF 2018/07/25 18:06:20 http: TLS handshake error from 10.244.1.1:57976: EOF 2018/07/25 18:06:30 http: TLS handshake error from 10.244.1.1:57982: EOF level=info timestamp=2018-07-25T18:06:34.074972Z pos=subresource.go:78 component=virt-api msg="Websocket connection upgraded" 2018/07/25 18:06:39 http: TLS handshake error from 10.244.1.1:57990: EOF 2018/07/25 18:06:49 http: TLS handshake error from 10.244.1.1:57996: EOF Pod name: virt-api-7d79764579-j44zj Pod phase: Running level=info timestamp=2018-07-25T18:06:17.341872Z pos=subresource.go:78 component=virt-api msg="Websocket connection upgraded" level=info timestamp=2018-07-25T18:06:22.518692Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T18:06:25.268532Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 18:06:25 http: TLS handshake error from 10.244.0.1:49236: EOF level=error timestamp=2018-07-25T18:06:34.511099Z pos=subresource.go:88 component=virt-api msg="connection failed: command terminated with exit code 143" 2018/07/25 18:06:34 http: response.WriteHeader on hijacked connection level=error timestamp=2018-07-25T18:06:34.515639Z 
pos=subresource.go:100 component=virt-api reason="websocket: close 1006 (abnormal closure): unexpected EOF" msg="error ecountered reading from websocket stream" level=info timestamp=2018-07-25T18:06:34.518163Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2/namespaces/kubevirt-test-default/virtualmachineinstances/testvmirfh8b/console proto=HTTP/1.1 statusCode=500 contentLength=0 2018/07/25 18:06:35 http: TLS handshake error from 10.244.0.1:49266: EOF level=info timestamp=2018-07-25T18:06:41.266747Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T18:06:41.280052Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T18:06:42.220539Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/25 18:06:45 http: TLS handshake error from 10.244.0.1:49290: EOF level=info timestamp=2018-07-25T18:06:51.181755Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 contentLength=19 level=info timestamp=2018-07-25T18:06:51.188985Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 Pod name: virt-controller-7d57d96b65-9zfx6 Pod phase: Running level=info timestamp=2018-07-25T17:28:24.225987Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-b4lkv Pod phase: Running level=info timestamp=2018-07-25T18:02:05.575824Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi5lzbp kind= uid=d712eb3c-9034-11e8-b46b-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T18:02:05.575933Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi5lzbp kind= uid=d712eb3c-9034-11e8-b46b-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T18:02:05.582719Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmikns26\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmikns26" level=info timestamp=2018-07-25T18:02:06.015770Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmijtf6r\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmijtf6r" level=info timestamp=2018-07-25T18:04:13.187979Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibxvvz kind= uid=23245869-9035-11e8-b46b-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T18:04:13.191615Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmibxvvz kind= uid=23245869-9035-11e8-b46b-525500d15501 msg="Marking VirtualMachineInstance as 
initialized" level=info timestamp=2018-07-25T18:04:13.341227Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmibxvvz\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmibxvvz" level=info timestamp=2018-07-25T18:04:51.078444Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirls84 kind= uid=39ba7e4e-9035-11e8-b46b-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T18:04:51.080020Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirls84 kind= uid=39ba7e4e-9035-11e8-b46b-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T18:05:26.042488Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihv8xg kind= uid=4e92b833-9035-11e8-b46b-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T18:05:26.045600Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihv8xg kind= uid=4e92b833-9035-11e8-b46b-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T18:05:57.199937Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirfh8b kind= uid=6124241c-9035-11e8-b46b-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T18:05:57.204120Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmirfh8b kind= uid=6124241c-9035-11e8-b46b-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T18:06:34.743788Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi242gt kind= uid=77832530-9035-11e8-b46b-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T18:06:34.756853Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi242gt kind= uid=77832530-9035-11e8-b46b-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-handler-2jfl2 Pod phase: Running level=info timestamp=2018-07-25T18:05:08.733820Z pos=vm.go:392 component=virt-handler namespace=kubevirt-test-default name=testvmirls84 kind= uid=39ba7e4e-9035-11e8-b46b-525500d15501 msg="No update processing required" level=info timestamp=2018-07-25T18:05:08.843887Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmirls84 kind= uid=39ba7e4e-9035-11e8-b46b-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T18:05:08.848947Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmirls84 kind= uid=39ba7e4e-9035-11e8-b46b-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-25T18:05:08.970208Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmirls84 kind= uid=39ba7e4e-9035-11e8-b46b-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-25T18:06:15.763514Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmirfh8b kind= uid=6124241c-9035-11e8-b46b-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-25T18:06:16.709475Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type ADDED" level=info timestamp=2018-07-25T18:06:16.709734Z pos=vm.go:657 component=virt-handler namespace=kubevirt-test-default name=testvmirfh8b kind=Domain uid=6124241c-9035-11e8-b46b-525500d15501 msg="Domain is in state Paused reason StartingUp" level=info timestamp=2018-07-25T18:06:17.028980Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-25T18:06:17.029155Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvmirfh8b kind=Domain uid=6124241c-9035-11e8-b46b-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-25T18:06:17.039753Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmirfh8b kind= uid=6124241c-9035-11e8-b46b-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T18:06:17.039870Z pos=vm.go:392 component=virt-handler namespace=kubevirt-test-default name=testvmirfh8b kind= uid=6124241c-9035-11e8-b46b-525500d15501 msg="No update processing required" level=info timestamp=2018-07-25T18:06:17.051252Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-25T18:06:17.078216Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmirfh8b kind= uid=6124241c-9035-11e8-b46b-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T18:06:17.078366Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmirfh8b kind= uid=6124241c-9035-11e8-b46b-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-25T18:06:17.083075Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmirfh8b kind= uid=6124241c-9035-11e8-b46b-525500d15501 msg="Synchronization loop succeeded." Pod name: virt-handler-6mhzm Pod phase: Running level=info timestamp=2018-07-25T18:05:40.441943Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmihv8xg kind= uid=4e92b833-9035-11e8-b46b-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-25T18:05:41.059656Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type ADDED" level=info timestamp=2018-07-25T18:05:41.060096Z pos=vm.go:657 component=virt-handler namespace=kubevirt-test-default name=testvmihv8xg kind=Domain uid=4e92b833-9035-11e8-b46b-525500d15501 msg="Domain is in state Paused reason StartingUp" level=info timestamp=2018-07-25T18:05:41.268549Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-25T18:05:41.268735Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvmihv8xg kind=Domain uid=4e92b833-9035-11e8-b46b-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-25T18:05:41.327874Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmihv8xg kind= uid=4e92b833-9035-11e8-b46b-525500d15501 msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-25T18:05:41.328370Z pos=vm.go:392 component=virt-handler namespace=kubevirt-test-default name=testvmihv8xg kind= uid=4e92b833-9035-11e8-b46b-525500d15501 msg="No update processing required" level=info timestamp=2018-07-25T18:05:41.335970Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-25T18:05:41.443634Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmihv8xg kind= uid=4e92b833-9035-11e8-b46b-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T18:05:41.443961Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmihv8xg kind= uid=4e92b833-9035-11e8-b46b-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-25T18:05:41.476649Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmihv8xg kind= uid=4e92b833-9035-11e8-b46b-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T18:06:51.047467Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmi242gt kind= uid=77832530-9035-11e8-b46b-525500d15501 msg="Processing vmi update" level=error timestamp=2018-07-25T18:06:51.184715Z pos=vm.go:397 component=virt-handler namespace=kubevirt-test-default name=testvmi242gt kind= uid=77832530-9035-11e8-b46b-525500d15501 reason="server error. command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')" msg="Synchronizing the VirtualMachineInstance failed." level=info timestamp=2018-07-25T18:06:51.296055Z pos=vm.go:251 component=virt-handler reason="server error. command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')" msg="re-enqueuing VirtualMachineInstance kubevirt-test-default/testvmi242gt" level=info timestamp=2018-07-25T18:06:51.316559Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmi242gt kind= uid=77832530-9035-11e8-b46b-525500d15501 msg="Processing vmi update" Pod name: netcat5stn6 Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.110 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcat7v9ct Pod phase: Succeeded ++ head -n 1 +++ nc myservice.kubevirt-test-default 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcath79mp Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.110 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' Hello World! + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 succeeded Pod name: netcatj5s8w Pod phase: Failed ++ head -n 1 +++ nc wrongservice.kubevirt-test-default 1500 -i 1 -w 1 Ncat: Could not resolve hostname "wrongservice.kubevirt-test-default": Name or service not known. QUITTING. + x= + echo '' + '[' '' = 'Hello World!' ']' + echo failed + exit 1 failed Pod name: netcatrqm4f Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.110 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcatsppvf Pod phase: Succeeded ++ head -n 1 +++ nc my-subdomain.myvmi.kubevirt-test-default 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' 
']' + echo succeeded + exit 0 Pod name: netcatzb2zf Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.110 1500 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: virt-launcher-testvmi242gt-gp7wl Pod phase: Running level=info timestamp=2018-07-25T18:06:51.530730Z pos=manager.go:157 component=virt-launcher namespace=kubevirt-test-default name=testvmi242gt kind= uid=77832530-9035-11e8-b46b-525500d15501 msg="Domain defined." level=info timestamp=2018-07-25T18:06:51.531128Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-07-25T18:06:52.039857Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-25T18:06:52.069544Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:06:52.258815Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID f700fa79-f871-4205-a22b-681394bc4974" level=info timestamp=2018-07-25T18:06:52.259135Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-25T18:06:52.471095Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-25T18:06:52.491958Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:06:52.501641Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:06:52.503515Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-25T18:06:52.514632Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmi242gt kind= uid=77832530-9035-11e8-b46b-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-25T18:06:52.518556Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi242gt kind= uid=77832530-9035-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:06:52.518562Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:06:52.525483Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:06:52.677607Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi242gt kind= uid=77832530-9035-11e8-b46b-525500d15501 msg="Synced vmi" Pod name: virt-launcher-testvmi5lzbp-gzx4k Pod phase: Running level=info timestamp=2018-07-25T18:02:21.026388Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-25T18:02:21.034710Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID fe7f7fff-9c8f-4cfb-88f9-9e5980a0ff1f" level=info timestamp=2018-07-25T18:02:21.035496Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-25T18:02:21.045381Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:02:21.839334Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-25T18:02:21.855858Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:02:21.860245Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:02:21.866644Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-25T18:02:21.877771Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmi5lzbp kind= uid=d712eb3c-9034-11e8-b46b-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-25T18:02:21.879776Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi5lzbp kind= uid=d712eb3c-9034-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:02:21.883978Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:02:21.899810Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:02:21.942649Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi5lzbp kind= uid=d712eb3c-9034-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:02:21.949930Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi5lzbp kind= uid=d712eb3c-9034-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:02:22.039928Z pos=monitor.go:222 component=virt-launcher msg="Found PID for fe7f7fff-9c8f-4cfb-88f9-9e5980a0ff1f: 189" Pod name: virt-launcher-testvmibxvvz-5g4px Pod phase: Running level=info timestamp=2018-07-25T18:04:29.261109Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-07-25T18:04:29.764678Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-25T18:04:29.771950Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:04:29.905443Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID 5c5d8a49-7dc9-4864-8ac8-8158b79eb874" level=info timestamp=2018-07-25T18:04:29.905639Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-25T18:04:30.162751Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-25T18:04:30.186029Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:04:30.188372Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:04:30.209414Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-25T18:04:30.230496Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmibxvvz kind= uid=23245869-9035-11e8-b46b-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-25T18:04:30.238356Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmibxvvz kind= uid=23245869-9035-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:04:30.241116Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:04:30.245782Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:04:30.310428Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmibxvvz kind= uid=23245869-9035-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:04:30.911763Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 5c5d8a49-7dc9-4864-8ac8-8158b79eb874: 182" Pod name: virt-launcher-testvmihv8xg-zsrpn Pod phase: Running level=info timestamp=2018-07-25T18:05:40.587926Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-07-25T18:05:41.049694Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-25T18:05:41.060877Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:05:41.243970Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-25T18:05:41.250111Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID ab21c891-5dce-47a6-9e74-5c524054fadc" level=info timestamp=2018-07-25T18:05:41.250890Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-25T18:05:41.267314Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:05:41.269162Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:05:41.291453Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-25T18:05:41.297359Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmihv8xg kind= uid=4e92b833-9035-11e8-b46b-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-25T18:05:41.299128Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmihv8xg kind= uid=4e92b833-9035-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:05:41.330393Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:05:41.338546Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:05:41.475766Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmihv8xg kind= uid=4e92b833-9035-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:05:42.255147Z pos=monitor.go:222 component=virt-launcher msg="Found PID for ab21c891-5dce-47a6-9e74-5c524054fadc: 178" Pod name: virt-launcher-testvmijtf6r-gchsv Pod phase: Running level=info timestamp=2018-07-25T18:02:24.195916Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID 447e3e94-4037-4369-9d9b-1e26f7b5054c" level=info timestamp=2018-07-25T18:02:24.196257Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-25T18:02:24.196368Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-25T18:02:24.214715Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:02:24.655497Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-25T18:02:24.737403Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmijtf6r kind= uid=d707e727-9034-11e8-b46b-525500d15501 msg="Domain started." level=info timestamp=2018-07-25T18:02:24.746974Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmijtf6r kind= uid=d707e727-9034-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:02:24.738573Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:02:24.761127Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:02:24.761296Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-25T18:02:24.798704Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:02:24.808667Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:02:24.810859Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmijtf6r kind= uid=d707e727-9034-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:02:24.815051Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmijtf6r kind= uid=d707e727-9034-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:02:25.201050Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 447e3e94-4037-4369-9d9b-1e26f7b5054c: 194" Pod name: virt-launcher-testvmikns26-m8h62 Pod phase: Running level=info timestamp=2018-07-25T18:02:24.495772Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-25T18:02:24.500102Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID 24b6ad83-4ba2-4989-98a9-e60327d09e65" level=info timestamp=2018-07-25T18:02:24.500448Z pos=monitor.go:253 component=virt-launcher msg="Monitoring 
loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-25T18:02:24.513696Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:02:24.880378Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-25T18:02:25.026690Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmikns26 kind= uid=d704833b-9034-11e8-b46b-525500d15501 msg="Domain started." level=info timestamp=2018-07-25T18:02:25.027059Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:02:25.037887Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmikns26 kind= uid=d704833b-9034-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:02:25.039036Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:02:25.039139Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-25T18:02:25.070535Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:02:25.073208Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmikns26 kind= uid=d704833b-9034-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:02:25.087575Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:02:25.090077Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmikns26 kind= uid=d704833b-9034-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:02:25.508854Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 24b6ad83-4ba2-4989-98a9-e60327d09e65: 194" Pod name: virt-launcher-testvmirfh8b-tsh7x Pod phase: Running level=info timestamp=2018-07-25T18:06:16.719299Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID 1cb1a6da-976b-4b21-9150-da2761391a02" level=info timestamp=2018-07-25T18:06:16.732544Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-25T18:06:16.945847Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-25T18:06:17.026647Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:06:17.027401Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmirfh8b kind= uid=6124241c-9035-11e8-b46b-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-25T18:06:17.031059Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmirfh8b kind= uid=6124241c-9035-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:06:17.031892Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:06:17.031980Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-25T18:06:17.049717Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:06:17.051727Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:06:17.080047Z pos=converter.go:523 component=virt-launcher msg="The network interface type of default was changed to e1000 due to unsupported interface type by qemu slirp network" level=info timestamp=2018-07-25T18:06:17.080517Z pos=converter.go:739 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n" level=info timestamp=2018-07-25T18:06:17.080552Z pos=converter.go:740 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local" level=info timestamp=2018-07-25T18:06:17.082687Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmirfh8b kind= uid=6124241c-9035-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:06:17.736662Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 1cb1a6da-976b-4b21-9150-da2761391a02: 191" Pod name: virt-launcher-testvmirls84-r2cft Pod phase: Running level=info timestamp=2018-07-25T18:05:07.658778Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-07-25T18:05:08.346143Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-25T18:05:08.361759Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:05:08.481634Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID 5a82d64c-6d3d-4efa-b37c-1d9c7d67b3c7" level=info timestamp=2018-07-25T18:05:08.481870Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-25T18:05:08.662907Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-25T18:05:08.681116Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:05:08.684090Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:05:08.705162Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-25T18:05:08.726239Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmirls84 kind= uid=39ba7e4e-9035-11e8-b46b-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-25T18:05:08.728914Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmirls84 kind= uid=39ba7e4e-9035-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:05:08.730001Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:05:08.736522Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:05:08.955024Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmirls84 kind= uid=39ba7e4e-9035-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:05:09.485707Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 5a82d64c-6d3d-4efa-b37c-1d9c7d67b3c7: 182" Pod name: virt-launcher-testvmiz9vzl-mktw4 Pod phase: Running level=info timestamp=2018-07-25T18:02:21.439714Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-07-25T18:02:22.211392Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-25T18:02:22.215832Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID 0549bdf7-64b0-489c-9e3d-ede56551a1d6" level=info timestamp=2018-07-25T18:02:22.216015Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-25T18:02:22.220481Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:02:22.390808Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-25T18:02:22.408571Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:02:22.413322Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:02:22.413426Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-25T18:02:22.432084Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmiz9vzl kind= uid=d7014662-9034-11e8-b46b-525500d15501 msg="Domain started." level=info timestamp=2018-07-25T18:02:22.439673Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmiz9vzl kind= uid=d7014662-9034-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:02:22.443606Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-25T18:02:22.451359Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T18:02:22.474856Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmiz9vzl kind= uid=d7014662-9034-11e8-b46b-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-25T18:02:23.221907Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 0549bdf7-64b0-489c-9e3d-ede56551a1d6: 189" • Failure [100.793 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with disabled automatic attachment of interfaces /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:451 should not configure any external interfaces [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:452 Unexpected Warning event received: testvmi242gt,77832530-9035-11e8-b46b-525500d15501: server error. 
command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')
Expected
: Warning
not to equal
: Warning
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:245
------------------------------
STEP: checking loopback is the only guest interface
level=info timestamp=2018-07-25T18:06:36.082375Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmi242gt kind=VirtualMachineInstance uid=77832530-9035-11e8-b46b-525500d15501 msg="Created virtual machine pod virt-launcher-testvmi242gt-gp7wl"
level=info timestamp=2018-07-25T18:06:51.477095Z pos=utils.go:243 component=tests namespace=kubevirt-test-default name=testvmi242gt kind=VirtualMachineInstance uid=77832530-9035-11e8-b46b-525500d15501 msg="Pod owner ship transferred to the node virt-launcher-testvmi242gt-gp7wl"
level=error timestamp=2018-07-25T18:06:51.713548Z pos=utils.go:241 component=tests namespace=kubevirt-test-default name=testvmi242gt kind=VirtualMachineInstance uid=77832530-9035-11e8-b46b-525500d15501 reason="unexpected warning event received" msg="server error. command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')"
• [SLOW TEST:20.130 seconds]
HookSidecars
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40
VMI definition
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58
with SM BIOS hook sidecar
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59
should successfully start with hook sidecar annotation
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:60
------------------------------
• [SLOW TEST:19.364 seconds]
HookSidecars
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40
VMI definition
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58
with SM BIOS hook sidecar
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59
should call Collect and OnDefineDomain on the hook sidecar
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:67
------------------------------
• [SLOW TEST:22.391 seconds]
HookSidecars
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40
VMI definition
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58
with SM BIOS hook sidecar
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59
should update domain XML with SM BIOS properties
/root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:83
------------------------------
•
------------------------------
• [SLOW TEST:5.074 seconds]
Subresource Api
/root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37
Rbac Authorization
/root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48
Without permissions
/root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:56
should not be able to access subresource endpoint
/root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:57
------------------------------
••
Waiting for namespace kubevirt-test-default to be removed, this can take a while ...
Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ...
Summarizing 2 Failures:
[Fail] VMIlifecycle Delete a VirtualMachineInstance with grace period greater than 0 [It] should run graceful shutdown
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:245
[Fail] Networking VirtualMachineInstance with disabled automatic attachment of interfaces [It] should not configure any external interfaces
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:245
Ran 129 of 145 Specs in 4065.101 seconds
FAIL!
-- 127 Passed | 2 Failed | 0 Pending | 16 Skipped
--- FAIL: TestTests (4065.11s)
FAIL
make: *** [functest] Error 1
+ make cluster-down
./cluster/down.sh
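Both failures were reported from the same unexpected-warning assertion in tests/utils.go:245; the one captured above carries the libvirt error "server error. command Launcher.Sync failed: virError(Code=0, Domain=0, Message='Missing error')", raised while virt-handler was syncing testvmi242gt with its virt-launcher pod. As a minimal sketch for re-running only the two failed specs on a cluster that is already up and synced, assuming this branch's Makefile forwards FUNC_TEST_ARGS to the ginkgo test binary (that forwarding is not visible in this log, so check hack/functests.sh first), something like the following could be used:

# hypothetical focused re-run of only the two failed specs; the regex matches
# the spec names listed under "Summarizing 2 Failures" above
FUNC_TEST_ARGS='--ginkgo.focus=graceful.shutdown|external.interfaces' make functest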