+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2 + WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2 + [[ k8s-1.10.3-release =~ openshift-.* ]] + [[ k8s-1.10.3-release =~ .*-1.9.3-.* ]] + export KUBEVIRT_PROVIDER=k8s-1.10.3 + KUBEVIRT_PROVIDER=k8s-1.10.3 + export KUBEVIRT_NUM_NODES=2 + KUBEVIRT_NUM_NODES=2 + export NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + export NAMESPACE=kube-system + NAMESPACE=kube-system + trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP + make cluster-down ./cluster/down.sh + make cluster-up ./cluster/up.sh Downloading ....... Downloading ....... 2018/07/16 09:55:49 Waiting for host: 192.168.66.101:22 2018/07/16 09:55:52 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/07/16 09:56:00 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/07/16 09:56:08 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/07/16 09:56:14 Connected to tcp://192.168.66.101:22 + kubeadm init --config /etc/kubernetes/kubeadm.conf [init] Using Kubernetes version: v1.10.3 [init] Using Authorization modes: [Node RBAC] [preflight] Running pre-flight checks. [WARNING FileExisting-crictl]: crictl not found in system path Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version. [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated etcd/ca certificate and key. [certificates] Generated etcd/server certificate and key. [certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1] [certificates] Generated etcd/peer certificate and key. [certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101] [certificates] Generated etcd/healthcheck-client certificate and key. [certificates] Generated apiserver-etcd-client certificate and key. [certificates] Generated sa key and public key. [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. 
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf" [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests". [init] This might take a minute or longer if the control plane images have to be pulled. [apiclient] All control plane components are healthy after 22.507010 seconds [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [markmaster] Will mark node node01 as master by adding a label and a taint [markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master="" [bootstraptoken] Using token: abcdef.1234567890123456 [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: kube-dns [addons] Applied essential addon: kube-proxy Your Kubernetes master has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:122e388e045a1d34ebedfa90ed7c0963c44962f3193e7fd0aa78f18651d177fb + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole.rbac.authorization.k8s.io "flannel" created clusterrolebinding.rbac.authorization.k8s.io "flannel" created serviceaccount "flannel" created configmap "kube-flannel-cfg" created daemonset.extensions "kube-flannel-ds" created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node "node01" untainted 2018/07/16 09:57:06 Waiting for host: 192.168.66.102:22 2018/07/16 09:57:09 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. 
Sleeping 5s
2018/07/16 09:57:21 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"
This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 43s v1.10.3
node02 NotReady 10s v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ grep NotReady
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n 'node02 NotReady 10s v1.10.3' ']'
+ echo 'Waiting for all nodes to become ready ...'
Waiting for all nodes to become ready ...
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 44s v1.10.3
node02 Ready 11s v1.10.3
+ kubectl_rc=0
+ sleep 10
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    54s       v1.10.3
node02    Ready               21s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
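The cluster-up phase above ends with the standard poll-until-ready pattern: list the nodes, retry while any of them still reports NotReady, then restore errexit and print the final node table. A minimal standalone sketch of that loop, assuming kubectl already points at the cluster (the job itself goes through cluster/kubectl.sh and sleeps 10 seconds between attempts, as the trace shows):

set +e
while true; do
    nodes=$(kubectl get nodes --no-headers)     # list nodes without the header row
    rc=$?
    not_ready=$(echo "$nodes" | grep NotReady)
    # done once kubectl succeeded and no node is still NotReady
    if [ "$rc" -eq 0 ] && [ -z "$not_ready" ]; then
        break
    fi
    echo 'Waiting for all nodes to become ready ...'
    sleep 10
done
set -e
echo 'Nodes are ready:'
kubectl get nodes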
Untagged: localhost:33093/kubevirt/virt-controller:devel Untagged: localhost:33093/kubevirt/virt-controller@sha256:aac4984d7542cffc8bc52351c426d9205dc12db3664ae9cb2b974d78239a9140 Deleted: sha256:64f7ae0020b5e2082687ae6ab5f7112c6c336c8c62a7e9a89a06b7315d3062d6 Deleted: sha256:87b42c35b4fd8d578c1fd14676829eecae00c41cc2c06840dbaf5d3277e81d87 Deleted: sha256:2e5d3529266745e9bf0adee4c34ee985d172f62d1aff5cc220dcc89b3ae31fe9 Deleted: sha256:9ee6ebc4f6edf260dccfb1f87caf7c661874cbbdc8747c28cc0fe01753cd31f6 Untagged: localhost:33093/kubevirt/virt-launcher:devel Untagged: localhost:33093/kubevirt/virt-launcher@sha256:23806e3620d295ca7176b2fe0a657541b3a62903e01022ee3e7b7c1346538d48 Deleted: sha256:8a7633c19a5b91debe8e278100aa74e5164c4444ce97e0eda8183e5e446fc088 Deleted: sha256:96f504edc471132d4c409ab8e36e424048da5bd1c83546ed11ef0df4546ca379 Deleted: sha256:0795c888c604d94ceae0c4454f7bba103fc316d99eec89d5aa0e7f97cb58504f Deleted: sha256:21fe588dfd36e9a463950ac251c175203beed769efeb907c6b1a35558484f26f Deleted: sha256:8c1e60cd37251b608bfc473e10d62a86b874401138a623dc7c0e1ef6710b119f Deleted: sha256:4ec62d2295854345932607c8d40e16568f238450dba57f74876c79077e92296b Deleted: sha256:d719c92ce0bfe1792bf1305261dc45dec81c448549b93f554263837adbc79a71 Deleted: sha256:131d789d73e993086acff4172468f3fedbb4ae84005ea0dee7cb6cb02c284efc Deleted: sha256:76aac423cecb63c0b9f32d1fdda1ae944d83618966eb8674e55a400046e85dcb Deleted: sha256:70a490a70f10f9965f966d2dccd9853c68a7f31c299e5d805531eb3b70f1f411 Deleted: sha256:d1dc56c534bc7e800dfe02b654c1b9ed34ba61f817d6a3a978e265ef00e1a740 Deleted: sha256:b51d11706d2a4a12072053c3790bb7e909b5d2f936744b72909d35728a6052dd Untagged: localhost:33093/kubevirt/virt-handler:devel Untagged: localhost:33093/kubevirt/virt-handler@sha256:7d87fa745e329683fd866765251cc00a44eab383dff9a5f26e478afc7a587336 Deleted: sha256:a22d7e29516f41d9a58924db6d457ecc717a187cca184f925bc9d841a7fb4f46 Deleted: sha256:880eb64cfb63c65b4b77ce4849c04da71c82fafee28af71178088d3cd035cc3a Deleted: sha256:1917b7b0ed2cc3227b337f992e6d05451ad6cd10f46c15d3c69e7215d4f6128a Deleted: sha256:bb32e908ac6bafa845e8b4ad6c9678791694cdd7b56dfe16239ea4ae3363f4d0 Untagged: localhost:33093/kubevirt/virt-api:devel Untagged: localhost:33093/kubevirt/virt-api@sha256:eadd13bc7748470c0e9692e69c4bba0ac39d41d63c19cee167e6d6568d61796f Deleted: sha256:c89792540bffb323e79ba21bd4a58650e26d6b9d5076a913e4a8c25207c04152 Deleted: sha256:be0e5bd358d0ad0f73d7c0b642780d7a055d6cf920656cbce4d57d8bb8fe30ca Deleted: sha256:ce2a0d5af2a9abd932434646b8a86db2c8fbea36afba761e61095dc93623a475 Deleted: sha256:be568eaac46a306fd6f0d5f54590bf94b33f97e5352575ebf7a84774fe06e45b Untagged: localhost:33093/kubevirt/subresource-access-test:devel Untagged: localhost:33093/kubevirt/subresource-access-test@sha256:fdc32c4672849e7b05ca58f4fbec34d3a8f74d82dcf568782c58c9a8d90286b9 Deleted: sha256:6b7c39ccd8096fccbc71dc93870fe0191e8b102c7e7603bfc31ebe2c7833dcdf Deleted: sha256:de670453a9ea5871468fbcf46041b4c4192bcec62b15099c26c6a325a6817209 Deleted: sha256:9c47c703ded53eee381d1997c7660d05e7cdf6538b19803d95ffc71a267c0702 Deleted: sha256:a9e616c735995010d0687e650ce99bc0fbc96454be2ef81fe165f6001677c713 sha256:7cd3760dedb673e0a3082a5eb3582d4cb71738af3f0ee9303e695bd12b9a4ee4 go version go1.10 linux/amd64 Waiting for rsyncd to be ready go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && 
./hack/build-copy-artifacts.sh sha256:7cd3760dedb673e0a3082a5eb3582d4cb71738af3f0ee9303e695bd12b9a4ee4 go version go1.10 linux/amd64 go version go1.10 linux/amd64 find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory Compiling tests... compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 38.11 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 45ed71cd684b Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> ba8171a31e93 Step 5/8 : USER 1001 ---> Using cache ---> 6bd535be1fa1 Step 6/8 : COPY virt-controller /usr/bin/virt-controller ---> 7201d4b5edc3 Removing intermediate container 559e95a4ac1f Step 7/8 : ENTRYPOINT /usr/bin/virt-controller ---> Running in 927d935a1c8e ---> 9888f6d13feb Removing intermediate container 927d935a1c8e Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-controller" '' ---> Running in d2f71f77f378 ---> 92c43f9b48fd Removing intermediate container d2f71f77f378 Successfully built 92c43f9b48fd Sending build context to Docker daemon 40.44 MB Step 1/10 : FROM kubevirt/libvirt:3.7.0 ---> 60c80c8f7523 Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3bbd31ef6597 Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> 7fca7eb9d4da Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher ---> 6d5109255708 Removing intermediate container 9d46d58a4c5d Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> 278077851377 Removing intermediate container f49da46a3614 Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64 ---> Running in ba56ada26ddc  ---> 9ccf33b48a7d Removing intermediate container ba56ada26ddc Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher ---> Running in 55c6a5952bf5  ---> 0f2a647ef0c9 Removing intermediate container 55c6a5952bf5 Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector /usr/share/kubevirt/virt-launcher/ ---> 21b44a0455e4 Removing intermediate container f1a556598295 Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh ---> Running in 28a539487e48 ---> 3ba91c126d1c Removing intermediate container 28a539487e48 Step 10/10 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-launcher" '' ---> Running in e517f114eb4b ---> 312621a54d91 Removing intermediate container e517f114eb4b Successfully built 312621a54d91 Sending build context to Docker daemon 39.56 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/5 : COPY virt-handler /usr/bin/virt-handler ---> a8ac7a495e8c Removing intermediate container e6b0d7959872 Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in d01e4ea07682 ---> a85ec86c0eb7 Removing intermediate container d01e4ea07682 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-handler" '' ---> Running in 09dc9f5e993e ---> c98d1fbd12a8 Removing intermediate container 09dc9f5e993e Successfully built c98d1fbd12a8 Sending build context to Docker daemon 37.02 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/8 : RUN useradd -u 1001 --create-home -s 
/bin/bash virt-api ---> Using cache ---> 12e3c00eb78f Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> cfb92cbbf126 Step 5/8 : USER 1001 ---> Using cache ---> f02f77c7a4fc Step 6/8 : COPY virt-api /usr/bin/virt-api ---> 52dc7ca6bb8e Removing intermediate container 160c7bb23e00 Step 7/8 : ENTRYPOINT /usr/bin/virt-api ---> Running in edb30b66a5ec ---> dd75c525f51e Removing intermediate container edb30b66a5ec Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-api" '' ---> Running in eb15b51c1a49 ---> 114a58f8f8db Removing intermediate container eb15b51c1a49 Successfully built 114a58f8f8db Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:27 ---> 9110ae7f579f Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/7 : ENV container docker ---> Using cache ---> 1211fd5eb075 Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> ac806f8eae52 Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> e31eeb9c22c5 Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> ecb35f794669 Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Using cache ---> 75234ac7a2ae Successfully built 75234ac7a2ae Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/5 : ENV container docker ---> Using cache ---> 1211fd5eb075 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> 7b90d68258cd Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "vm-killer" '' ---> Using cache ---> c80f94883e99 Successfully built c80f94883e99 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> 4817bb6590f8 Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> b8b166db2544 Step 3/7 : ENV container docker ---> Using cache ---> 8b120f56086f Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> 61851ac93c11 Step 5/7 : ADD entry-point.sh / ---> Using cache ---> ada85930060d Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 6f2ffb0e7aed Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "registry-disk-v1alpha" '' ---> Using cache ---> 614f377d6b82 Successfully built 614f377d6b82 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33625/kubevirt/registry-disk-v1alpha:devel ---> 614f377d6b82 Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> 70dd46d85a1a Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> 8fcb32ae16d3 Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Using cache ---> 86778b6bd05d Successfully built 86778b6bd05d Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33625/kubevirt/registry-disk-v1alpha:devel ---> 614f377d6b82 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d47b41552e17 Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> 81dd1759f96f Step 4/4 : LABEL 
"fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Using cache ---> c6b5f56b50a2 Successfully built c6b5f56b50a2 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33625/kubevirt/registry-disk-v1alpha:devel ---> 614f377d6b82 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d47b41552e17 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> f894b620e679 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Using cache ---> 465a21c6820e Successfully built 465a21c6820e Sending build context to Docker daemon 34.04 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> 62cf8151a5f3 Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 7df4da9e1b5d Step 5/8 : USER 1001 ---> Using cache ---> 3ee421ac4ad4 Step 6/8 : COPY subresource-access-test /subresource-access-test ---> b23c2de9c3a8 Removing intermediate container a1abecc24cec Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in 8fc25d8cdf78 ---> 5a2591c8a7d4 Removing intermediate container 8fc25d8cdf78 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "subresource-access-test" '' ---> Running in 9c142eb44e2d ---> 8759496a910a Removing intermediate container 9c142eb44e2d Successfully built 8759496a910a Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:27 ---> 9110ae7f579f Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/9 : ENV container docker ---> Using cache ---> 1211fd5eb075 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> 7ff1a45e3635 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> a05ebaed4a0f Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> cd8398be9593 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 71c7ecd55e24 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 9689e3184427 Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "winrmcli" '' ---> Using cache ---> 136d6a97642d Successfully built 136d6a97642d Sending build context to Docker daemon 34.49 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar ---> 9b7de4d715cd Removing intermediate container 834177e053be Step 4/5 : ENTRYPOINT /example-hook-sidecar ---> Running in 927fefbabdd1 ---> 504d1af52074 Removing intermediate container 927fefbabdd1 Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Running in 481a5b3d7c76 ---> 667e510fbb89 Removing intermediate container 481a5b3d7c76 Successfully built 667e510fbb89 hack/build-docker.sh push The push refers to a repository [localhost:33625/kubevirt/virt-controller] eab2656b096e: Preparing c0d2c4546d78: Preparing 39bae602f753: Preparing c0d2c4546d78: Pushed eab2656b096e: Pushed 39bae602f753: Pushed devel: digest: sha256:fb0994450657f5c87c46327a5d17a69f7006070cb0113b3239b49e9265c7e397 size: 948 The push refers to a 
repository [localhost:33625/kubevirt/virt-launcher] 3f93513af505: Preparing 5f9a0b8533a5: Preparing e08e9c1e1c7d: Preparing a1df4f757509: Preparing 32f4f03e38df: Preparing fa30d8d5eeb1: Preparing 530cc55618cd: Preparing 34fa414dfdf6: Preparing a1359dc556dd: Preparing 490c7c373332: Preparing 4b440db36f72: Preparing 39bae602f753: Preparing 530cc55618cd: Waiting 34fa414dfdf6: Waiting a1359dc556dd: Waiting 490c7c373332: Waiting 4b440db36f72: Waiting fa30d8d5eeb1: Waiting 39bae602f753: Waiting 5f9a0b8533a5: Pushed 3f93513af505: Pushed a1df4f757509: Pushed 530cc55618cd: Pushed 34fa414dfdf6: Pushed 490c7c373332: Pushed a1359dc556dd: Pushed 39bae602f753: Mounted from kubevirt/virt-controller e08e9c1e1c7d: Pushed 32f4f03e38df: Pushed fa30d8d5eeb1: Pushed 4b440db36f72: Pushed devel: digest: sha256:d81f08c92befb6e4c59c62546a9b0e0f734f22626c00a83c8b86523058782fb5 size: 2828 The push refers to a repository [localhost:33625/kubevirt/virt-handler] f233f8ec3faa: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-launcher f233f8ec3faa: Pushed devel: digest: sha256:3990a6544f5443a85490612cb72f524d78a618f0f28a43ec28e715cb953b09b1 size: 741 The push refers to a repository [localhost:33625/kubevirt/virt-api] 262330d05fa1: Preparing ae4970287372: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-handler ae4970287372: Pushed 262330d05fa1: Pushed devel: digest: sha256:a1b16f07cf0fac1499e651c698b0df4cc16b0cbb0277a909962e0da385ec28e8 size: 948 The push refers to a repository [localhost:33625/kubevirt/disks-images-provider] 5c28b30e6fcd: Preparing 153871b39e50: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-api 5c28b30e6fcd: Pushed 153871b39e50: Pushed devel: digest: sha256:597ce36683b59367e6f2c3a0228ff4ef63a65af7579fda6565bed912c7d9b5eb size: 948 The push refers to a repository [localhost:33625/kubevirt/vm-killer] e3afff5758ce: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/disks-images-provider e3afff5758ce: Pushed devel: digest: sha256:3d6b3fd3a21c488afadb86384f2a34f52fcfe18bc666ab626a3253e00f2495ca size: 740 The push refers to a repository [localhost:33625/kubevirt/registry-disk-v1alpha] 376d512574a4: Preparing 7971c2f81ae9: Preparing e7752b410e4c: Preparing 376d512574a4: Pushed 7971c2f81ae9: Pushed e7752b410e4c: Pushed devel: digest: sha256:f67e713e9e4bfb0896927dc790eb57bc7d3ccf53bbb869d43be798317dcaa0e6 size: 948 The push refers to a repository [localhost:33625/kubevirt/cirros-registry-disk-demo] 10c5abc306ca: Preparing 376d512574a4: Preparing 7971c2f81ae9: Preparing e7752b410e4c: Preparing e7752b410e4c: Mounted from kubevirt/registry-disk-v1alpha 376d512574a4: Mounted from kubevirt/registry-disk-v1alpha 7971c2f81ae9: Mounted from kubevirt/registry-disk-v1alpha 10c5abc306ca: Pushed devel: digest: sha256:cdf0ed852603efb9478304842fa376adcc72c8e5905c93bd5b2f02b125999886 size: 1160 The push refers to a repository [localhost:33625/kubevirt/fedora-cloud-registry-disk-demo] 1f5262e9b254: Preparing 376d512574a4: Preparing 7971c2f81ae9: Preparing e7752b410e4c: Preparing 7971c2f81ae9: Mounted from kubevirt/cirros-registry-disk-demo e7752b410e4c: Mounted from kubevirt/cirros-registry-disk-demo 376d512574a4: Mounted from kubevirt/cirros-registry-disk-demo 1f5262e9b254: Pushed devel: digest: sha256:39b402633f237eede42e1265fe334d5f604ebbac14774a348cf6aa6c6e90f4d1 size: 1161 The push refers to a repository [localhost:33625/kubevirt/alpine-registry-disk-demo] 7f4da5a86514: Preparing 376d512574a4: Preparing 7971c2f81ae9: 
Preparing e7752b410e4c: Preparing 7971c2f81ae9: Mounted from kubevirt/fedora-cloud-registry-disk-demo 376d512574a4: Mounted from kubevirt/fedora-cloud-registry-disk-demo e7752b410e4c: Mounted from kubevirt/fedora-cloud-registry-disk-demo 7f4da5a86514: Pushed devel: digest: sha256:092d68a17a3915e8f2a0a6160adf93193b17cf20c51819ae5cd296df56eba1fb size: 1160 The push refers to a repository [localhost:33625/kubevirt/subresource-access-test] 0b72b89dbf10: Preparing 2aaca144a3e2: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/vm-killer 2aaca144a3e2: Pushed 0b72b89dbf10: Pushed devel: digest: sha256:297e2c20fba71047693844e945a77569e42d4150361be7762d65cd8d5271b067 size: 948 The push refers to a repository [localhost:33625/kubevirt/winrmcli] 3cd438b33e81: Preparing 8519683f2557: Preparing a29ba32ac0a1: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/subresource-access-test 3cd438b33e81: Pushed a29ba32ac0a1: Pushed 8519683f2557: Pushed devel: digest: sha256:39e1eb630a438319a385a4cf6a7a1679026ef686f1856cd31f7e90e3802cc9a7 size: 1165 The push refers to a repository [localhost:33625/kubevirt/example-hook-sidecar] 2c342bf9fc69: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/winrmcli 2c342bf9fc69: Pushed devel: digest: sha256:73495d13f84910881d1874d5b986aa0015cb2ca3abe6367ec24c4533ce3b1d1d size: 740 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-44-g3d60f85 ++ KUBEVIRT_VERSION=v0.7.0-44-g3d60f85 + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v 
/var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:33625/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... 
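With the configuration layered (hack/config-default.sh, then hack/config-k8s-1.10.3.sh, then hack/config-provider-k8s-1.10.3.sh overriding master_ip, docker_tag and the pinned kubeconfig/kubectl paths), cluster/clean.sh removes every KubeVirt-labelled object before the fresh deploy. The trace below repeats the same delete for each resource type in each namespace; condensed into a sketch (resource list abridged, and the _kubectl wrapper expanded inline for illustration; the real script additionally handles foreground-delete finalizers on VMIs and the legacy offlinevirtualmachines CRD, as the trace shows):

namespaces=(default kube-system)
for ns in "${namespaces[@]}"; do
    for kind in apiservices deployment rs services validatingwebhookconfiguration \
                secrets pv pvc ds customresourcedefinitions pods \
                clusterrolebinding rolebinding roles clusterroles serviceaccounts; do
        # every KubeVirt object carries the kubevirt.io label, so cleanup is label-driven
        KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig \
            cluster/k8s-1.10.3/.kubectl -n "$ns" delete "$kind" -l kubevirt.io
    done
done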
+ cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pods -l kubevirt.io No resources found + 
_kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ wc -l ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + 
KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/tests ++ 
APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-44-g3d60f85 ++ KUBEVIRT_VERSION=v0.7.0-44-g3d60f85 + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:33625/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... 
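The deploy step below is the -release branch of cluster/deploy.sh: apply every manifest under _out/manifests/release/ except the demo content, then recursively apply the testing manifests (PVs, PVCs, the disks-images-provider DaemonSet and the kubevirt-testing service account). As a sketch, with paths shortened to the repository-relative form used by the provider config:

export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
for manifest in _out/manifests/release/*; do
    # demo manifests (demo-content.yaml) are skipped on release runs
    [[ $manifest =~ .*demo.* ]] && continue
    cluster/k8s-1.10.3/.kubectl create -f "$manifest"
done
# testing fixtures are applied recursively from the testing directory
cluster/k8s-1.10.3/.kubectl create -f _out/manifests/testing -R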
+ [[ -z k8s-1.10.3-release ]] + [[ k8s-1.10.3-release =~ .*-dev ]] + [[ k8s-1.10.3-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole.rbac.authorization.k8s.io "kubevirt.io:admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:edit" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:view" created serviceaccount "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver-auth-delegator" created rolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created role.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller-cluster-admin" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-privileged-cluster-admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:default" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt.io:default" created service "virt-api" created deployment.extensions "virt-api" created deployment.extensions "virt-controller" created daemonset.extensions "virt-handler" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstances.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancereplicasets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancepresets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachines.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "host-path-disk-alpine" created persistentvolumeclaim "disk-custom" created persistentvolume "host-path-disk-custom" created daemonset.extensions "disks-images-provider" created serviceaccount "kubevirt-testing" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-testing-cluster-admin" created + [[ k8s-1.10.3 =~ os-* ]] + echo Done Done + namespaces=(kube-system default) + [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m 
]]
+ timeout=300
+ sample=30
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n kube-system --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n kube-system --no-headers
+ '[' -n 'virt-api-7d79764579-6qzjf 0/1 ContainerCreating 0 3s virt-api-7d79764579-kg9f4 0/1 ContainerCreating 0 3s virt-controller-7d57d96b65-cwmlx 0/1 ContainerCreating 0 3s virt-controller-7d57d96b65-jfn9j 0/1 ContainerCreating 0 3s virt-handler-6fg4q 0/1 ContainerCreating 0 3s virt-handler-7fhhd 0/1 ContainerCreating 0 3s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ cluster/kubectl.sh get pods -n kube-system --no-headers
+ grep -v Running
virt-api-7d79764579-6qzjf 0/1 ContainerCreating 0 3s
virt-api-7d79764579-kg9f4 0/1 ContainerCreating 0 3s
virt-controller-7d57d96b65-cwmlx 0/1 ContainerCreating 0 3s
virt-controller-7d57d96b65-jfn9j 0/1 ContainerCreating 0 3s
virt-handler-6fg4q 0/1 ContainerCreating 0 3s
virt-handler-7fhhd 0/1 ContainerCreating 0 3s
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n kube-system --no-headers
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
+ '[' -n false ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ grep false
false
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
+ '[' -n '' ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
disks-images-provider-lc9kh        1/1       Running   0          1m
disks-images-provider-q2mml        1/1       Running   0          1m
etcd-node01                        1/1       Running   0          10m
kube-apiserver-node01              1/1       Running   0          10m
kube-controller-manager-node01     1/1       Running   0          10m
kube-dns-86f4d74b45-gr4jd          3/3       Running   0          11m
kube-flannel-ds-g79sl              1/1       Running   0          10m
kube-flannel-ds-wgr4k              1/1       Running   1          11m
kube-proxy-d5xkn                   1/1       Running   0          11m
kube-proxy-xw78q                   1/1       Running   0          10m
kube-scheduler-node01              1/1       Running   0          10m
virt-api-7d79764579-6qzjf          1/1       Running   0          1m
virt-api-7d79764579-kg9f4          1/1       Running   0          1m
virt-controller-7d57d96b65-cwmlx   1/1       Running   0          1m
virt-controller-7d57d96b65-jfn9j   1/1       Running   0          1m
virt-handler-6fg4q                 1/1       Running   0          1m
virt-handler-7fhhd                 1/1       Running   0          1m
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n default --no-headers
++ cluster/kubectl.sh get pods -n default --no-headers
++ grep -v Running
No resources found.
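The readiness gate above runs in two stages per namespace: first wait until no pod reports a non-Running status, then wait until no container reports ready=false via the custom-columns query, sampling every 30 seconds against a 300-second budget. A simplified sketch of the same gate for kube-system (timeout handling reduced to a plain exit; the job repeats the loop for the default namespace next):

timeout=300
sample=30
current_time=0
until [ -z "$(kubectl get pods -n kube-system --no-headers | grep -v Running)" ]; do
    echo 'Waiting for kubevirt pods to enter the Running state ...'
    sleep "$sample"
    current_time=$((current_time + sample))
    [ "$current_time" -gt "$timeout" ] && exit 1
done
current_time=0
until [ -z "$(kubectl get pods -n kube-system \
        '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers | grep false)" ]; do
    echo 'Waiting for KubeVirt containers to become ready ...'
    sleep "$sample"
    current_time=$((current_time + sample))
    [ "$current_time" -gt "$timeout" ] && exit 1
done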
+ '[' -n '' ']' + current_time=0 ++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false ++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n '' ']' + kubectl get pods -n default + cluster/kubectl.sh get pods -n default No resources found. + kubectl version + cluster/kubectl.sh version Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} + ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/junit.xml' + [[ -d /home/nfs/images/windows2016 ]] + FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release@2/junit.xml' + make functest hack/dockerized "hack/build-func-tests.sh" sha256:7cd3760dedb673e0a3082a5eb3582d4cb71738af3f0ee9303e695bd12b9a4ee4 go version go1.10 linux/amd64 Waiting for rsyncd to be ready. go version go1.10 linux/amd64 Compiling tests... compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1531735832 Will run 140 of 140 specs S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to start a vmi [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to stop a running vmi [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.006 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have correct UUID /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have pod IP /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to start a 
vmi /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to stop a vmi /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ • [SLOW TEST:34.086 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should successfully start with hook sidecar annotation /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:60 ------------------------------ • [SLOW TEST:19.033 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should call Collect and OnDefineDomain on the hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:67 ------------------------------ • [SLOW TEST:32.188 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should update domain XML with SM BIOS properties /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:83 ------------------------------ volumedisk0 compute • [SLOW TEST:90.747 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with 3 CPU cores /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:56 should report 3 cpu cores under guest OS /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:62 ------------------------------ • [SLOW TEST:34.342 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:108 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-2Mi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ S [SKIPPING] [2.399 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:108 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-1Gi [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 No node 
with hugepages hugepages-1Gi capacity /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:160 ------------------------------ • ------------------------------ • [SLOW TEST:85.130 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238 when CPU model defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:277 should report defined CPU model /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:278 ------------------------------ • [SLOW TEST:80.018 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238 when CPU model not defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:305 should report CPU model from libvirt capabilities /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:306 ------------------------------ • [SLOW TEST:50.197 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 New VirtualMachineInstance with all supported drives /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:326 should have all the device nodes /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:349 ------------------------------ • ------------------------------ • [SLOW TEST:50.096 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:49.221 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with CDRom PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:105.668 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:140.171 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with CDRom PVC 
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:51.803 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With an emptyDisk defined /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:113 should create a writeable emptyDisk with the right capacity /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:115 ------------------------------ • [SLOW TEST:52.490 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With an emptyDisk defined and a specified serial number /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:163 should create a writeable emptyDisk with the specified serial number /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:165 ------------------------------ • [SLOW TEST:49.115 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205 should be successfully started /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:207 ------------------------------ • [SLOW TEST:103.880 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205 should not persist data /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:218 ------------------------------ • [SLOW TEST:129.659 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With VirtualMachineInstance with two PVCs /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:266 should start vmi multiple times /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:278 ------------------------------ • ------------------------------ • [SLOW TEST:50.277 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a cirros image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 should return that we are running cirros /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67 ------------------------------ • [SLOW TEST:55.471 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a fedora image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:76 should return that we are running fedora /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:77 ------------------------------ • [SLOW TEST:51.081 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 should be able to reconnect to console multiple times 
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:86 ------------------------------ ••••• ------------------------------ • [SLOW TEST:7.745 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should scale /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 to five, to six and then to zero replicas /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •• ------------------------------ • [SLOW TEST:20.242 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should update readyReplicas once VMIs are up /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:157 ------------------------------ • [SLOW TEST:5.511 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should remove VMIs once it is marked for deletion /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:169 ------------------------------ • ------------------------------ • [SLOW TEST:5.508 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should not scale when paused and scale when resume /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:223 ------------------------------ • ------------------------------ • [SLOW TEST:15.684 seconds] VNC /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:54 with VNC connection /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:62 should allow accessing the VNC device /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:64 ------------------------------ •• ------------------------------ • [SLOW TEST:35.505 seconds] LeaderElection /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:43 Start a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:53 when the controller pod is not running /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:54 should success /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:55 ------------------------------ • [SLOW TEST:120.579 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 should be able to reach /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 the Inbound VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ ••• ------------------------------ • [SLOW TEST:5.079 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on the same node from Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:5.217 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 should be reachable via the propagated IP from a Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 on a different node from Pod /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ ••••• 
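The Networking specs above work by starting a throwaway pod that connects to the VMI's pod-network IP and compares the reply against a fixed greeting. A rough manual equivalent is sketched below; the VMI name, namespace, and port are placeholders (the suite generates its own VMIs, and the in-guest responder used by these checks listens on port 1500), and the status field path is assumed from the KubeVirt VMI API rather than taken from this log:

  # Look up the IP that virt-handler propagated into the VMI status (field path assumed).
  VMI_IP=$(kubectl get vmi testvmi -n kubevirt-test-default \
    -o jsonpath='{.status.interfaces[0].ipAddress}')
  # Probe it from a short-lived pod, mirroring the netcat helper pods seen later in this log.
  kubectl run vmi-probe -n kubevirt-test-default -i --rm --restart=Never \
    --image=busybox -- sh -c "nc -w 5 $VMI_IP 1500 | head -n 1"
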
------------------------------ • [SLOW TEST:60.226 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom interface model /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:379 should expose the right device type to the guest /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:380 ------------------------------ • ------------------------------ • [SLOW TEST:56.380 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:413 should configure custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:414 ------------------------------ • [SLOW TEST:53.630 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom MAC address in non-conventional format /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:425 should configure custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:426 ------------------------------ • [SLOW TEST:57.709 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom MAC address and slirp interface /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:438 should configure custom MAC address /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:439 ------------------------------ Service cluster-ip-vm successfully exposed for virtualmachineinstance testvmi4ggxf • [SLOW TEST:61.955 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:68 Should expose a Cluster IP service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:71 ------------------------------ Service node-port-vm successfully exposed for virtualmachineinstance testvmi4ggxf • [SLOW TEST:10.324 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose NodePort service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:98 Should expose a NodePort service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:103 ------------------------------ Service cluster-ip-udp-vm successfully exposed for virtualmachineinstance testvmic899f • [SLOW TEST:65.400 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:140 Expose ClusterIP UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:147 Should expose a ClusterIP service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:151 ------------------------------ Service node-port-udp-vm successfully exposed for virtualmachineinstance testvmic899f Pod name: disks-images-provider-lc9kh Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-q2mml Pod phase: Running copy all images to host mount directory Pod name: virt-api-7d79764579-6qzjf Pod phase: Running level=info timestamp=2018-07-16T10:43:00.188618Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/openapi/v2 proto=HTTP/2.0 statusCode=404 
contentLength=19 level=info timestamp=2018-07-16T10:43:00.189834Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/swagger.json proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/16 10:43:04 http: TLS handshake error from 10.244.1.1:37750: EOF 2018/07/16 10:43:14 http: TLS handshake error from 10.244.1.1:37756: EOF level=info timestamp=2018-07-16T10:43:22.535243Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-16T10:43:23.573825Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-16T10:43:24.009702Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/16 10:43:24 http: TLS handshake error from 10.244.1.1:37762: EOF 2018/07/16 10:43:34 http: TLS handshake error from 10.244.1.1:37768: EOF 2018/07/16 10:43:44 http: TLS handshake error from 10.244.1.1:37774: EOF level=info timestamp=2018-07-16T10:43:52.593849Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-16T10:43:53.602814Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-16T10:43:53.996046Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/16 10:43:54 http: TLS handshake error from 10.244.1.1:37780: EOF 2018/07/16 10:44:04 http: TLS handshake error from 10.244.1.1:37786: EOF Pod name: virt-api-7d79764579-kg9f4 Pod phase: Running 2018/07/16 10:41:48 http: TLS handshake error from 10.244.0.1:60932: EOF 2018/07/16 10:41:58 http: TLS handshake error from 10.244.0.1:60958: EOF 2018/07/16 10:42:08 http: TLS handshake error from 10.244.0.1:60982: EOF 2018/07/16 10:42:18 http: TLS handshake error from 10.244.0.1:32780: EOF 2018/07/16 10:42:28 http: TLS handshake error from 10.244.0.1:32804: EOF 2018/07/16 10:42:38 http: TLS handshake error from 10.244.0.1:32828: EOF 2018/07/16 10:42:48 http: TLS handshake error from 10.244.0.1:32852: EOF 2018/07/16 10:42:58 http: TLS handshake error from 10.244.0.1:32876: EOF 2018/07/16 10:43:08 http: TLS handshake error from 10.244.0.1:32900: EOF 2018/07/16 10:43:18 http: TLS handshake error from 10.244.0.1:32924: EOF 2018/07/16 10:43:28 http: TLS handshake error from 10.244.0.1:32948: EOF 2018/07/16 10:43:38 http: TLS handshake error from 10.244.0.1:32972: EOF 2018/07/16 10:43:48 http: TLS handshake error from 10.244.0.1:32996: EOF 2018/07/16 10:43:58 http: TLS handshake error from 10.244.0.1:33020: EOF 2018/07/16 10:44:08 http: TLS handshake error from 10.244.0.1:33044: EOF Pod name: virt-controller-7d57d96b65-cwmlx Pod phase: Running level=info timestamp=2018-07-16T10:34:26.502598Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi2fwp5\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance 
kubevirt-test-default/testvmi2fwp5" level=info timestamp=2018-07-16T10:34:26.682560Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmisbxzd\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmisbxzd" level=info timestamp=2018-07-16T10:34:27.282742Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi26tmn\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi26tmn" level=info timestamp=2018-07-16T10:36:59.756226Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi9lm8d kind= uid=2b6f4493-88e4-11e8-bde3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-16T10:36:59.758907Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi9lm8d kind= uid=2b6f4493-88e4-11e8-bde3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-16T10:38:00.794654Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihjfg5 kind= uid=4fd16eec-88e4-11e8-bde3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-16T10:38:00.794785Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmihjfg5 kind= uid=4fd16eec-88e4-11e8-bde3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-16T10:38:57.186718Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi9dh4b kind= uid=716c60ee-88e4-11e8-bde3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-16T10:38:57.187791Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi9dh4b kind= uid=716c60ee-88e4-11e8-bde3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-16T10:39:50.808223Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvminvffg kind= uid=9163bb25-88e4-11e8-bde3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-16T10:39:50.809650Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvminvffg kind= uid=9163bb25-88e4-11e8-bde3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-16T10:40:48.516544Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4ggxf kind= uid=b3c97ebb-88e4-11e8-bde3-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-16T10:40:48.516670Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi4ggxf kind= uid=b3c97ebb-88e4-11e8-bde3-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-16T10:42:00.795565Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmic899f kind= uid=dede5477-88e4-11e8-bde3-525500d15501 msg="Initializing VirtualMachineInstance" level=info 
timestamp=2018-07-16T10:42:00.800611Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmic899f kind= uid=dede5477-88e4-11e8-bde3-525500d15501 msg="Marking VirtualMachineInstance as initialized" Pod name: virt-controller-7d57d96b65-wtjpv Pod phase: Running level=info timestamp=2018-07-16T10:33:54.336796Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-handler-6fg4q Pod phase: Running level=info timestamp=2018-07-16T10:41:05.029183Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-16T10:41:05.031927Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi4ggxf kind= uid=b3c97ebb-88e4-11e8-bde3-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-16T10:41:05.032503Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmi4ggxf kind= uid=b3c97ebb-88e4-11e8-bde3-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-16T10:41:05.046165Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi4ggxf kind= uid=b3c97ebb-88e4-11e8-bde3-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-16T10:42:16.203164Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmic899f kind= uid=dede5477-88e4-11e8-bde3-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-16T10:42:16.767185Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type ADDED" level=info timestamp=2018-07-16T10:42:16.767634Z pos=vm.go:657 component=virt-handler namespace=kubevirt-test-default name=testvmic899f kind=Domain uid=dede5477-88e4-11e8-bde3-525500d15501 msg="Domain is in state Paused reason Unknown" level=info timestamp=2018-07-16T10:42:17.742741Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-16T10:42:17.750644Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvmic899f kind=Domain uid=dede5477-88e4-11e8-bde3-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-16T10:42:17.752648Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmic899f kind= uid=dede5477-88e4-11e8-bde3-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-16T10:42:17.753413Z pos=vm.go:392 component=virt-handler namespace=kubevirt-test-default name=testvmic899f kind= uid=dede5477-88e4-11e8-bde3-525500d15501 msg="No update processing required" level=info timestamp=2018-07-16T10:42:17.766313Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-16T10:42:17.781434Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmic899f kind= uid=dede5477-88e4-11e8-bde3-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-16T10:42:17.781579Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmic899f kind= uid=dede5477-88e4-11e8-bde3-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-16T10:42:17.788761Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmic899f kind= uid=dede5477-88e4-11e8-bde3-525500d15501 msg="Synchronization loop succeeded." 
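The pod logs interleaved in this failure dump are the standard KubeVirt component streams (virt-api, virt-controller, virt-handler, virt-launcher). To pull the same streams from a live cluster for comparison, something along these lines should work; the kubevirt.io component labels are assumed from the stock KubeVirt manifests, and the -n values are assumptions to adjust to wherever the components and test VMIs actually run:

  kubectl logs -n kube-system -l kubevirt.io=virt-api --tail=50
  kubectl logs -n kube-system -l kubevirt.io=virt-controller --tail=50
  kubectl logs -n kube-system -l kubevirt.io=virt-handler --tail=50
  # virt-launcher pods live in the namespaces created by the test suite, e.g.:
  kubectl logs -n kubevirt-test-default -l kubevirt.io=virt-launcher --tail=50
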
Pod name: virt-handler-7fhhd Pod phase: Running level=info timestamp=2018-07-16T10:38:16.889449Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmihjfg5 kind= uid=4fd16eec-88e4-11e8-bde3-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-16T10:38:16.902376Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmihjfg5 kind= uid=4fd16eec-88e4-11e8-bde3-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-16T10:40:07.254362Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvminvffg kind= uid=9163bb25-88e4-11e8-bde3-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-16T10:40:07.797031Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type ADDED" level=info timestamp=2018-07-16T10:40:07.797222Z pos=vm.go:657 component=virt-handler namespace=kubevirt-test-default name=testvminvffg kind=Domain uid=9163bb25-88e4-11e8-bde3-525500d15501 msg="Domain is in state Paused reason Unknown" level=info timestamp=2018-07-16T10:40:08.812427Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-16T10:40:08.812596Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvminvffg kind=Domain uid=9163bb25-88e4-11e8-bde3-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-16T10:40:08.813843Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminvffg kind= uid=9163bb25-88e4-11e8-bde3-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-16T10:40:08.813901Z pos=vm.go:392 component=virt-handler namespace=kubevirt-test-default name=testvminvffg kind= uid=9163bb25-88e4-11e8-bde3-525500d15501 msg="No update processing required" level=info timestamp=2018-07-16T10:40:08.827637Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminvffg kind= uid=9163bb25-88e4-11e8-bde3-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-16T10:40:08.827789Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvminvffg kind= uid=9163bb25-88e4-11e8-bde3-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-16T10:40:08.843703Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-16T10:40:08.845880Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminvffg kind= uid=9163bb25-88e4-11e8-bde3-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-16T10:40:08.845983Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvminvffg kind= uid=9163bb25-88e4-11e8-bde3-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-16T10:40:08.851020Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvminvffg kind= uid=9163bb25-88e4-11e8-bde3-525500d15501 msg="Synchronization loop succeeded." Pod name: netcat2hzg6 Pod phase: Running ++ head -n 1 +++ nc -ul 31016 +++ nc -up 31016 192.168.66.102 31017 -i 1 -w 1 +++ echo Pod name: netcat5dn8q Pod phase: Failed ++ head -n 1 +++ nc wrongservice.kubevirt-test-default 1500 -i 1 -w 1 Ncat: Could not resolve hostname "wrongservice.kubevirt-test-default": Name or service not known. QUITTING. + x= + echo '' + '[' '' = 'Hello World!' 
']' + echo failed + exit 1 failed Pod name: netcat7jl25 Pod phase: Succeeded ++ head -n 1 +++ nc -ul 31016 +++ echo +++ nc -up 31016 192.168.66.101 31017 -i 1 -w 1 Hello UDP World! succeeded + x='Hello UDP World!' + echo 'Hello UDP World!' + '[' 'Hello UDP World!' = 'Hello UDP World!' ']' + echo succeeded + exit 0 Pod name: netcatfnd2s Pod phase: Succeeded ++ head -n 1 +++ nc 192.168.66.101 30017 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcatjr97j Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.50 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcatmrppd Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.50 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcatnww6t Pod phase: Succeeded ++ head -n 1 +++ echo +++ nc -ul 29016 +++ nc -up 29016 10.100.31.130 29017 -i 1 -w 1 + x='Hello UDP World!' + echo 'Hello UDP World!' + '[' 'Hello UDP World!' = 'Hello UDP World!' ']' + echo succeeded + exit 0 Hello UDP World! succeeded Pod name: netcatpcp5g Pod phase: Succeeded ++ head -n 1 +++ nc my-subdomain.myvmi.kubevirt-test-default 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcatpvzfs Pod phase: Succeeded ++ head -n 1 +++ nc 10.96.52.39 27017 -i 1 -w 1 Hello World! succeeded + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Pod name: netcatrcbqj Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.50 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcatrfg6t Pod phase: Succeeded ++ head -n 1 +++ nc myservice.kubevirt-test-default 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcattwq5g Pod phase: Succeeded ++ head -n 1 +++ nc 192.168.66.102 30017 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcatxm9ck Pod phase: Succeeded ++ head -n 1 +++ nc 10.244.1.50 1500 -i 1 -w 1 + x='Hello World!' + echo 'Hello World!' + '[' 'Hello World!' = 'Hello World!' ']' + echo succeeded + exit 0 Hello World! succeeded Pod name: netcatzgtnd Pod phase: Succeeded ++ head -n 1 +++ nc -ul 28016 +++ echo +++ nc -up 28016 10.104.16.93 28017 -i 1 -w 1 Hello UDP World! succeeded + x='Hello UDP World!' + echo 'Hello UDP World!' + '[' 'Hello UDP World!' = 'Hello UDP World!' 
']' + echo succeeded + exit 0 Pod name: virt-launcher-testvmi26tmn-fd7wb Pod phase: Running level=info timestamp=2018-07-16T10:34:50.427415Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID edcb535b-1bce-40ca-bb04-f881af4a7dac" level=info timestamp=2018-07-16T10:34:50.427926Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-16T10:34:50.428019Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:34:51.416889Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-16T10:34:51.445076Z pos=monitor.go:222 component=virt-launcher msg="Found PID for edcb535b-1bce-40ca-bb04-f881af4a7dac: 159" level=info timestamp=2018-07-16T10:34:51.462966Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmi26tmn kind= uid=cfff33d0-88e3-11e8-bde3-525500d15501 msg="Domain started." level=info timestamp=2018-07-16T10:34:51.464566Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi26tmn kind= uid=cfff33d0-88e3-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:34:51.525590Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:34:51.527252Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi26tmn kind= uid=cfff33d0-88e3-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:34:51.537540Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:34:51.537675Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-16T10:34:51.603024Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:34:51.604380Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi26tmn kind= uid=cfff33d0-88e3-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:34:51.612047Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:34:51.615028Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi26tmn kind= uid=cfff33d0-88e3-11e8-bde3-525500d15501 msg="Synced vmi" Pod name: virt-launcher-testvmi2fwp5-jhhkq Pod phase: Running level=info timestamp=2018-07-16T10:34:45.975924Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-16T10:34:45.982643Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:34:45.993625Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID 5d437955-68ce-4a28-a514-c3e9873d408a" level=info timestamp=2018-07-16T10:34:45.994732Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-16T10:34:46.696336Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-16T10:34:46.716753Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmi2fwp5 kind= uid=cff75173-88e3-11e8-bde3-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-16T10:34:46.725667Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi2fwp5 kind= uid=cff75173-88e3-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:34:46.726508Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:34:46.729790Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:34:46.729860Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-16T10:34:46.770242Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:34:46.772106Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi2fwp5 kind= uid=cff75173-88e3-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:34:46.778128Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi2fwp5 kind= uid=cff75173-88e3-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:34:46.778259Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:34:47.003897Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 5d437955-68ce-4a28-a514-c3e9873d408a: 151" Pod name: virt-launcher-testvmi4ggxf-86wcp Pod phase: Running level=info timestamp=2018-07-16T10:41:04.056951Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-07-16T10:41:04.478467Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-16T10:41:04.484114Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:41:04.940565Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID 31eb88f1-b17e-4299-a8e5-50d5db6e1455" level=info timestamp=2018-07-16T10:41:04.941248Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-16T10:41:04.956818Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-16T10:41:04.977229Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmi4ggxf kind= uid=b3c97ebb-88e4-11e8-bde3-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-16T10:41:04.979685Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi4ggxf kind= uid=b3c97ebb-88e4-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:41:04.980712Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:41:04.983570Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:41:04.983667Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-16T10:41:05.021723Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:41:05.036629Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi4ggxf kind= uid=b3c97ebb-88e4-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:41:05.071121Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:41:05.949009Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 31eb88f1-b17e-4299-a8e5-50d5db6e1455: 141" Pod name: virt-launcher-testvmi4xtl9-mqtwp Pod phase: Running level=info timestamp=2018-07-16T10:34:41.306098Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-07-16T10:34:41.731971Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-16T10:34:41.737622Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID c798d3bf-fad5-49d8-9fc2-2d6c8f2c0dc9" level=info timestamp=2018-07-16T10:34:41.737850Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-16T10:34:41.740053Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:34:42.102506Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-16T10:34:42.118873Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmi4xtl9 kind= uid=cff8ebb8-88e3-11e8-bde3-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-16T10:34:42.122224Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi4xtl9 kind= uid=cff8ebb8-88e3-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:34:42.123970Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:34:42.126592Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:34:42.126676Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-16T10:34:42.161679Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:34:42.169938Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:34:42.200284Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi4xtl9 kind= uid=cff8ebb8-88e3-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:34:42.745079Z pos=monitor.go:222 component=virt-launcher msg="Found PID for c798d3bf-fad5-49d8-9fc2-2d6c8f2c0dc9: 145" Pod name: virt-launcher-testvmi9dh4b-qxsw9 Pod phase: Running level=info timestamp=2018-07-16T10:39:10.876115Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-07-16T10:39:11.297481Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-16T10:39:11.303717Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:39:11.819798Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID 925b7b1e-2234-4d82-a084-c444ffa8dcb6" level=info timestamp=2018-07-16T10:39:11.820297Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-16T10:39:12.506594Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-16T10:39:12.550655Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmi9dh4b kind= uid=716c60ee-88e4-11e8-bde3-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-16T10:39:12.551970Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:39:12.555575Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:39:12.555668Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-16T10:39:12.557265Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi9dh4b kind= uid=716c60ee-88e4-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:39:12.573948Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:39:12.578118Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:39:12.596808Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi9dh4b kind= uid=716c60ee-88e4-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:39:12.825151Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 925b7b1e-2234-4d82-a084-c444ffa8dcb6: 138" Pod name: virt-launcher-testvmi9lm8d-7flwt Pod phase: Running level=info timestamp=2018-07-16T10:37:20.302958Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-16T10:37:20.309716Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:37:20.802557Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID cecb7f81-c9c7-497d-8e7e-153b580f4514" level=info timestamp=2018-07-16T10:37:20.803025Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-16T10:37:21.625776Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-16T10:37:21.691989Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmi9lm8d kind= uid=2b6f4493-88e4-11e8-bde3-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-16T10:37:21.695242Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi9lm8d kind= uid=2b6f4493-88e4-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:37:21.695921Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:37:21.699446Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:37:21.699506Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-16T10:37:21.729732Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:37:21.731023Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi9lm8d kind= uid=2b6f4493-88e4-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:37:21.748050Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:37:21.748920Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmi9lm8d kind= uid=2b6f4493-88e4-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:37:21.811021Z pos=monitor.go:222 component=virt-launcher msg="Found PID for cecb7f81-c9c7-497d-8e7e-153b580f4514: 152" Pod name: virt-launcher-testvmic899f-4fvmp Pod phase: Running level=info timestamp=2018-07-16T10:42:16.295617Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-07-16T10:42:16.762432Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-16T10:42:16.769724Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:42:17.047219Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID cd949753-b283-41a8-8784-3b003756b323" level=info timestamp=2018-07-16T10:42:17.047851Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-16T10:42:17.699625Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-16T10:42:17.717395Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmic899f kind= uid=dede5477-88e4-11e8-bde3-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-16T10:42:17.720610Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmic899f kind= uid=dede5477-88e4-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:42:17.721529Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:42:17.743167Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:42:17.743317Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-16T10:42:17.764449Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:42:17.766947Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:42:17.785613Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmic899f kind= uid=dede5477-88e4-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:42:18.057742Z pos=monitor.go:222 component=virt-launcher msg="Found PID for cd949753-b283-41a8-8784-3b003756b323: 146" Pod name: virt-launcher-testvmihjfg5-d4hp4 Pod phase: Running level=info timestamp=2018-07-16T10:38:16.416187Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-16T10:38:16.428480Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:38:16.550394Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID 227c532d-d6b0-4743-955e-aeb5beea79b6" level=info timestamp=2018-07-16T10:38:16.550738Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-16T10:38:16.829481Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-16T10:38:16.831425Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmihjfg5 kind= uid=4fd16eec-88e4-11e8-bde3-525500d15501 msg="Domain started." 
level=info timestamp=2018-07-16T10:38:16.847884Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmihjfg5 kind= uid=4fd16eec-88e4-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:38:16.850168Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:38:16.853540Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:38:16.853609Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-16T10:38:16.874207Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:38:16.879091Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:38:16.882350Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmihjfg5 kind= uid=4fd16eec-88e4-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:38:16.902056Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmihjfg5 kind= uid=4fd16eec-88e4-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:38:17.555203Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 227c532d-d6b0-4743-955e-aeb5beea79b6: 139" Pod name: virt-launcher-testvminvffg-27vdp Pod phase: Running level=info timestamp=2018-07-16T10:40:08.808378Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvminvffg kind= uid=9163bb25-88e4-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:40:08.811025Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:40:08.815779Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:40:08.815882Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-16T10:40:08.828710Z pos=converter.go:513 component=virt-launcher msg="The network interface type of default was changed to e1000 due to unsupported interface type by qemu slirp network" level=info timestamp=2018-07-16T10:40:08.831782Z pos=converter.go:729 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n" level=info timestamp=2018-07-16T10:40:08.831823Z pos=converter.go:730 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local svc.cluster.local cluster.local" level=info timestamp=2018-07-16T10:40:08.842525Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:40:08.845587Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvminvffg kind= uid=9163bb25-88e4-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:40:08.845695Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:40:08.847259Z pos=converter.go:513 component=virt-launcher msg="The network interface type of default was changed to e1000 due to unsupported interface type by qemu slirp network" level=info timestamp=2018-07-16T10:40:08.847619Z pos=converter.go:729 component=virt-launcher msg="Found nameservers in /etc/resolv.conf: \n`\u0000\n" level=info timestamp=2018-07-16T10:40:08.847640Z pos=converter.go:730 component=virt-launcher msg="Found search domains in /etc/resolv.conf: kubevirt-test-default.svc.cluster.local 
svc.cluster.local cluster.local" level=info timestamp=2018-07-16T10:40:08.850792Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvminvffg kind= uid=9163bb25-88e4-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:40:09.692348Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 24b02c83-df90-426b-9f69-d3b7fbcc8d70: 140" Pod name: virt-launcher-testvmisbxzd-wqp56 Pod phase: Running level=info timestamp=2018-07-16T10:34:46.970354Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-16T10:34:46.979082Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:34:47.210260Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID 1326c04d-60b1-4072-a895-1d97b7ff12a5" level=info timestamp=2018-07-16T10:34:47.210559Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-16T10:34:47.551904Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-16T10:34:47.675852Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmisbxzd kind= uid=cffaebcb-88e3-11e8-bde3-525500d15501 msg="Domain started." level=info timestamp=2018-07-16T10:34:47.677754Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:34:47.708902Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmisbxzd kind= uid=cffaebcb-88e3-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:34:47.709737Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:34:47.709830Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-16T10:34:47.754442Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-16T10:34:47.756813Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmisbxzd kind= uid=cffaebcb-88e3-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:34:47.777351Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-16T10:34:47.786807Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmisbxzd kind= uid=cffaebcb-88e3-11e8-bde3-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-16T10:34:48.218063Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 1326c04d-60b1-4072-a895-1d97b7ff12a5: 153" • Failure [69.969 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:140 Expose NodePort UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:179 Should expose a NodePort service on a VM and connect to it [It] /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:184 Timed out after 60.008s. 
Expected
    : Running
to equal
    : Succeeded
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:221
------------------------------
STEP: Exposing the service via virtctl command
STEP: Getting back the cluster IP given for the service
STEP: Starting a pod which tries to reach the VM via ClusterIP
STEP: Getting the node IP from all nodes
STEP: Starting a pod which tries to reach the VM via NodePort
STEP: Waiting for the pod to report a successful connection attempt
STEP: Starting a pod which tries to reach the VM via NodePort
STEP: Waiting for the pod to report a successful connection attempt
Service cluster-ip-vmrs successfully exposed for vmirs replicasetvz2tz
• [SLOW TEST:74.909 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
Expose service on a VM replica set
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:227
Expose ClusterIP service
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:260
Should create a ClusterIP service on VMRS and connect to it
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:264
------------------------------
Service cluster-ip-ovm successfully exposed for virtualmachine testvmibbnz6
• [SLOW TEST:72.628 seconds]
Expose
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53
Expose service on an Offline VM
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:292
Expose ClusterIP service
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:336
Connect to ClusterIP services that was set when VM was offline
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:337
------------------------------
• [SLOW TEST:7.105 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
Creating a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
should success
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:71
------------------------------
• [SLOW TEST:15.436 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
Creating a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
should start it
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:76
------------------------------
• [SLOW TEST:17.578 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
Creating a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
should attach virt-launcher to it
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:82
------------------------------
••••
------------------------------
• [SLOW TEST:52.551 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
Creating a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
with boot order
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:170
should be able to boot from selected disk
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
Alpine as first boot
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:27.938 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
Creating a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
with boot order
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:170
should be able to boot from selected disk
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        Cirros as first boot
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:15.741 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with user-data
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:201
      without k8s secret
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:202
        should retry starting the VirtualMachineInstance
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:203
------------------------------
• [SLOW TEST:16.206 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with user-data
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:201
      without k8s secret
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:202
        should log warning and proceed once the secret is there
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:233
------------------------------
• [SLOW TEST:41.296 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    when virt-launcher crashes
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:281
      should be stopped and have Failed phase
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:282
------------------------------
• [SLOW TEST:22.563 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    when virt-handler crashes
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:304
      should recover and continue management
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:305
------------------------------
•
------------------------------
• [SLOW TEST:95.541 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    when virt-handler is not responsive
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:366
      the node controller should react
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:405
------------------------------
S [SKIPPING] [0.073 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with non default namespace
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:458
      should log libvirt start and stop lifecycle events of the domain
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-default [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
        Skip log query tests for JENKINS ci test environment
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:463
------------------------------
S [SKIPPING] [0.063 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with non default namespace
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:458
      should log libvirt start and stop lifecycle events of the domain
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-alternative [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
        Skip log query tests for JENKINS ci test environment
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:463
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.059 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    VirtualMachineInstance Emulation Mode
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:519
      should enable emulation in virt-launcher [BeforeEach]
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:531
      Software emulation is not enabled on this cluster
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:527
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.057 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    VirtualMachineInstance Emulation Mode
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:519
      should be reflected in domain XML [BeforeEach]
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:568
      Software emulation is not enabled on this cluster
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:527
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.056 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    VirtualMachineInstance Emulation Mode
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:519
      should request a TUN device but not KVM [BeforeEach]
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:612
      Software emulation is not enabled on this cluster
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:527
------------------------------
••••
------------------------------
• [SLOW TEST:17.907 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Delete a VirtualMachineInstance's Pod
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:764
    should result in the VirtualMachineInstance moving to a finalized state
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:765
------------------------------
• [SLOW TEST:34.001 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Delete a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:796
    with an active pod.
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:797
      should result in pod being terminated
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:798
------------------------------
• [SLOW TEST:20.976 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Delete a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:796
    with grace period greater than 0
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:821
      should run graceful shutdown
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:822
------------------------------
• [SLOW TEST:29.987 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Killed VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:873
    should be in Failed phase
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:874
------------------------------
• [SLOW TEST:24.312 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Killed VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:873
    should be left alone by virt-handler
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:901
------------------------------
• [SLOW TEST:62.955 seconds]
Health Monitoring
/root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:37
  A VirtualMachineInstance with a watchdog device
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:56
    should be shut down when the watchdog expires
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:57
------------------------------
• [SLOW TEST:52.417 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80
    with cloudInitNoCloud userDataBase64 source
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81
      should have cloud-init data
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:82
------------------------------
• [SLOW TEST:165.614 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80
    with cloudInitNoCloud userDataBase64 source
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81
      with injected ssh-key
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:92
        should have ssh-key under authorized keys
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:93
------------------------------
• [SLOW TEST:58.420 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80
    with cloudInitNoCloud userData source
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:118
      should process provided cloud-init data
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:119
------------------------------
• [SLOW TEST:49.866 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80
    should take user-data from k8s secret
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:162
------------------------------
••
------------------------------
• [SLOW TEST:15.148 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should update VirtualMachine once VMIs are up
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:195
------------------------------
••
------------------------------
• [SLOW TEST:41.499 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should recreate VirtualMachineInstance if it gets deleted
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:245
------------------------------
• [SLOW TEST:75.917 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should recreate VirtualMachineInstance if the VirtualMachineInstance's pod gets deleted
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:265
------------------------------
• [SLOW TEST:23.362 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should stop VirtualMachineInstance if running set to false
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:325
------------------------------
• [SLOW TEST:320.940 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should start and stop VirtualMachineInstance multiple times
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:333
------------------------------
• [SLOW TEST:74.464 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should not update the VirtualMachineInstance spec if Running
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:346
------------------------------
• [SLOW TEST:225.510 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should survive guest shutdown, multiple times
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:387
------------------------------
• [SLOW TEST:16.311 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    Using virtctl interface
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435
      should start a VirtualMachineInstance once
      /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:436
------------------------------
• [SLOW TEST:43.461 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    Using virtctl interface
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435
      should stop a VirtualMachineInstance once
      /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:467
------------------------------
• [SLOW TEST:6.922 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:5.887 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given an vm
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:5.697 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi preset
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:5.838 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi replica set
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
•••••••••••
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
Templates
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template
    /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      should succeed to generate a VM JSON file using oc-process command
      /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:150
      Skip test that requires oc binary
      /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1383
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
Templates
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template
    /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      with given VM JSON from the Template
      /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152
        should succeed to create a VM using oc-create command
        /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:156
        Skip test that requires oc binary
        /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1383
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
Templates
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template
    /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      with given VM JSON from the Template
      /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152
        with given VM from the VM JSON
        /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158
          should succeed to launch a VMI using oc-patch command
          /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:161
          Skip test that requires oc binary
          /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1383
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
Templates
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template
    /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      with given VM JSON from the Template
      /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152
        with given VM from the VM JSON
        /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158
          with given VMI from the VM
          /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:163
            should succeed to terminate the VMI using oc-patch command
            /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:166
            Skip test that requires oc binary
            /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1383
------------------------------
• [SLOW TEST:150.542 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting and stopping the same VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90
    with ephemeral registry disk
    /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91
      should success multiple times
      /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92
------------------------------
• [SLOW TEST:14.586 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111
    with ephemeral registry disk
    /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112
      should not modify the spec on status update
      /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113
------------------------------
• [SLOW TEST:30.774 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting multiple VMIs
  /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129
    with ephemeral registry disk
    /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130
      should success
      /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131
------------------------------
• [SLOW TEST:122.797 seconds]
Slirp
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:39
  should be able to
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    VirtualMachineInstance with slirp interface
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
•
Waiting for namespace kubevirt-test-default to be removed, this can take a while ...
Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ...

Summarizing 1 Failure:

[Fail] Expose Expose UDP service on a VM Expose NodePort UDP service [It] Should expose a NodePort service on a VM and connect to it
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:221

Ran 124 of 140 Specs in 4225.715 seconds
FAIL! -- 123 Passed | 1 Failed | 0 Pending | 16 Skipped
--- FAIL: TestTests (4225.74s)
FAIL
make: *** [functest] Error 1
+ make cluster-down
./cluster/down.sh