+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ [[ k8s-1.10.3-dev =~ openshift-.* ]]
+ [[ k8s-1.10.3-dev =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/06/29 12:54:06 Waiting for host: 192.168.66.101:22
2018/06/29 12:54:09 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/06/29 12:54:17 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/06/29 12:54:22 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 24.004216 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node as root:
kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:68e1dd7cc16d12a35203fd182429e7f853b457b94084ad7ac18e8719ed24f5e8
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/06/29 12:55:06 Waiting for host: 192.168.66.102:22
2018/06/29 12:55:09 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/06/29 12:55:17 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: connection refused. Sleeping 5s
2018/06/29 12:55:22 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"
This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster. Sending file modes: C0755 39588992 kubectl Sending file modes: C0600 5454 admin.conf Cluster "kubernetes" set. Cluster "kubernetes" set. + set +e + kubectl get nodes --no-headers + cluster/kubectl.sh get nodes --no-headers node01 Ready master 33s v1.10.3 node02 Ready 10s v1.10.3 + kubectl_rc=0 + '[' 0 -ne 0 ']' ++ kubectl get nodes --no-headers ++ cluster/kubectl.sh get nodes --no-headers ++ grep NotReady + '[' -n '' ']' + set -e + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 34s v1.10.3 node02 Ready 11s v1.10.3 + make cluster-sync ./cluster/build.sh Building ... Untagged: localhost:32970/kubevirt/virt-controller:devel Untagged: localhost:32970/kubevirt/virt-controller@sha256:266a0fe9248bd3b26b7003843ac5019ae8dfc300ee6554b9a86cf94cae64da2a Deleted: sha256:b8b511e36269e8d056d2d7d896d00c99e04d637891cb2eacd74925e0aff45076 Deleted: sha256:39974eaafe7ec6c202ea1d099ed6ee712be7c64acdfc7ef6dfa200ec5c85024f Deleted: sha256:a90a0c05581f393e67d86f3456a165ad0392f89322a38145112ee34dd417f594 Deleted: sha256:c2a42fd1a5f607ce452ff89874eea77701bd3773b613f1302f0e28842ccf1cc6 Untagged: localhost:32970/kubevirt/virt-launcher:devel Untagged: localhost:32970/kubevirt/virt-launcher@sha256:a660dda0b5412d1b37926bd6b17f659228c472fc4a4e7cb46591a0ce60401269 Deleted: sha256:2861608f327e54815566d280b2541728e5c94bdf3057ca8ad17036448ab5c7e0 Deleted: sha256:fe3970c51fc0382c815cbd5db326fffd3a960db16b8dd824d16ebadf59657248 Deleted: sha256:a52e3cca833193b1e165638309d3464a7b4148cc43bf458beb69353c8cbf49e5 Deleted: sha256:55ec08693e14062f49c54f5185b3182089f407c411aac85564c556a98418ac75 Deleted: sha256:b9755606661dc593a3a70722522dd7f2759d7d8b7904dacae77d2590db39db13 Deleted: sha256:66d30855390f45785fc2d1ee3eff59e52fafccf35e1c629d7cb003e475159de7 Deleted: sha256:ec2845e1a586e2be08bdc4e3553d64805a6d1d41d0048d1b7614a700b4ea2b28 Deleted: sha256:3cb62d8928be3f1e77ba1c847365dad59e67dcb2a527c2f1d474e566716d34b8 Deleted: sha256:02d0a7469a6bcdcb98dacbc6f9f4bb815e11f3109c9852add68d98b88d53ea36 Deleted: sha256:125c98b4306ac8cade038b99dfb7fb27e65c377746d5b4b83ea492469ffa6e34 Deleted: sha256:c340d2cb6b6815f585de898a3bd2604c6be9d76739509ba982993b2845eddbd1 Deleted: sha256:e5b99c78460e38f1b7b9c95328b8b6f59bf3f9e1444ba201d67d81651bbaf225 Deleted: sha256:f30c9dc4c30a1b489d0b52c27887088bf32fe70da2194884f0af7826194adc27 Deleted: sha256:71dca85be0ef2ff00f13cc4a5f0fce6a713bebca8648db02d294bed97574c601 Deleted: sha256:50d9b66450581fd8e350df67f26b81547bf9c809a92182aa738d129c0e324e27 Deleted: sha256:2f992fb7c4647b6b16498944ec44c64a49205d1075916edf63c76e060ba8e18e Deleted: sha256:2f4e88ffe5c9a4f35da2a633628af57b8ecfaf1f9386b7a6c4c9e40dd4415eae Deleted: sha256:f9487a89368c758931320f4c193a34b13d4139028560b06f8e807d5a0c132fdc Deleted: sha256:9859ca0dba9ccc60ee95ea91bb18e57c33eaedeefb346fecf16ede89c0a547b2 Deleted: sha256:7d313990585627b16e4279e5b901807bbcc74a9df90ec5136a164fa7eb5a27eb Deleted: sha256:10b00130fafa1a6d8bb6a2649e0d9c8c55435331ff04ea1b65d76a20ada9a62e Deleted: sha256:a0d42f95f44fb0324a7cc6cf0719ab88d11bd777a899e36e07fddd162aea6651 Untagged: localhost:32970/kubevirt/virt-handler:devel Untagged: localhost:32970/kubevirt/virt-handler@sha256:84c4da78c91fbe3d597b0b55613247b326670b0b9e2f7c4cb418c51808f905f6 Deleted: sha256:203eff7ea8849d79e1b98b63bd1f8f7770fa2bd0cf06b0925b44405f3611b5e3 Deleted: sha256:d93e95addcc4b04fb2e853053fc53f219712db184b45287cb6d991c670fb3b25 Deleted: 
sha256:f2d88795e1bb306aa1e81459d8f84df7635323535f8d8bbe782ae8a0aa27e9e4 Deleted: sha256:b1e2b7902473fc4209dd5b7340ba118245309927538832af9617cb928ba0e26f Untagged: localhost:32970/kubevirt/virt-api:devel Untagged: localhost:32970/kubevirt/virt-api@sha256:e164afae7788030915d8d605233b1adaae661031b541f2c9c3a716648889b370 Deleted: sha256:80d751a3d6e513c88a45cf407c8ab2246b83fc328496fcac18b29b55ddd2929d Deleted: sha256:aa72dbedd089bb9b4c4ca6872f7c783db6611f7fdaa85ce0db28beabd44872db Deleted: sha256:2ced92c3bca70a467dd1a64923623a76deb31b62cd915916d3c0029d533e0c4b Deleted: sha256:0f8f88383c081a12237fb172b774dcd88d5ecfb97bcc0f13f4460d14d386eeb7 Untagged: localhost:32970/kubevirt/subresource-access-test:devel Untagged: localhost:32970/kubevirt/subresource-access-test@sha256:a66532c0855d8933369221585c41fbcfbf9fed6cbc83894019ee981b4a90279b Deleted: sha256:fc08cb147978db29a6c2b9bbf9961a9464924350e1184c78e562e1b9589312f2 Deleted: sha256:2e7366505c7ebbea7905edced6adf6f1926447ea94361221dd384d2babd37edc Deleted: sha256:885f13da661ec23bf421b6d9c1cdb9032276a62bbe0cd6817efc818b9a5ef3bf Deleted: sha256:5876e42b918fac8e2ac54e9c89c1fd3c2680f67bd7259f14aa7970b33086babc sha256:2df8b30e8f619e28e75e00ea9fa42c63f4f14b1c34fbb1223214102337507863 go version go1.10 linux/amd64 Waiting for rsyncd to be ready go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh sha256:2df8b30e8f619e28e75e00ea9fa42c63f4f14b1c34fbb1223214102337507863 go version go1.10 linux/amd64 go version go1.10 linux/amd64 find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory Compiling tests... 
compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 36.22 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 45ed71cd684b Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> ba8171a31e93 Step 5/8 : USER 1001 ---> Using cache ---> 6bd535be1fa1 Step 6/8 : COPY virt-controller /virt-controller ---> fd3f75d916ee Removing intermediate container 623c8fc784d0 Step 7/8 : ENTRYPOINT /virt-controller ---> Running in 68e05f8860d1 ---> 06e3ca1e7c89 Removing intermediate container 68e05f8860d1 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev1" '' "virt-controller" '' ---> Running in 8a5e0e24d118 ---> b5c11142b1a9 Removing intermediate container 8a5e0e24d118 Successfully built b5c11142b1a9 Sending build context to Docker daemon 38.15 MB Step 1/14 : FROM kubevirt/libvirt:3.7.0 ---> 60c80c8f7523 Step 2/14 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3bbd31ef6597 Step 3/14 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> b24e583fa448 Step 4/14 : COPY sock-connector /sock-connector ---> Using cache ---> 25d0cc0414fc Step 5/14 : COPY sh.sh /sh.sh ---> Using cache ---> e9c9e73584e6 Step 6/14 : COPY virt-launcher /virt-launcher ---> 101cd539d720 Removing intermediate container 9986eaf44694 Step 7/14 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> e9af57cc6f37 Removing intermediate container 84a9f058708a Step 8/14 : RUN chmod 0640 /etc/sudoers.d/kubevirt ---> Running in 661e4fbf6e93  ---> 52a192af609b Removing intermediate container 661e4fbf6e93 Step 9/14 : RUN rm -f /libvirtd.sh ---> Running in 085cb51091af  ---> 241a15abda87 Removing intermediate container 085cb51091af Step 10/14 : COPY libvirtd.sh /libvirtd.sh ---> 0b2cd46bc642 Removing intermediate container b222d44a2c90 Step 11/14 : RUN chmod a+x /libvirtd.sh ---> Running in 91cdf47dc88b  ---> e1083f3cc6d2 Removing intermediate container 91cdf47dc88b Step 12/14 : COPY entrypoint.sh /entrypoint.sh ---> 89edd2e2f8e4 Removing intermediate container 58d55f9b16e0 Step 13/14 : ENTRYPOINT /entrypoint.sh ---> Running in 21cd8febb0ed ---> b4fd0d9cce94 Removing intermediate container 21cd8febb0ed Step 14/14 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev1" '' "virt-launcher" '' ---> Running in 181caeb29784 ---> adf42e6fb829 Removing intermediate container 181caeb29784 Successfully built adf42e6fb829 Sending build context to Docker daemon 36.76 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/5 : COPY virt-handler /virt-handler ---> 1e0e34763577 Removing intermediate container 63d0922a3ba4 Step 4/5 : ENTRYPOINT /virt-handler ---> Running in 3dd8e7c494e4 ---> 767fa2d72ce3 Removing intermediate container 3dd8e7c494e4 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev1" '' "virt-handler" '' ---> Running in de3b2125b0c9 ---> b571c11f871d Removing intermediate container de3b2125b0c9 Successfully built b571c11f871d Sending build context to Docker daemon 36.97 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 12e3c00eb78f Step 4/8 : 
WORKDIR /home/virt-api ---> Using cache ---> cfb92cbbf126 Step 5/8 : USER 1001 ---> Using cache ---> f02f77c7a4fc Step 6/8 : COPY virt-api /virt-api ---> cce7a4c8f0d3 Removing intermediate container 6b8945026225 Step 7/8 : ENTRYPOINT /virt-api ---> Running in 7b43c3fc4546 ---> 2b963de4cb2b Removing intermediate container 7b43c3fc4546 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev1" '' "virt-api" '' ---> Running in 9b1e47574340 ---> 1eaa8cbfd963 Removing intermediate container 9b1e47574340 Successfully built 1eaa8cbfd963 Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:27 ---> 9110ae7f579f Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/7 : ENV container docker ---> Using cache ---> 1211fd5eb075 Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> ac806f8eae52 Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> e31eeb9c22c5 Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> ecb35f794669 Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.10.3-dev1" '' ---> Using cache ---> 20201e8cc27e Successfully built 20201e8cc27e Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/5 : ENV container docker ---> Using cache ---> 1211fd5eb075 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> 7b90d68258cd Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev1" '' "vm-killer" '' ---> Using cache ---> e43700219a3e Successfully built e43700219a3e Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> 4817bb6590f8 Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> b8b166db2544 Step 3/7 : ENV container docker ---> Using cache ---> 8b120f56086f Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> 61851ac93c11 Step 5/7 : ADD entry-point.sh / ---> Using cache ---> ada85930060d Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 6f2ffb0e7aed Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev1" '' "registry-disk-v1alpha" '' ---> Using cache ---> 95b0938020e7 Successfully built 95b0938020e7 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33085/kubevirt/registry-disk-v1alpha:devel ---> 95b0938020e7 Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> ca922b7619a7 Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> 5f1eca2c47d2 Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-dev1" '' ---> Using cache ---> aa87e94238f6 Successfully built aa87e94238f6 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33085/kubevirt/registry-disk-v1alpha:devel ---> 95b0938020e7 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 935c07a8d40b Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> 3bd8304376e8 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-dev1" '' ---> Using cache ---> 
6d4f72ceb2cb Successfully built 6d4f72ceb2cb Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33085/kubevirt/registry-disk-v1alpha:devel ---> 95b0938020e7 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 935c07a8d40b Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> d40669029b91 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-dev1" '' ---> Using cache ---> 126ca640b16b Successfully built 126ca640b16b Sending build context to Docker daemon 34.03 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> 62cf8151a5f3 Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 7df4da9e1b5d Step 5/8 : USER 1001 ---> Using cache ---> 3ee421ac4ad4 Step 6/8 : COPY subresource-access-test /subresource-access-test ---> 405325476a59 Removing intermediate container a96837de4ff9 Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in ffd1917e4713 ---> 126c415c90b0 Removing intermediate container ffd1917e4713 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev1" '' "subresource-access-test" '' ---> Running in ef2016919e3b ---> 7967efb31756 Removing intermediate container ef2016919e3b Successfully built 7967efb31756 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:27 ---> 9110ae7f579f Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/9 : ENV container docker ---> Using cache ---> 1211fd5eb075 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> 7ff1a45e3635 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> a05ebaed4a0f Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> cd8398be9593 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 71c7ecd55e24 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 9689e3184427 Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev1" '' "winrmcli" '' ---> Using cache ---> 9cc9275d8cc7 Successfully built 9cc9275d8cc7 hack/build-docker.sh push The push refers to a repository [localhost:33085/kubevirt/virt-controller] 6b690c8fd2c5: Preparing c0d2c4546d78: Preparing 39bae602f753: Preparing c0d2c4546d78: Pushed 6b690c8fd2c5: Pushed 39bae602f753: Pushed devel: digest: sha256:bbca752dc0d8fc22f522da7f57ab89fce53a3b85b58d7454a0d68f24851d7bfc size: 948 The push refers to a repository [localhost:33085/kubevirt/virt-launcher] 527c5cb77ece: Preparing 8af37c1615ab: Preparing 8af37c1615ab: Preparing 417898514f99: Preparing 1786e47e946e: Preparing 7472fc455915: Preparing 08d1c74eaf5f: Preparing 5e5a394712de: Preparing dcea01d1f799: Preparing 6a9c8a62fecd: Preparing 530cc55618cd: Preparing 34fa414dfdf6: Preparing a1359dc556dd: Preparing 490c7c373332: Preparing 4b440db36f72: Preparing 39bae602f753: Preparing 08d1c74eaf5f: Waiting 5e5a394712de: Waiting a1359dc556dd: Waiting dcea01d1f799: Waiting 490c7c373332: Waiting 4b440db36f72: Waiting 6a9c8a62fecd: Waiting 39bae602f753: Waiting 530cc55618cd: Waiting 34fa414dfdf6: Waiting 417898514f99: Pushed 8af37c1615ab: Pushed 7472fc455915: Pushed 527c5cb77ece: Pushed 1786e47e946e: Pushed 5e5a394712de: 
Pushed 530cc55618cd: Pushed dcea01d1f799: Pushed 34fa414dfdf6: Pushed a1359dc556dd: Pushed 39bae602f753: Mounted from kubevirt/virt-controller 490c7c373332: Pushed 08d1c74eaf5f: Pushed 6a9c8a62fecd: Pushed 4b440db36f72: Pushed devel: digest: sha256:fe117a643f0e07f8343f231cc051d4f73b1b23988d2e73be800a788e689b95a9 size: 3653 The push refers to a repository [localhost:33085/kubevirt/virt-handler] 7e37ad58f955: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-launcher 7e37ad58f955: Pushed devel: digest: sha256:7be7ebc4341b2e202564e47e3982cb834dcb884dc18b31ba6f3f2a1a21593b41 size: 740 The push refers to a repository [localhost:33085/kubevirt/virt-api] 8f32fbfbde34: Preparing ae4970287372: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-handler ae4970287372: Pushed 8f32fbfbde34: Pushed devel: digest: sha256:1a750c09c54eead1ebd640ba9ccee55938a1dcd867f41e7b5d8a11a71d8b98dc size: 948 The push refers to a repository [localhost:33085/kubevirt/disks-images-provider] 5c28b30e6fcd: Preparing 153871b39e50: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-api 5c28b30e6fcd: Pushed 153871b39e50: Pushed devel: digest: sha256:b93b9613b3ee226f4cdaad2e73c0ad5f4af7ef85e0c69562e55380cf5ceea111 size: 948 The push refers to a repository [localhost:33085/kubevirt/vm-killer] e3afff5758ce: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/disks-images-provider e3afff5758ce: Pushed devel: digest: sha256:6b50c4545b56f4c9c46aabcd79906788b06e5e14bd899c8b5f3d23c9d2533094 size: 740 The push refers to a repository [localhost:33085/kubevirt/registry-disk-v1alpha] 376d512574a4: Preparing 7971c2f81ae9: Preparing e7752b410e4c: Preparing 376d512574a4: Pushed 7971c2f81ae9: Pushed e7752b410e4c: Pushed devel: digest: sha256:1c53972d5ed0a49ec439491ba242dc02c94678ed4a038481081c247f3f02bc14 size: 948 The push refers to a repository [localhost:33085/kubevirt/cirros-registry-disk-demo] 3bb75f088338: Preparing 376d512574a4: Preparing 7971c2f81ae9: Preparing e7752b410e4c: Preparing 376d512574a4: Mounted from kubevirt/registry-disk-v1alpha e7752b410e4c: Mounted from kubevirt/registry-disk-v1alpha 7971c2f81ae9: Mounted from kubevirt/registry-disk-v1alpha 3bb75f088338: Pushed devel: digest: sha256:f5b0674b686e038dc0bf208dbba71b412000389cc199a8fd2151acd7b24eb8ff size: 1160 The push refers to a repository [localhost:33085/kubevirt/fedora-cloud-registry-disk-demo] 47c477b9e18b: Preparing 376d512574a4: Preparing 7971c2f81ae9: Preparing e7752b410e4c: Preparing e7752b410e4c: Mounted from kubevirt/cirros-registry-disk-demo 376d512574a4: Mounted from kubevirt/cirros-registry-disk-demo 7971c2f81ae9: Mounted from kubevirt/cirros-registry-disk-demo 47c477b9e18b: Pushed devel: digest: sha256:82a07fc74ba4cc71d9e0544e3f207f4a6ea97ae4998e3e824fb93cada41686d5 size: 1161 The push refers to a repository [localhost:33085/kubevirt/alpine-registry-disk-demo] 1a468e9bdf3d: Preparing 376d512574a4: Preparing 7971c2f81ae9: Preparing e7752b410e4c: Preparing 376d512574a4: Mounted from kubevirt/fedora-cloud-registry-disk-demo e7752b410e4c: Mounted from kubevirt/fedora-cloud-registry-disk-demo 7971c2f81ae9: Mounted from kubevirt/fedora-cloud-registry-disk-demo 1a468e9bdf3d: Pushed devel: digest: sha256:5a64b17c55d174194d5a59d30f328aed5ad28bcf1958ec4dae4bbd075e2ec47e size: 1160 The push refers to a repository [localhost:33085/kubevirt/subresource-access-test] 03f3d6caa479: Preparing 2aaca144a3e2: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/vm-killer 
2aaca144a3e2: Pushed 03f3d6caa479: Pushed devel: digest: sha256:adef6d7411ee4424cff43adcb412013c5a9878d52498011fdd4bd02fc1471212 size: 948 The push refers to a repository [localhost:33085/kubevirt/winrmcli] 3cd438b33e81: Preparing 8519683f2557: Preparing a29ba32ac0a1: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/subresource-access-test 3cd438b33e81: Pushed a29ba32ac0a1: Pushed 8519683f2557: Pushed devel: digest: sha256:b3d0809fc871fdcba8989a22d1e1e3c9e9f535e45f2e0daf4318adfec182ec71 size: 1165 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-dev ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-dev1 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-dev1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.6.1-2-g4158896 ++ KUBEVIRT_VERSION=v0.6.1-2-g4158896 + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ 
namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:33085/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... + cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl 
-n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pods -l kubevirt.io No resources found + _kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + 
cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ wc -l ++ cluster/k8s-1.10.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + 
echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-dev ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-dev1 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-dev1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.6.1-2-g4158896 ++ KUBEVIRT_VERSION=v0.6.1-2-g4158896 + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:33085/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates 
master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... + [[ -z k8s-1.10.3-dev ]] + [[ k8s-1.10.3-dev =~ .*-dev ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/dev -R + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/dev -R serviceaccount "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver-auth-delegator" created rolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created role.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller-cluster-admin" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-privileged-cluster-admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:edit" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:view" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:default" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt.io:default" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancereplicasets.kubevirt.io" created service "virt-api" created deployment.extensions "virt-api" created service "virt-controller" created deployment.extensions "virt-controller" created daemonset.extensions "virt-handler" created customresourcedefinition.apiextensions.k8s.io "virtualmachines.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstances.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancepresets.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "host-path-disk-alpine" created persistentvolumeclaim "disk-custom" created persistentvolume "host-path-disk-custom" created daemonset.extensions "disks-images-provider" created serviceaccount "kubevirt-testing" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-testing-cluster-admin" created + [[ k8s-1.10.3 =~ os-3.9.0.* ]] + echo Done Done + namespaces=(kube-system default) + [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]] + timeout=300 + sample=30 + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'virt-api-774777b88f-swkz6 0/1 ContainerCreating 0 2s virt-controller-7947b58bbc-k4t8t 0/1 ContainerCreating 0 2s virt-controller-7947b58bbc-nlqbd 0/1 ContainerCreating 0 2s virt-handler-c9c2j 0/1 
ContainerCreating 0 2s virt-handler-cjd7z 0/1 ContainerCreating 0 2s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ cluster/kubectl.sh get pods -n kube-system --no-headers
+ grep -v Running
disks-images-provider-4nd8k 0/1 ContainerCreating 0 2s
disks-images-provider-lt4h4 0/1 Pending 0 2s
virt-api-774777b88f-swkz6 0/1 ContainerCreating 0 4s
virt-controller-7947b58bbc-k4t8t 0/1 ContainerCreating 0 4s
virt-controller-7947b58bbc-nlqbd 0/1 ContainerCreating 0 4s
virt-handler-c9c2j 0/1 ContainerCreating 0 4s
virt-handler-cjd7z 0/1 ContainerCreating 0 4s
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n kube-system --no-headers
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
+ '[' -n '' ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME READY STATUS RESTARTS AGE
disks-images-provider-4nd8k 1/1 Running 0 38s
disks-images-provider-lt4h4 1/1 Running 0 38s
etcd-node01 1/1 Running 0 7m
kube-apiserver-node01 1/1 Running 0 7m
kube-controller-manager-node01 1/1 Running 0 7m
kube-dns-86f4d74b45-dlz7l 3/3 Running 0 8m
kube-flannel-ds-g4kq5 1/1 Running 0 8m
kube-flannel-ds-r8drm 1/1 Running 0 7m
kube-proxy-67kld 1/1 Running 0 7m
kube-proxy-jg95m 1/1 Running 0 8m
kube-scheduler-node01 1/1 Running 0 7m
virt-api-774777b88f-swkz6 1/1 Running 0 40s
virt-controller-7947b58bbc-k4t8t 1/1 Running 0 40s
virt-controller-7947b58bbc-nlqbd 1/1 Running 0 40s
virt-handler-c9c2j 1/1 Running 0 40s
virt-handler-cjd7z 1/1 Running 0 40s
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n default --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n default --no-headers
No resources found.
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
+ '[' -n '' ']'
+ kubectl get pods -n default
+ cluster/kubectl.sh get pods -n default
No resources found.
+ kubectl version
+ cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/junit.xml'
+ [[ -d /home/nfs/images/windows2016 ]]
+ FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/junit.xml'
+ make functest
hack/dockerized "hack/build-func-tests.sh"
sha256:2df8b30e8f619e28e75e00ea9fa42c63f4f14b1c34fbb1223214102337507863
go version go1.10 linux/amd64
go version go1.10 linux/amd64
Compiling tests...
compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1530277426 Will run 126 of 126 specs • ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to start a vmi [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:132 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1256 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to stop a running vmi [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:138 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1256 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:149 should have correct UUID /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:191 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1256 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.018 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:149 should have pod IP /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:207 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1256 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.006 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:225 should succeed to start a vmi /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:241 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1256 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:225 should succeed to stop a vmi /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:249 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1256 ------------------------------ • [SLOW TEST:110.939 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting and stopping the same VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91 should success multiple times /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92 ------------------------------ • [SLOW TEST:15.167 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112 should not 
modify the spec on status update /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113 ------------------------------ • [SLOW TEST:24.668 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting multiple VMIs /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130 should success /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131 ------------------------------ • [SLOW TEST:117.088 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance attached to implicit pod network /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:363 should be able to reach /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 the Inbound VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •••••• ------------------------------ • [SLOW TEST:5.101 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance attached to implicit pod network /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:363 with a service matching the vmi exposed /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:276 should fail to reach the vmi if an invalid servicename is used /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:307 ------------------------------ ••••••• ------------------------------ • Failure [6.131 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance attached to explicit pod network /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:366 with a service matching the vmi exposed /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:276 should be able to reach the vmi based on labels specified on the vmi [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:296 Expected : Failed to equal : Succeeded /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:305 ------------------------------ STEP: starting a pod which tries to reach the vmi via the defined service STEP: waiting for the pod to report a successful connection attempt level=info timestamp=2018-06-29T13:08:59.957180Z pos=vmi_networking_test.go:69 component=tests msg="[43 43 32 104 101 97 100 32 45 110 32 49 10 43 43 43 32 110 99 32 109 121 115 101 114 118 105 99 101 46 107 117 98 101 118 105 114 116 45 116 101 115 116 45 100 101 102 97 117 108 116 32 49 53 48 48 32 45 105 32 49 32 45 119 32 49 10 78 99 97 116 58 32 67 111 110 110 101 99 116 105 111 110 32 116 105 109 101 100 32 111 117 116 46 10 43 32 120 61 10 43 32 101 99 104 111 32 39 39 10 43 32 39 91 39 32 39 39 32 61 32 39 72 101 108 108 111 32 87 111 114 108 100 33 39 32 39 93 39 10 43 32 101 99 104 111 32 102 97 105 108 101 100 10 43 32 101 120 105 116 32 49 10 10 102 97 105 108 101 100 10]" • [SLOW TEST:5.160 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance attached to explicit pod network /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:366 with a service matching the vmi exposed /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:276 should fail to reach the vmi if an invalid servicename is used /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:307 
------------------------------ • ------------------------------ • [SLOW TEST:60.982 seconds] Networking /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48 VirtualMachineInstance with custom interface model /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:385 should expose the right device type to the guest /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:386 ------------------------------ ••• ------------------------------ • [SLOW TEST:15.202 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should update VirtualMachine once VMIs are up /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:195 ------------------------------ •• ------------------------------ • [SLOW TEST:78.600 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should recreate VirtualMachineInstance if it gets deleted /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:245 ------------------------------ • [SLOW TEST:44.493 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should recreate VirtualMachineInstance if the VirtualMachineInstance's pod gets deleted /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:265 ------------------------------ • [SLOW TEST:75.683 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should stop VirtualMachineInstance if running set to false /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:325 ------------------------------ • [SLOW TEST:255.449 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should start and stop VirtualMachineInstance multiple times /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:333 ------------------------------ • [SLOW TEST:104.580 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should not update the VirtualMachineInstance spec if Running /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:346 ------------------------------ • [SLOW TEST:345.469 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should survive guest shutdown, multiple times /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:387 ------------------------------ • [SLOW TEST:15.395 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435 should start a VirtualMachineInstance once /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:436 ------------------------------ • [SLOW TEST:44.424 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435 should stop a VirtualMachineInstance once /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:467 
------------------------------ •••• ------------------------------ • [SLOW TEST:5.134 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should scale /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 to three, to two and then to zero replicas /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:12.166 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should scale /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 to five, to six and then to zero replicas /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •• ------------------------------ • [SLOW TEST:17.400 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should update readyReplicas once VMIs are up /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:157 ------------------------------ •• ------------------------------ • [SLOW TEST:5.453 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should not scale when paused and scale when resume /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:223 ------------------------------ • [SLOW TEST:52.673 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a cirros image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 should return that we are running cirros /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67 ------------------------------ • [SLOW TEST:57.494 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a fedora image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:76 should return that we are running fedora /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:77 ------------------------------ • [SLOW TEST:54.930 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 should be able to reconnect to console multiple times /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:86 ------------------------------ • ------------------------------ • [SLOW TEST:14.691 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:59 should start it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:65 ------------------------------ • [SLOW TEST:15.863 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:59 should attach virt-launcher to it /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:71 ------------------------------ •••• ------------------------------ • 
[SLOW TEST:51.735 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:59 with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:159 should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 Alpine as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:24.706 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:59 with boot order /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:159 should be able to boot from selected disk /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 Cirros as first boot /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:15.527 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:59 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:186 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:187 should retry starting the VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:188 ------------------------------ • [SLOW TEST:14.526 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:59 with user-data /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:186 without k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:187 should log warning and proceed once the secret is there /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:218 ------------------------------ • [SLOW TEST:36.368 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:59 when virt-launcher crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:266 should be stopped and have Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:267 ------------------------------ • [SLOW TEST:22.189 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:59 when virt-handler crashes /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:289 should recover and continue management /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:290 ------------------------------ • [SLOW TEST:106.465 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:59 when virt-handler is responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:319 should indicate that a node is ready for vmis /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:320 ------------------------------ • [SLOW TEST:87.336 seconds] VMIlifecycle 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:59 when virt-handler is not responsive /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:350 the node controller should react /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:388 ------------------------------ S [SKIPPING] [0.057 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:59 with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:441 should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 kubevirt-test-default [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Skip log query tests for JENKINS ci test environment /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:446 ------------------------------ S [SKIPPING] [0.060 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:59 with non default namespace /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:441 should log libvirt start and stop lifecycle events of the domain /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 kubevirt-test-alternative [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Skip log query tests for JENKINS ci test environment /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:446 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.037 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:59 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:502 should enable emulation in virt-launcher [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:522 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:518 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Creating a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:59 VirtualMachineInstance Emulation Mode /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:502 should be reflected in domain XML [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:559 Software emulation is not enabled on this cluster /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:518 ------------------------------ • ------------------------------ • [SLOW TEST:16.571 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Delete a VirtualMachineInstance's Pod /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:618 should result in the VirtualMachineInstance moving to a finalized state /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:619 ------------------------------ • [SLOW TEST:33.837 seconds] VMIlifecycle 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:650 with an active pod. /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:651 should result in pod being terminated /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:652 ------------------------------ • [SLOW TEST:20.818 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Delete a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:650 with grace period greater than 0 /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:675 should run graceful shutdown /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:676 ------------------------------ • [SLOW TEST:29.998 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Killed VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:727 should be in Failed phase /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:728 ------------------------------ • [SLOW TEST:25.184 seconds] VMIlifecycle /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:45 Killed VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:727 should be left alone by virt-handler /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:755 ------------------------------ • [SLOW TEST:50.065 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:41 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:65 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:66 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:48.220 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:41 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:65 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:66 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with CDRom PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:129.652 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:41 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:65 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:66 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:138.059 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:41 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:65 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:66 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with CDRom PVC 
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:51.186 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:41 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:65 With an emptyDisk defined /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:119 should create a writeable emptyDisk with the right capacity /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:121 ------------------------------ • [SLOW TEST:49.228 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:41 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:65 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:169 should be successfully started /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:171 ------------------------------ • [SLOW TEST:108.860 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:41 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:65 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:169 should not persist data /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:188 ------------------------------ • [SLOW TEST:140.531 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:41 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:65 With VirtualMachineInstance with two PVCs /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:246 should start vmi multiple times /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:258 ------------------------------ Service cluster-ip-vm successfully exposed for virtualmachineinstance testvmid54cm • [SLOW TEST:56.801 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:58 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:66 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:73 Should expose a Cluster IP service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:76 ------------------------------ Service node-port-vm successfully exposed for virtualmachineinstance testvmid54cm • [SLOW TEST:6.143 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:58 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:66 Expose NodePort service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:103 Should expose a NodePort service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:108 ------------------------------ Service cluster-ip-udp-vm successfully exposed for virtualmachineinstance testvmihrdtj • [SLOW TEST:57.252 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:58 Expose UDP service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:145 Expose ClusterIP UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:152 Should expose a ClusterIP service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:156 ------------------------------ Service node-port-udp-vm successfully exposed for virtualmachineinstance testvmihrdtj • [SLOW TEST:8.172 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:58 Expose UDP service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:145 Expose NodePort UDP service 
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:184 Should expose a NodePort service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:189 ------------------------------ Service cluster-ip-vmrs successfully exposed for vmirs replicasetph875 • [SLOW TEST:60.824 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:58 Expose service on a VM replica set /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:232 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:265 Should create a ClusterIP service on VMRS and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:269 ------------------------------ Service cluster-ip-ovm successfully exposed for virtualmachine testvmi5wzl8 • [SLOW TEST:62.973 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:58 Expose service on an Offline VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:297 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:341 Connect to ClusterIP services that was set when VM was offline /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:342 ------------------------------ • [SLOW TEST:37.118 seconds] LeaderElection /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:43 Start a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:53 when the controller pod is not running /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:54 should success /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:55 ------------------------------ volumedisk0 compute • [SLOW TEST:50.346 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:43 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:54 with 3 CPU cores /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 should report 3 cpu cores under guest OS /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:61 ------------------------------ • [SLOW TEST:15.642 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:43 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:54 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:113 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-2Mi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ S [SKIPPING] [0.221 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:43 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:54 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:113 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-1Gi [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 No node with hugepages hugepages-1Gi capacity /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:165 ------------------------------ • ------------------------------ • [SLOW TEST:50.759 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:43 New VirtualMachineInstance 
with all supported drives /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:243 should have all the device nodes /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:266 ------------------------------ • [SLOW TEST:5.714 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify only admin role has access only to kubevirt-config /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:42 ------------------------------ • [SLOW TEST:5.522 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:6.050 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given an vm /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:6.252 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi preset /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:5.958 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi replica set /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:20.312 seconds] VNC /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:37 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:48 with VNC connection /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:49 should allow accessing the VNC device /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:50 ------------------------------ • [SLOW TEST:55.722 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81 should have cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:82 ------------------------------ • [SLOW TEST:164.961 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81 with injected ssh-key /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:92 should have ssh-key under authorized keys /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:93 ------------------------------ • [SLOW TEST:54.998 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userData source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:118 should process provided cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:119 ------------------------------ • [SLOW TEST:50.135 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 should take user-data from k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:161 ------------------------------ •••••••••••• ------------------------------ • [SLOW TEST:60.788 seconds] Health Monitoring /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:37 A VirtualMachineInstance with a watchdog device /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:56 should be shut down when the watchdog expires /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:57 ------------------------------ Waiting for namespace kubevirt-test-default to be removed, this can take a while ... Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ... Summarizing 1 Failure: [Fail] Networking VirtualMachineInstance attached to explicit pod network with a service matching the vmi exposed [It] should be able to reach the vmi based on labels specified on the vmi /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:305 Ran 115 of 126 Specs in 3715.696 seconds FAIL! -- 114 Passed | 1 Failed | 0 Pending | 11 Skipped --- FAIL: TestTests (3715.71s) FAIL make: *** [functest] Error 1 + make cluster-down ./cluster/down.sh