+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ [[ k8s-1.10.3-dev =~ openshift-.* ]]
+ [[ k8s-1.10.3-dev =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.11.0
+ KUBEVIRT_PROVIDER=k8s-1.11.0
+ echo 1
automation/test.sh: line 46: /proc/sys/net/bridge/bridge-nf-call-iptables: Permission denied
+ true
+ echo 1
automation/test.sh: line 47: /proc/sys/net/ipv4/ip_forward: Permission denied
+ true
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/25 07:52:46 Waiting for host: 192.168.66.101:22
2018/07/25 07:52:49 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/25 07:52:57 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/25 07:53:02 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: connection refused. Sleeping 5s
2018/07/25 07:53:07 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0725 07:53:08.054249 1236 feature_gate.go:230] feature gates: &{map[]}
I0725 07:53:08.166664 1236 kernel_validator.go:81] Validating kernel version
I0725 07:53:08.166870 1236 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 47.503697 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation
[bootstraptoken] using token: abcdef.1234567890123456
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:015684026b3fe2f0c3f47b983c9be502b8cf49dcd4aaf99743caec878f19cd77

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node/node01 untainted
2018/07/25 07:54:09 Waiting for host: 192.168.66.102:22
2018/07/25 07:54:12 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/25 07:54:24 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support

I0725 07:54:24.979349 1234 kernel_validator.go:81] Validating kernel version
I0725 07:54:24.979634 1234 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 38739968 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
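A note on the join above: the worker is added with --discovery-token-unsafe-skip-ca-verification=true, which is fine for a throwaway CI cluster but skips the TLS pinning that the hash-carrying join command printed by kubeadm init provides. The pinned value can be reproduced on the master with the standard openssl recipe from the kubeadm documentation:

    # Compute the sha256 pin of the cluster CA public key, as expected by
    # 'kubeadm join --discovery-token-ca-cert-hash sha256:<hash>'.
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'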
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01   Ready   master   48s   v1.11.0
node02   Ready   <none>   25s   v1.11.0
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ grep NotReady
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    48s       v1.11.0
node02    Ready     <none>    25s       v1.11.0
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33247/kubevirt/virt-controller:devel
Untagged: localhost:33247/kubevirt/virt-controller@sha256:c1761e3fea2ac2bfbc98330217ad90c7155eed98356499e577a309d1b3522d03
Deleted: sha256:9be0b88b3a74363c75452124dfc133931eb105fc61186f745e2ce2b98c556e44
Deleted: sha256:913f20c675032fa0f36c56eb4c2433caabd35c187215657e9bbaa592422e4b5a
Deleted: sha256:5e6c431ebca37d1721c03bd8b7abc9cb4a44b6a046469c9567524ab3afc8b5ae
Untagged: localhost:33247/kubevirt/virt-launcher:devel
Untagged: localhost:33247/kubevirt/virt-launcher@sha256:32c773534b832ac577471f1ea938f8e5d9b09dba02c24e18683ede432580348e
Deleted: sha256:c4d3ddda84eb1078dc472d038f4a3960c1deb0a2bc9007e21c9128e9406d126d
Deleted: sha256:6acb040fede3bd70c8d4950828751f5aaec665654f48f766d290083badff4145
Deleted: sha256:fe0a5ed0aee8381cb022b6bb90cfd80521a4708b2158fcdce285ba50ad554e41
Deleted: sha256:ddef03a0f01a04e7d1bc6f5e994558e948d8f7de63c55a2c13394cf588339cf1
Deleted: sha256:975d747a22c0fa11b042e64822aaf99d5709593b0c15c1079e1558351c79f12e
Deleted: sha256:0d9076b7ac75ff2780ee817f296440e85d32840f7a678f9b3ad3d04abfe00f3c
Deleted: sha256:b7f8ea30f4f0bdd52d97bbab480615c1302de9c267a264da858313f83b497001
Deleted: sha256:c9530741881d5a5eb754e8cfabe706cbe6e30fef1f5b3dfb6586a18aae785248
Deleted: sha256:cc4fb9e2d00ce1b34d7fad6c6f4518348fdc59439c794e01aa56cd55e07d80c1
Deleted: sha256:0f4fc148ac572e21ae07ea60798b09af0229cafc3d1dc70d0450d789d902ea8e
Deleted: sha256:a532050ea9da5a3a382256770d8731e2f79bb066f47427e6f898ec11208c07e6
Untagged: localhost:33247/kubevirt/virt-handler:devel
Untagged: localhost:33247/kubevirt/virt-handler@sha256:d8132b812ffff6d886a47027ea32cb765a69ab66ecbc0fbe2015d2a09052de6d
Deleted: sha256:c87468963ff96813c2572c8b729b756de891dd9b7092f354396fdfe208f268d2
Deleted: sha256:9ebd742131e7ccdea4e031f407c1fa44f92a376f9176a852ab429b95c752fa9a
Deleted: sha256:16e692d2a846ebc313dfad8f42eb444ff50fc6f85a3c7ffe280a45a5a695fbe0
Untagged: localhost:33247/kubevirt/virt-api:devel
Untagged: localhost:33247/kubevirt/virt-api@sha256:d2afa48a535af8f3a5afd612ee9f9a06719f5a6b55d32fb7e4b108f7556646dc
Deleted: sha256:276db35bc0e2840b50915148939de05d96a15f3420d7811922828f4cbc3d632b
Deleted: sha256:c30dc3a3407ec1e167c36f322d46be57130f676570fd4d9302d9e2b9763e5cd9
Deleted: sha256:9bb7c5025048bdd5c87c2b3dc8380498e52c8c42679c48558ff995b09bef4fb7
Untagged: localhost:33247/kubevirt/subresource-access-test:devel
Untagged: localhost:33247/kubevirt/subresource-access-test@sha256:a986fff428d4fd8a2e369f15ef2140229010046aa67ba03ecf4c4094118b7f5f
Deleted: sha256:2e2652bb9f6825e13826218274ee21569f09feb1198cb5d63e2cf6f091aa749d
Deleted: sha256:b74dfc76c2fb7abd75af958c7eb02ab60cc2018b7f999574fab82ab717ac86e5
Deleted: sha256:71ae8c70294bd0cbc5bf019e389f03511f671dc98ee384f6c57f675f118f1a92
Untagged: localhost:33247/kubevirt/example-hook-sidecar:devel
Untagged: localhost:33247/kubevirt/example-hook-sidecar@sha256:47a2f20500f6adc0d926af752e0609b438630ab3ffae193003ce00145be20c88
Deleted: sha256:a232613ea41ff860d101aed571390b9a72e4b3c7a88c98f88708a3ce28f36130
Deleted: sha256:14fd1a614996f0c725ba3b79d62818f60bf925b81f415aba0911fe538e7ff23b
Deleted: sha256:5e58404d74f0ec348a9b02b4ee0f26a97114dddc457a04ed912d0388adc7537c
sha256:eac86de70a4e6cb392340c5eb3c9e29aa4eee64229c68e6e8a3ba9514fb773e5
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:eac86de70a4e6cb392340c5eb3c9e29aa4eee64229c68e6e8a3ba9514fb773e5
go version go1.10 linux/amd64
go version go1.10 linux/amd64
find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory
Compiling tests...
compiled tests.test
hack/build-docker.sh build
Sending build context to Docker daemon 40.35 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
 ---> Using cache
 ---> 401035e513d8
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller
 ---> Using cache
 ---> 138d4f372f95
Step 4/8 : WORKDIR /home/virt-controller
 ---> Using cache
 ---> a5be079f2ad5
Step 5/8 : USER 1001
 ---> Using cache
 ---> a8da3331f8c9
Step 6/8 : COPY virt-controller /usr/bin/virt-controller
 ---> 784a16ad0c63
Removing intermediate container d40a19a3bbcc
Step 7/8 : ENTRYPOINT /usr/bin/virt-controller
 ---> Running in cc6d2e1662dd
 ---> 75b981f3c5f6
Removing intermediate container cc6d2e1662dd
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "virt-controller" ''
 ---> Running in 10d39299ec57
 ---> 3d9a8df5c2f6
Removing intermediate container 10d39299ec57
Successfully built 3d9a8df5c2f6
Sending build context to Docker daemon 42.63 MB
Step 1/10 : FROM kubevirt/libvirt:4.2.0
 ---> 5f0bfe81a3e0
Step 2/10 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
 ---> Using cache
 ---> dc5562afdf06
Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107
 ---> Using cache
 ---> 67916fb6391a
Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher
 ---> b75d27cbac37
Removing intermediate container ac19b8cbdba1
Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt
 ---> 33fbaad1f936
Removing intermediate container 292aae1af09b
Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64
 ---> Running in ec71678b6d4a
 ---> 7c571cb443d8
Removing intermediate container ec71678b6d4a
Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher
 ---> Running in d5cec41e90c1
 ---> 06f832fc2dc6
Removing intermediate container d5cec41e90c1
Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector /usr/share/kubevirt/virt-launcher/
 ---> 0ab254dcdc62
Removing intermediate container 33353de0b5d1
Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh
 ---> Running in 28b03d435bc7
 ---> e20994340a4a
Removing intermediate container 28b03d435bc7
Step 10/10 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "virt-launcher" ''
 ---> Running in adf36908e0ad
 ---> 7f86fa53eae5
Removing intermediate container adf36908e0ad
Successfully built 7f86fa53eae5
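Step 6/10 of the virt-launcher build above lets qemu bind privileged ports without running as root by attaching a file capability to the binary. A minimal sketch of the same technique outside a Dockerfile (paths assume the Fedora-based virt-launcher image):

    # Grant the capability in the file's extended attributes (effective,
    # inheritable, permitted), then verify it took effect.
    setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64
    getcap /usr/bin/qemu-system-x86_64
    # expected output: /usr/bin/qemu-system-x86_64 = cap_net_bind_service+eip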
Sending build context to Docker daemon 41.65 MB
Step 1/5 : FROM fedora:28
 ---> cc510acfcd70
Step 2/5 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
 ---> Using cache
 ---> 401035e513d8
Step 3/5 : COPY virt-handler /usr/bin/virt-handler
 ---> 881b2db71ec1
Removing intermediate container a29f0e0ddc08
Step 4/5 : ENTRYPOINT /usr/bin/virt-handler
 ---> Running in 4226f0bcd143
 ---> 8bce47d47099
Removing intermediate container 4226f0bcd143
Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "virt-handler" ''
 ---> Running in 1ac397ac97b3
 ---> 15f32eafcc36
Removing intermediate container 1ac397ac97b3
Successfully built 15f32eafcc36
Sending build context to Docker daemon 38.75 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
 ---> Using cache
 ---> 401035e513d8
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api
 ---> Using cache
 ---> 425f1b8d360c
Step 4/8 : WORKDIR /home/virt-api
 ---> Using cache
 ---> 109325fd6af7
Step 5/8 : USER 1001
 ---> Using cache
 ---> e638e9684a2f
Step 6/8 : COPY virt-api /usr/bin/virt-api
 ---> 10b1319d6bdc
Removing intermediate container ac91e4a3a1f8
Step 7/8 : ENTRYPOINT /usr/bin/virt-api
 ---> Running in ef51e6ad0743
 ---> 789e9958dfb5
Removing intermediate container ef51e6ad0743
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "virt-api" ''
 ---> Running in f56997d737ca
 ---> 23bc503453ea
Removing intermediate container f56997d737ca
Successfully built 23bc503453ea
Sending build context to Docker daemon 4.096 kB
Step 1/7 : FROM fedora:28
 ---> cc510acfcd70
Step 2/7 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
 ---> Using cache
 ---> 401035e513d8
Step 3/7 : ENV container docker
 ---> Using cache
 ---> c41fed4a1333
Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img
 ---> Using cache
 ---> 940d88594d2e
Step 5/7 : ADD entrypoint.sh /
 ---> Using cache
 ---> 923b84390ce2
Step 6/7 : CMD /entrypoint.sh
 ---> Using cache
 ---> e9ddd62d459f
Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.10.3-dev0" ''
 ---> Using cache
 ---> 05ced083218a
Successfully built 05ced083218a
Sending build context to Docker daemon 2.56 kB
Step 1/5 : FROM fedora:28
 ---> cc510acfcd70
Step 2/5 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
 ---> Using cache
 ---> 401035e513d8
Step 3/5 : ENV container docker
 ---> Using cache
 ---> c41fed4a1333
Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all
 ---> Using cache
 ---> 944f01c7c457
Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "vm-killer" ''
 ---> Using cache
 ---> ecc928dbf04d
Successfully built ecc928dbf04d
Sending build context to Docker daemon 5.12 kB
Step 1/7 : FROM debian:sid
 ---> 68f33cf86aab
Step 2/7 : MAINTAINER "David Vossel" <dvossel@redhat.com>
 ---> Using cache
 ---> 760b7aedd755
Step 3/7 : ENV container docker
 ---> Using cache
 ---> 242765a70aa0
Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> b671cb63e24f
Step 5/7 : ADD entry-point.sh /
 ---> Using cache
 ---> 96395ae20289
Step 6/7 : CMD /entry-point.sh
 ---> Using cache
 ---> 281b61469fe1
Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "registry-disk-v1alpha" ''
 ---> Using cache
 ---> 1370dd9005f4
Successfully built 1370dd9005f4
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33415/kubevirt/registry-disk-v1alpha:devel
 ---> 1370dd9005f4
Step 2/4 : MAINTAINER "David Vossel" <dvossel@redhat.com>
 ---> Using cache
 ---> 29eb9a168d98
Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img
 ---> Using cache
 ---> 90977b462f6c
Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-dev0" ''
 ---> Using cache
 ---> 53af2f582c92
Successfully built 53af2f582c92
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33415/kubevirt/registry-disk-v1alpha:devel
 ---> 1370dd9005f4
Step 2/4 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
 ---> Using cache
 ---> 5e66c2c72381
Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2
 ---> Using cache
 ---> b6fea72eee5d
Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-dev0" ''
 ---> Using cache
 ---> f2d829eb650d
Successfully built f2d829eb650d
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33415/kubevirt/registry-disk-v1alpha:devel
 ---> 1370dd9005f4
Step 2/4 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
 ---> Using cache
 ---> 5e66c2c72381
Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso
 ---> Using cache
 ---> e5ea561040d8
Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-dev0" ''
 ---> Using cache
 ---> d64388c7f2cb
Successfully built d64388c7f2cb
Sending build context to Docker daemon 35.56 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
 ---> Using cache
 ---> 401035e513d8
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl
 ---> Using cache
 ---> 8ded2e37f9da
Step 4/8 : WORKDIR /home/virtctl
 ---> Using cache
 ---> 2baf0c61c4e7
Step 5/8 : USER 1001
 ---> Using cache
 ---> cddb35bbdd8e
Step 6/8 : COPY subresource-access-test /subresource-access-test
 ---> e80757f87af9
Removing intermediate container eab05a46decd
Step 7/8 : ENTRYPOINT /subresource-access-test
 ---> Running in b330c83a92ef
 ---> 9f45ddde289b
Removing intermediate container b330c83a92ef
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "subresource-access-test" ''
 ---> Running in 00c611135cf6
 ---> dc743ae7849a
Removing intermediate container 00c611135cf6
Successfully built dc743ae7849a
Sending build context to Docker daemon 3.072 kB
Step 1/9 : FROM fedora:28
 ---> cc510acfcd70
Step 2/9 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
 ---> Using cache
 ---> 401035e513d8
Step 3/9 : ENV container docker
 ---> Using cache
 ---> c41fed4a1333
Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all
 ---> Using cache
 ---> 4f9ac85fbee5
Step 5/9 : ENV GIMME_GO_VERSION 1.9.2
 ---> Using cache
 ---> 788ec0618eab
Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> cc3ff134b422
Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> cd908bbed6a4
Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli
 ---> Using cache
 ---> 1630fb4c77d9
Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "winrmcli" ''
 ---> Using cache
 ---> ba46e8b0fbb0
Successfully built ba46e8b0fbb0
Sending build context to Docker daemon 36.77 MB
Step 1/5 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/5 : MAINTAINER "The KubeVirt Project" <kubevirt-dev@googlegroups.com>
 ---> Using cache
 ---> 43cfafb0eafc
Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar
 ---> 7e0201cd688f
Removing intermediate container 0cdfc7d5a3fb
Step 4/5 : ENTRYPOINT /example-hook-sidecar
 ---> Running in 12ba121958c2
 ---> 66a59cc6304e
Removing intermediate container 12ba121958c2
Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-k8s-1.10.3-dev0" ''
 ---> Running in dcd71fff9a92
 ---> 0dfdd5b115d0
Removing intermediate container dcd71fff9a92
Successfully built 0dfdd5b115d0
hack/build-docker.sh push
The push refers to a repository [localhost:33415/kubevirt/virt-controller]
4359aab770f4: Preparing
291a040d9067: Preparing
891e1e4ef82a: Preparing
291a040d9067: Retrying in 5 seconds
291a040d9067: Retrying in 4 seconds
291a040d9067: Retrying in 3 seconds
291a040d9067: Retrying in 2 seconds
4359aab770f4: Pushed
291a040d9067: Retrying in 1 second
291a040d9067: Pushed
891e1e4ef82a: Pushed
devel: digest: sha256:6c16c55724ab9c485cd9596699cf685f22b89bae7b648f7dde1359ab627680c0 size: 949
The push refers to a repository [localhost:33415/kubevirt/virt-launcher]
1470be244a1d: Preparing
ea9edfd7346e: Preparing
83168ae5f69b: Preparing
943450354a5e: Preparing
0496818c1900: Preparing
03cf24bfe08c: Preparing
da38cf808aa5: Preparing
b83399358a92: Preparing
186d8b3e4fd8: Preparing
fa6154170bf5: Preparing
5eefb9960a36: Preparing
891e1e4ef82a: Preparing
186d8b3e4fd8: Waiting
fa6154170bf5: Waiting
da38cf808aa5: Waiting
5eefb9960a36: Waiting
b83399358a92: Waiting
891e1e4ef82a: Waiting
03cf24bfe08c: Waiting
ea9edfd7346e: Pushed
943450354a5e: Pushed
1470be244a1d: Pushed
da38cf808aa5: Pushed
b83399358a92: Pushed
186d8b3e4fd8: Pushed
fa6154170bf5: Pushed
83168ae5f69b: Pushed
891e1e4ef82a: Mounted from kubevirt/virt-controller
0496818c1900: Pushed
03cf24bfe08c: Pushed
5eefb9960a36: Pushed
devel: digest: sha256:c63e1e6b4b9a91f6db78c75d94e73656b6984b5a3e2ae71d5e21aa434ed3b462 size: 2828
The push refers to a repository [localhost:33415/kubevirt/virt-handler]
30f49dc21f9a: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-launcher
30f49dc21f9a: Pushed
devel: digest: sha256:e641663974c761b93a7b1e8d98f705f114274b14ce7ee2ed1a5954a3ecedcc7c size: 741
The push refers to a repository [localhost:33415/kubevirt/virt-api]
c4017010d3eb: Preparing
c1418c9009fc: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-handler
c1418c9009fc: Pushed
c4017010d3eb: Pushed
devel: digest: sha256:d67bd583a75683454b717355ee8f283191a4d73e10f86151f969a3a0d4f86ff7 size: 948
The push refers to a repository [localhost:33415/kubevirt/disks-images-provider]
080f4f9db6ce: Preparing
7270498e55cc: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-api
080f4f9db6ce: Pushed
7270498e55cc: Pushed
devel: digest: sha256:cb94c24feeb8ecbee5fbd990f6d8a0cbe9566e9fd7c1a0d897e5a61addd98ab5 size: 948
The push refers to a repository [localhost:33415/kubevirt/vm-killer]
68a997c47b9c: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/disks-images-provider
68a997c47b9c: Pushed
devel: digest: sha256:6b9412e3f7045c9ce0ffa7f2b7bbeb462bc0ac88923332fa511cf32c4c07331a size: 740
The push refers to a repository [localhost:33415/kubevirt/registry-disk-v1alpha]
0905ff81ba68: Preparing
0be79cca88bb: Preparing
25edbec0eaea: Preparing
0905ff81ba68: Pushed
0be79cca88bb: Pushed
25edbec0eaea: Pushed
devel: digest: sha256:866283d8b18527730964e2445dd0bea2943f10baccd2499bf5e8e69928aa3edf size: 948
The push refers to a repository [localhost:33415/kubevirt/cirros-registry-disk-demo]
80c6ea26cfde: Preparing
0905ff81ba68: Preparing
0be79cca88bb: Preparing
25edbec0eaea: Preparing
0905ff81ba68: Mounted from kubevirt/registry-disk-v1alpha
25edbec0eaea: Mounted from kubevirt/registry-disk-v1alpha
0be79cca88bb: Mounted from kubevirt/registry-disk-v1alpha
80c6ea26cfde: Pushed
devel: digest: sha256:12748e89366d6f009fdabfddf1ee6917d0eabe16955ff0458b9ff2a91bc03b4d size: 1160
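The "Retrying in N seconds" lines above are the Docker client's own backoff while the job's local registry becomes reachable. If a push ever needs to be wrapped externally in a CI script, a minimal retry loop looks like this (an illustrative sketch, not part of the KubeVirt scripts shown in this log):

    # Retry 'docker push' with a fixed backoff.
    push_with_retry() {
        local image=$1 attempts=${2:-5}
        local i
        for ((i = 1; i <= attempts; i++)); do
            docker push "$image" && return 0
            echo "push of $image failed (attempt $i/$attempts), sleeping 5s" >&2
            sleep 5
        done
        return 1
    }

    push_with_retry localhost:33415/kubevirt/virt-controller:devel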
The push refers to a repository [localhost:33415/kubevirt/fedora-cloud-registry-disk-demo]
ce1467aaf4d1: Preparing
0905ff81ba68: Preparing
0be79cca88bb: Preparing
25edbec0eaea: Preparing
0905ff81ba68: Waiting
0be79cca88bb: Waiting
25edbec0eaea: Waiting
0905ff81ba68: Mounted from kubevirt/cirros-registry-disk-demo
0be79cca88bb: Mounted from kubevirt/cirros-registry-disk-demo
25edbec0eaea: Mounted from kubevirt/cirros-registry-disk-demo
ce1467aaf4d1: Pushed
devel: digest: sha256:fcbc3036d4c94e1e82909e38d072c30c27bb129e038f148a0ca38c7c76440af6 size: 1161
The push refers to a repository [localhost:33415/kubevirt/alpine-registry-disk-demo]
ae475f15e631: Preparing
0905ff81ba68: Preparing
0be79cca88bb: Preparing
25edbec0eaea: Preparing
0905ff81ba68: Waiting
0be79cca88bb: Waiting
25edbec0eaea: Waiting
0905ff81ba68: Mounted from kubevirt/fedora-cloud-registry-disk-demo
0be79cca88bb: Mounted from kubevirt/fedora-cloud-registry-disk-demo
25edbec0eaea: Mounted from kubevirt/fedora-cloud-registry-disk-demo
ae475f15e631: Pushed
devel: digest: sha256:f783160f6057f0c14c6fa090a70f69909738f2b1b4e1c4727cfa6f3b14c522c6 size: 1160
The push refers to a repository [localhost:33415/kubevirt/subresource-access-test]
1b5f7ec5fd9f: Preparing
f11f8a160bfe: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/vm-killer
f11f8a160bfe: Pushed
1b5f7ec5fd9f: Pushed
devel: digest: sha256:66057928cfa9452c55b5e1a19fb946127cdfaa62467851b9707f9068b1544253 size: 948
The push refers to a repository [localhost:33415/kubevirt/winrmcli]
19038f244d65: Preparing
40d75932eef1: Preparing
8acbb2baad2c: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/subresource-access-test
19038f244d65: Pushed
8acbb2baad2c: Pushed
40d75932eef1: Pushed
devel: digest: sha256:6c4706dbe9a291a8d82e1ae6b8ee8539aa180e5c59a4efa88501a59f9b1069b0 size: 1165
The push refers to a repository [localhost:33415/kubevirt/example-hook-sidecar]
4e116fa0e2c4: Preparing
39bae602f753: Preparing
4e116fa0e2c4: Pushed
39bae602f753: Pushed
devel: digest: sha256:db2e7ddf7e8a8c1b0af86f20232172fe191ead293b2850eb61481ec2f6f6294d size: 740
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt'
Done
./cluster/clean.sh
+ source hack/common.sh
++++ dirname 'hack/common.sh[0]'
+++ cd hack/../
+++ pwd
++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt
++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out
++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/vendor
++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/cmd
++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/tests
++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/apidocs
++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests
++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/templates/manifests
++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/client-python
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ KUBEVIRT_NUM_NODES=2
++ '[' -z kubevirt-functional-tests-k8s-1.10.3-dev ']'
++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-dev0
++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-dev0
+++ kubevirt_version
+++ '[' -n '' ']'
+++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/.git ']'
++++ git describe --always --tags
+++ echo v0.7.0-109-gf89d23a
++ KUBEVIRT_VERSION=v0.7.0-109-gf89d23a
+ source cluster/k8s-1.11.0/provider.sh
++ set -e
++ image=k8s-1.11.0@sha256:1e936a35d84102f96253002e463c6142c3422f4d6012ef4bdcc5e9cd6a63d359
++ source cluster/ephemeral-provider-common.sh
+++ set -e
+++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a'
+ source hack/config.sh
++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh
+++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar'
+++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar'
+++ docker_prefix=kubevirt
+++ docker_tag=latest
+++ master_ip=192.168.200.2
+++ network_provider=flannel
+++ namespace=kube-system
++ test -f hack/config-provider-k8s-1.11.0.sh
++ source hack/config-provider-k8s-1.11.0.sh
+++ master_ip=127.0.0.1
+++ docker_tag=devel
+++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig
+++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl
+++ docker_prefix=localhost:33415/kubevirt
+++ manifest_docker_prefix=registry:5000/kubevirt
++ test -f hack/config-local.sh
++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace
+ echo 'Cleaning up ...'
Cleaning up ...
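The long per-resource trace that follows repeats one pattern for every resource kind in every namespace. Condensed, the cleanup pass amounts to something like this (a sketch of the pattern visible in the trace below, not the verbatim contents of clean.sh):

    # Delete every kubevirt.io-labeled resource kind in both namespaces.
    namespaces=(default kube-system)
    kinds="apiservices deployment rs services validatingwebhookconfiguration \
    secrets pv pvc ds customresourcedefinitions pods clusterrolebinding \
    rolebinding roles clusterroles serviceaccounts"

    for ns in "${namespaces[@]}"; do
        for kind in $kinds; do
            kubectl -n "$ns" delete "$kind" -l kubevirt.io
        done
    done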
+ cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers
+ grep foregroundDeleteVirtualMachine
+ read p
error: the server doesn't have a resource type "vmis"
+ _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0
No resources found
+ _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0
No resources found
+ _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0
No resources found
+ namespaces=(default ${namespace})
+ for i in '${namespaces[@]}'
+ _kubectl -n default delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n default delete deployment -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete deployment -l kubevirt.io
No resources found
+ _kubectl -n default delete rs -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete rs -l kubevirt.io
No resources found
+ _kubectl -n default delete services -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete services -l kubevirt.io
No resources found
+ _kubectl -n default delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io
No resources found
+ _kubectl -n default delete secrets -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete secrets -l kubevirt.io
No resources found
+ _kubectl -n default delete pv -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete pv -l kubevirt.io
No resources found
+ _kubectl -n default delete pvc -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete pvc -l kubevirt.io
No resources found
+ _kubectl -n default delete ds -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete ds -l kubevirt.io
No resources found
+ _kubectl -n default delete customresourcedefinitions -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete customresourcedefinitions -l kubevirt.io
No resources found
+ _kubectl -n default delete pods -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete pods -l kubevirt.io
No resources found
+ _kubectl -n default delete clusterrolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete clusterrolebinding -l kubevirt.io
No resources found
+ _kubectl -n default delete rolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete rolebinding -l kubevirt.io
No resources found
+ _kubectl -n default delete roles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete roles -l kubevirt.io
No resources found
+ _kubectl -n default delete clusterroles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete clusterroles -l kubevirt.io
No resources found
+ _kubectl -n default delete serviceaccounts -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n default delete serviceaccounts -l kubevirt.io
No resources found
++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io
++ wc -l
++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ cluster/k8s-1.11.0/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io
No resources found.
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found
+ '[' 0 -gt 0 ']'
+ for i in '${namespaces[@]}'
+ _kubectl -n kube-system delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete deployment -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete deployment -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete rs -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete rs -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete services -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete services -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete secrets -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete secrets -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pv -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete pv -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pvc -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete pvc -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete ds -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete ds -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pods -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete pods -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete rolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete rolebinding -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete roles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete roles -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete clusterroles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete clusterroles -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete serviceaccounts -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io
No resources found
++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io
++ wc -l
++ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
++ cluster/k8s-1.11.0/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io
No resources found.
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found
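The script probes for the legacy OfflineVirtualMachine CRD by counting output lines of a get that is expected to fail, which is why the NotFound errors above are harmless. An exit-code check expresses the same intent more directly (an illustrative sketch, not taken from clean.sh):

    # kubectl exits non-zero when the CRD is absent.
    if kubectl get crd offlinevirtualmachines.kubevirt.io >/dev/null 2>&1; then
        echo "legacy OfflineVirtualMachine CRD still present; deleting"
        kubectl delete crd offlinevirtualmachines.kubevirt.io
    fi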
+ '[' 0 -gt 0 ']'
+ sleep 2
+ echo Done
Done
./cluster/deploy.sh
+ source hack/common.sh
++++ dirname 'hack/common.sh[0]'
+++ cd hack/../
+++ pwd
++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt
++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out
++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/vendor
++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/cmd
++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/tests
++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/apidocs
++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests
++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/templates/manifests
++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/client-python
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ KUBEVIRT_NUM_NODES=2
++ '[' -z kubevirt-functional-tests-k8s-1.10.3-dev ']'
++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-dev0
++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-dev0
+++ kubevirt_version
+++ '[' -n '' ']'
+++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/.git ']'
++++ git describe --always --tags
+++ echo v0.7.0-109-gf89d23a
++ KUBEVIRT_VERSION=v0.7.0-109-gf89d23a
+ source cluster/k8s-1.11.0/provider.sh
++ set -e
++ image=k8s-1.11.0@sha256:1e936a35d84102f96253002e463c6142c3422f4d6012ef4bdcc5e9cd6a63d359
++ source cluster/ephemeral-provider-common.sh
+++ set -e
+++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a'
+ source hack/config.sh
++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ KUBEVIRT_PROVIDER=k8s-1.11.0
++ source hack/config-default.sh source hack/config-k8s-1.11.0.sh
+++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar'
+++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar'
+++ docker_prefix=kubevirt
+++ docker_tag=latest
+++ master_ip=192.168.200.2
+++ network_provider=flannel
+++ namespace=kube-system
++ test -f hack/config-provider-k8s-1.11.0.sh
++ source hack/config-provider-k8s-1.11.0.sh
+++ master_ip=127.0.0.1
+++ docker_tag=devel
+++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubeconfig
+++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.11.0/.kubectl
+++ docker_prefix=localhost:33415/kubevirt
+++ manifest_docker_prefix=registry:5000/kubevirt
++ test -f hack/config-local.sh
++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace
+ echo 'Deploying ...'
Deploying ...
+ [[ -z k8s-1.10.3-dev ]]
+ [[ k8s-1.10.3-dev =~ .*-dev ]]
+ _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/dev -R
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/dev -R
serviceaccount/kubevirt-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-apiserver-auth-delegator created
rolebinding.rbac.authorization.k8s.io/kubevirt-apiserver created
role.rbac.authorization.k8s.io/kubevirt-apiserver created
clusterrole.rbac.authorization.k8s.io/kubevirt-apiserver created
clusterrole.rbac.authorization.k8s.io/kubevirt-controller created
serviceaccount/kubevirt-controller created
serviceaccount/kubevirt-privileged created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-controller-cluster-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-privileged-cluster-admin created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:admin created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:edit created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:view created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:default created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt.io:default created
customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancereplicasets.kubevirt.io created
service/virt-api created
deployment.extensions/virt-api created
service/virt-controller created
deployment.extensions/virt-controller created
daemonset.extensions/virt-handler created
customresourcedefinition.apiextensions.k8s.io/virtualmachines.kubevirt.io created
customresourcedefinition.apiextensions.k8s.io/virtualmachineinstances.kubevirt.io created
customresourcedefinition.apiextensions.k8s.io/virtualmachineinstancepresets.kubevirt.io created
+ _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R
+ export KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.11.0/.kubeconfig
+ cluster/k8s-1.11.0/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R
persistentvolumeclaim/disk-alpine created
persistentvolume/host-path-disk-alpine created
persistentvolumeclaim/disk-custom created
persistentvolume/host-path-disk-custom created
daemonset.extensions/disks-images-provider created
serviceaccount/kubevirt-testing created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-testing-cluster-admin created
+ [[ k8s-1.11.0 =~ os-* ]]
+ echo Done
Done
+ namespaces=(kube-system default)
+ [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]]
+ timeout=300
+ sample=30
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n 'virt-api-7d79975b94-k9p2l 0/1 ContainerCreating 0 7s
virt-handler-6qqwn 0/1 ContainerCreating 0 7s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ cluster/kubectl.sh get pods -n kube-system --no-headers
+ grep -v Running
virt-api-7d79975b94-k9p2l   0/1   ContainerCreating   0   8s
virt-handler-6qqwn          0/1   ContainerCreating   0   8s
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n kube-system --no-headers
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n false ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ grep false
false
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-cbzv8           1/1       Running   0          9m
coredns-78fcdf6894-hx6v9           1/1       Running   0          9m
disks-images-provider-mnz5v        1/1       Running   0          1m
disks-images-provider-wv8xk        1/1       Running   0          1m
etcd-node01                        1/1       Running   0          9m
kube-apiserver-node01              1/1       Running   0          9m
kube-controller-manager-node01     1/1       Running   0          9m
kube-flannel-ds-8d6fs              1/1       Running   0          9m
kube-flannel-ds-bf4rq              1/1       Running   0          9m
kube-proxy-6jntm                   1/1       Running   0          9m
kube-proxy-w56s6                   1/1       Running   0          9m
kube-scheduler-node01              1/1       Running   0          9m
virt-api-7d79975b94-k9p2l          1/1       Running   0          1m
virt-controller-67dcdd8464-h8g8g   1/1       Running   0          1m
virt-controller-67dcdd8464-hwd49   1/1       Running   0          1m
virt-handler-6qqwn                 1/1       Running   0          1m
virt-handler-bcx96                 1/1       Running   0          1m
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n default --no-headers
++ cluster/kubectl.sh get pods -n default --no-headers
++ grep -v Running
No resources found.
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n default
+ cluster/kubectl.sh get pods -n default
No resources found.
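The trace above is two iterations of the deploy script's readiness polling (first on pod phase, then on container readiness). Reduced to its core, the loop looks roughly like this (a condensed sketch of the pattern in the trace, not the script's verbatim text):

    # Poll until no pod in the namespace reports a non-Running phase,
    # failing after a 300s budget consumed in 30s steps.
    timeout=300; sample=30; ns=kube-system
    current_time=0
    while [ -n "$(kubectl get pods -n "$ns" --no-headers | grep -v Running)" ]; do
        echo "Waiting for kubevirt pods to enter the Running state ..."
        sleep "$sample"
        current_time=$((current_time + sample))
        [ "$current_time" -gt "$timeout" ] && exit 1
    done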
+ kubectl version + cluster/kubectl.sh version Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"} + ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/junit.xml' + [[ k8s-1.10.3-dev =~ windows.* ]] + FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/junit.xml' + make functest hack/dockerized "hack/build-func-tests.sh" sha256:eac86de70a4e6cb392340c5eb3c9e29aa4eee64229c68e6e8a3ba9514fb773e5 go version go1.10 linux/amd64 Waiting for rsyncd to be ready go version go1.10 linux/amd64 Compiling tests... compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1532505951 Will run 145 of 145 specs • [SLOW TEST:55.704 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81 should have cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:82 ------------------------------ • [SLOW TEST:163.244 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81 with injected ssh-key /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:92 should have ssh-key under authorized keys /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:93 ------------------------------ • [SLOW TEST:65.792 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userData source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:118 should process provided cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:119 ------------------------------ • [SLOW TEST:48.841 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 should take user-data from k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:162 ------------------------------ • [SLOW TEST:54.500 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a cirros image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 should return that we are running cirros /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67 ------------------------------ • [SLOW TEST:54.123 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35 A new 
VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a fedora image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:76 should return that we are running fedora /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:77 ------------------------------ • [SLOW TEST:56.732 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 should be able to reconnect to console multiple times /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:86 ------------------------------ • ------------------------------ • [SLOW TEST:59.981 seconds] Health Monitoring /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:37 A VirtualMachineInstance with a watchdog device /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:56 should be shut down when the watchdog expires /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:57 ------------------------------ • [SLOW TEST:16.721 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should successfully start with hook sidecar annotation /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:60 ------------------------------ • [SLOW TEST:18.402 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should call Collect and OnDefineDomain on the hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:67 ------------------------------ • [SLOW TEST:17.668 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should update domain XML with SM BIOS properties /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:83 ------------------------------ • ------------------------------ • [SLOW TEST:5.223 seconds] Subresource Api /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37 Rbac Authorization /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48 Without permissions /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:56 should not be able to access subresource endpoint /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:57 ------------------------------ •• ------------------------------ • [SLOW TEST:105.773 seconds] Slirp /root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:39 should be able to /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 VirtualMachineInstance with slirp interface /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] Templates 
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42 Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60 with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193 should succeed to generate a VM JSON file using oc-process command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:150 Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1391 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42 Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60 with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193 with given VM JSON from the Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152 should succeed to create a VM using oc-create command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:156 Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1391 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42 Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60 with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193 with given VM JSON from the Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152 with given VM from the VM JSON /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158 should succeed to launch a VMI using oc-patch command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:161 Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1391 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] Templates /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42 Launching VMI from VM Template [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60 with given Fedora Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193 with given VM JSON from the Template /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152 with given VM from the VM JSON /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158 with given VMI from the VM /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:163 should succeed to terminate the VMI using oc-patch command /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:166 Skip test that requires oc binary /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1391 ------------------------------ • [SLOW TEST:36.083 seconds] LeaderElection /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:43 Start a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:53 when the controller pod is not running /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:54 should success /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:55 ------------------------------ •Service cluster-ip-vmi successfully exposed for virtualmachineinstance testvmisbcrj ------------------------------ • [SLOW TEST:54.254 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose ClusterIP service 
/root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:68 Should expose a Cluster IP service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:71 ------------------------------ Service cluster-ip-target-vmi successfully exposed for virtualmachineinstance testvmisbcrj •Service node-port-vmi successfully exposed for virtualmachineinstance testvmisbcrj ------------------------------ • [SLOW TEST:7.103 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose NodePort service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:124 Should expose a NodePort service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:129 ------------------------------ Service cluster-ip-udp-vmi successfully exposed for virtualmachineinstance testvmi4rdcg • [SLOW TEST:58.157 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VMI /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:166 Expose ClusterIP UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:173 Should expose a ClusterIP service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:177 ------------------------------ Service node-port-udp-vmi successfully exposed for virtualmachineinstance testvmi4rdcg • [SLOW TEST:9.130 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VMI /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:166 Expose NodePort UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:205 Should expose a NodePort service on a VMI and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:210 ------------------------------ Service cluster-ip-vmirs successfully exposed for vmirs replicasethxzkm • [SLOW TEST:80.371 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VMI replica set /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:253 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:286 Should create a ClusterIP service on VMRS and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:290 ------------------------------ Service cluster-ip-vm successfully exposed for virtualmachine testvmi6bcx6 VM testvmi6bcx6 was scheduled to start • [SLOW TEST:58.706 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on an VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:318 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:362 Connect to ClusterIP services that was set when VM was offline /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:363 ------------------------------
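The Expose specs above all drive the same path: a Service is created in front of a VMI, a VMI replica set, or a VM, and the test then connects through the ClusterIP or NodePort. Reproduced by hand it is roughly the following, with placeholder object names standing in for the generated testvmi names from this run:

virtctl expose virtualmachineinstance vmi-demo --name cluster-ip-demo --type ClusterIP --port 27017 --target-port 22
virtctl expose virtualmachineinstance vmi-demo --name node-port-demo --type NodePort --port 27017 --target-port 22
cluster/kubectl.sh get services

The UDP entries add --protocol UDP, and the replica-set and VM entries swap the first argument for virtualmachineinstancereplicaset and virtualmachine respectively.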
volumedisk0 compute • [SLOW TEST:51.265 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with 3 CPU cores /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:56 should report 3 cpu cores under guest OS /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:62 ------------------------------ • [SLOW TEST:17.942 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:108 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-2Mi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ S [SKIPPING] [0.217 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:108 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-1Gi [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 No node with hugepages hugepages-1Gi capacity /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:160 ------------------------------ • ------------------------------ • [SLOW TEST:118.385 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238 when CPU model defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:284 should report defined CPU model /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:285 ------------------------------ • [SLOW TEST:126.402 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238 when CPU model equals to passthrough /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:312 should report exactly the same model as node CPU /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:313 ------------------------------ • [SLOW TEST:117.148 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238 when CPU model not defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:336 should report CPU model from libvirt capabilities /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:337 ------------------------------ • [SLOW TEST:50.684 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 New VirtualMachineInstance with all supported drives /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:357 should have all the device nodes /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:380 ------------------------------ Pod name: disks-images-provider-mnz5v Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-wv8xk Pod phase: Running copy all images to host mount directory Pod name: virt-api-7d79975b94-k9p2l Pod phase: Running 2018/07/25 08:31:06 http: TLS handshake error from 10.244.1.1:56872: EOF level=info timestamp=2018-07-25T08:31:09.901082Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/25 08:31:16 http: TLS handshake error from 10.244.1.1:56878: EOF level=info timestamp=2018-07-25T08:31:17.588933Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET 
url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=error timestamp=2018-07-25T08:31:17.697922Z pos=subresource.go:88 component=virt-api msg= 2018/07/25 08:31:17 http: response.WriteHeader on hijacked connection level=error timestamp=2018-07-25T08:31:17.698210Z pos=subresource.go:100 component=virt-api reason="read tcp 10.244.1.3:8443->10.244.0.0:36266: use of closed network connection" msg="error ecountered reading from websocket stream" level=info timestamp=2018-07-25T08:31:17.698271Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2/namespaces/kubevirt-test-default/virtualmachineinstances/testvmitdmlj/console proto=HTTP/1.1 statusCode=200 contentLength=0 level=info timestamp=2018-07-25T08:31:17.828692Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:18.002598Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:18.010987Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:18.158430Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:18.166582Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:18.319413Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:18.328171Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-controller-67dcdd8464-fgz4n Pod phase: Running level=info timestamp=2018-07-25T08:18:13.386082Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-67dcdd8464-h8g8g Pod phase: Running level=info timestamp=2018-07-25T08:26:41.270859Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi9nt79 kind= uid=74f934a6-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T08:26:41.401442Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi9nt79\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi9nt79" level=info timestamp=2018-07-25T08:26:41.500839Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io 
\"testvmi9nt79\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi9nt79" level=info timestamp=2018-07-25T08:26:58.500148Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmidp8vk kind= uid=7f3693fc-8fe4-11e8-a068-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T08:26:58.511787Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmidp8vk kind= uid=7f3693fc-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T08:28:29.967309Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi658wh kind= uid=b5c290b0-8fe4-11e8-a068-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T08:28:29.970799Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi658wh kind= uid=b5c290b0-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T08:28:46.508076Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib4ltf kind= uid=bf9e1424-8fe4-11e8-a068-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T08:28:46.509248Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib4ltf kind= uid=bf9e1424-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T08:29:06.403369Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8flg5 kind= uid=cb66e407-8fe4-11e8-a068-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T08:29:06.555934Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8flg5 kind= uid=cb66e407-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T08:29:07.067921Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi8flg5\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi8flg5" level=info timestamp=2018-07-25T08:30:27.109774Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T08:30:27.112524Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T08:30:27.285865Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmitdmlj\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmitdmlj" Pod name: virt-handler-6qqwn Pod phase: Running level=info timestamp=2018-07-25T08:30:26.817337Z pos=server.go:75 
component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-25T08:30:26.817610Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi8flg5 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T08:30:26.874441Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind= uid=bf9e1424-8fe4-11e8-a068-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T08:30:26.874883Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object." level=info timestamp=2018-07-25T08:30:26.875091Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Processing shutdown." level=info timestamp=2018-07-25T08:30:26.875388Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T08:30:26.894301Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-25T08:30:26.894886Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-25T08:30:26.894969Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-25T08:30:26.895030Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T08:30:26.897279Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-25T08:30:26.942907Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-25T08:30:26.943033Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T08:30:26.943197Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-25T08:30:26.943383Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-handler-bcx96 Pod phase: Running level=info timestamp=2018-07-25T08:31:17.171180Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="Processing shutdown." 
level=info timestamp=2018-07-25T08:31:17.172437Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="grace period expired, killing deleted VirtualMachineInstance testvmitdmlj" level=info timestamp=2018-07-25T08:31:17.384644Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T08:31:17.384813Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object." level=info timestamp=2018-07-25T08:31:17.384841Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Processing shutdown." level=info timestamp=2018-07-25T08:31:17.384993Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T08:31:17.385503Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-25T08:31:17.385680Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-25T08:31:17.385859Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-25T08:31:17.386115Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T08:31:17.388113Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-25T08:31:17.899851Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-25T08:31:17.900013Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-25T08:31:17.900141Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-25T08:31:17.900214Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." Pod name: virt-launcher-testvmitdmlj-cqrgt Pod phase: Running level=info timestamp=2018-07-25T08:31:17.381599Z pos=manager.go:301 component=virt-launcher namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="Domain stopped." level=info timestamp=2018-07-25T08:31:17.381971Z pos=client.go:136 component=virt-launcher msg="Libvirt event 5 with reason 1 received" level=info timestamp=2018-07-25T08:31:17.383966Z pos=manager.go:312 component=virt-launcher namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="Domain undefined." 
level=info timestamp=2018-07-25T08:31:17.384099Z pos=server.go:96 component=virt-launcher namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="Signaled vmi kill" level=info timestamp=2018-07-25T08:31:17.384414Z pos=client.go:119 component=virt-launcher msg="domain status: 0:0" level=info timestamp=2018-07-25T08:31:17.386357Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-25T08:31:17.386447Z pos=client.go:136 component=virt-launcher msg="Libvirt event 1 with reason 0 received" level=info timestamp=2018-07-25T08:31:17.387466Z pos=client.go:119 component=virt-launcher msg="domain status: 0:0" level=info timestamp=2018-07-25T08:31:17.388630Z pos=client.go:145 component=virt-launcher msg="processed event" caught signal level=info timestamp=2018-07-25T08:31:17.899204Z pos=monitor.go:266 component=virt-launcher msg="Received signal 15." level=info timestamp=2018-07-25T08:31:18.142783Z pos=monitor.go:231 component=virt-launcher msg="Process 37c95155-6175-449d-9812-0b4d6b3c9607 and pid 182 is gone!" level=info timestamp=2018-07-25T08:31:18.143590Z pos=virt-launcher.go:234 component=virt-launcher msg="Waiting on final notifications to be sent to virt-handler." level=info timestamp=2018-07-25T08:31:18.143694Z pos=virt-launcher.go:242 component=virt-launcher msg="Final Delete notification sent" level=info timestamp=2018-07-25T08:31:18.143710Z pos=virt-launcher.go:348 component=virt-launcher msg=Exiting... • Failure [1.440 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Expected error: <*exec.ExitError | 0xc420873f00>: { ProcessState: { pid: 12641, status: 256, rusage: { Utime: {Sec: 0, Usec: 129617}, Stime: {Sec: 0, Usec: 38295}, Maxrss: 32296, Ixrss: 0, Idrss: 0, Isrss: 0, Minflt: 7058, Majflt: 1, Nswap: 0, Inblock: 304, Oublock: 32, Msgsnd: 0, Msgrcv: 0, Nsignals: 0, Nvcsw: 659, Nivcsw: 141, }, }, Stderr: nil, } exit status 1 not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:127 ------------------------------ STEP: verifying VIEW sa for verb get STEP: verifying EDIT sa for verb get STEP: verifying ADMIN sa for verb get STEP: verifying DEFAULT sa for verb get
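This failure is thin on detail by construction: the spec evidently shells out to a child process (the assertion is on an exec.ExitError) and only checks the exit code, and Stderr: nil in the dumped error shows stderr was not captured. One way to replay the probe outside the suite is kubectl's impersonation support; the service-account namespace and names below are illustrative guesses at what the VIEW/EDIT/ADMIN/DEFAULT steps use, not values read from this run:

cluster/kubectl.sh auth can-i get virtualmachineinstances.kubevirt.io --as=system:serviceaccount:kubevirt-test-default:view
cluster/kubectl.sh auth can-i get virtualmachineinstances.kubevirt.io --as=system:serviceaccount:kubevirt-test-default:default

kubectl auth can-i prints yes or no and exits 1 on a denial, which would match the exit status 1 above; so would any other kubectl error, which is exactly why the uncaptured stderr makes this log ambiguous.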
• Failure [0.817 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given an vm [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Expected error: <*exec.ExitError | 0xc4209d8600>: { ProcessState: { pid: 12697, status: 256, rusage: { Utime: {Sec: 0, Usec: 142223}, Stime: {Sec: 0, Usec: 23533}, Maxrss: 32296, Ixrss: 0, Idrss: 0, Isrss: 0, Minflt: 7202, Majflt: 0, Nswap: 0, Inblock: 0, Oublock: 32, Msgsnd: 0, Msgrcv: 0, Nsignals: 0, Nvcsw: 457, Nivcsw: 190, }, }, Stderr: nil, } exit status 1 not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:127 ------------------------------ STEP: verifying VIEW sa for verb get STEP: verifying EDIT sa for verb get STEP: verifying ADMIN sa for verb get STEP: verifying DEFAULT sa for verb get
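The vmi entry and this vm entry fail identically, and the vmi preset entry below does too, so the likelier culprit is the cluster's RBAC state rather than any single resource type. A reasonable first check is whether the KubeVirt cluster roles were deployed with the expected rules; the role names below are the ones KubeVirt ships in its manifests, treated here as an assumption about this particular build:

cluster/kubectl.sh get clusterroles | grep kubevirt
cluster/kubectl.sh describe clusterrole kubevirt.io:view kubevirt.io:edit kubevirt.io:admin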
proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:19.099103Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:19.109531Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:19.250551Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:19.259586Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:19.587906Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:19.598228Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:19.741476Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:19.751048Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:19.904946Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:19.917587Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:20.053116Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-25T08:31:20.061355Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136 Pod name: virt-controller-67dcdd8464-fgz4n Pod phase: Running level=info timestamp=2018-07-25T08:18:13.386082Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-67dcdd8464-h8g8g Pod phase: Running level=info timestamp=2018-07-25T08:26:41.270859Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi9nt79 kind= uid=74f934a6-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T08:26:41.401442Z pos=vmi.go:157 component=virt-controller service=http 
reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi9nt79\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi9nt79" level=info timestamp=2018-07-25T08:26:41.500839Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi9nt79\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi9nt79" level=info timestamp=2018-07-25T08:26:58.500148Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmidp8vk kind= uid=7f3693fc-8fe4-11e8-a068-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T08:26:58.511787Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmidp8vk kind= uid=7f3693fc-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T08:28:29.967309Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi658wh kind= uid=b5c290b0-8fe4-11e8-a068-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T08:28:29.970799Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi658wh kind= uid=b5c290b0-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T08:28:46.508076Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib4ltf kind= uid=bf9e1424-8fe4-11e8-a068-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T08:28:46.509248Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib4ltf kind= uid=bf9e1424-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T08:29:06.403369Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8flg5 kind= uid=cb66e407-8fe4-11e8-a068-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T08:29:06.555934Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8flg5 kind= uid=cb66e407-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T08:29:07.067921Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi8flg5\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi8flg5" level=info timestamp=2018-07-25T08:30:27.109774Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-25T08:30:27.112524Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-25T08:30:27.285865Z 
pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmitdmlj\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmitdmlj"
Pod name: virt-handler-6qqwn
Pod phase: Running
level=info timestamp=2018-07-25T08:30:26.817337Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-25T08:30:26.817610Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmi8flg5 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-25T08:30:26.874441Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind= uid=bf9e1424-8fe4-11e8-a068-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-25T08:30:26.874883Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object."
level=info timestamp=2018-07-25T08:30:26.875091Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-25T08:30:26.875388Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-25T08:30:26.894301Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-25T08:30:26.894886Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-25T08:30:26.894969Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-25T08:30:26.895030Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-25T08:30:26.897279Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-25T08:30:26.942907Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-25T08:30:26.943033Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-25T08:30:26.943197Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-25T08:30:26.943383Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmib4ltf kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
Pod name: virt-handler-bcx96
Pod phase: Running
level=info timestamp=2018-07-25T08:31:17.171180Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="Processing shutdown."
level=info timestamp=2018-07-25T08:31:17.172437Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="grace period expired, killing deleted VirtualMachineInstance testvmitdmlj"
level=info timestamp=2018-07-25T08:31:17.384644Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="Synchronization loop succeeded."
level=info timestamp=2018-07-25T08:31:17.384813Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object."
level=info timestamp=2018-07-25T08:31:17.384841Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Processing shutdown."
level=info timestamp=2018-07-25T08:31:17.384993Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-25T08:31:17.385503Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-25T08:31:17.385680Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=Domain uid= msg="Domain deleted"
level=info timestamp=2018-07-25T08:31:17.385859Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-25T08:31:17.386115Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-25T08:31:17.388113Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED"
level=info timestamp=2018-07-25T08:31:17.899851Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-25T08:31:17.900013Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
level=info timestamp=2018-07-25T08:31:17.900141Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain."
level=info timestamp=2018-07-25T08:31:17.900214Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmitdmlj kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded."
Pod name: virt-launcher-testvmitdmlj-cqrgt
Pod phase: Running
level=info timestamp=2018-07-25T08:31:17.381971Z pos=client.go:136 component=virt-launcher msg="Libvirt event 5 with reason 1 received"
level=info timestamp=2018-07-25T08:31:17.383966Z pos=manager.go:312 component=virt-launcher namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="Domain undefined."
level=info timestamp=2018-07-25T08:31:17.384099Z pos=server.go:96 component=virt-launcher namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="Signaled vmi kill"
level=info timestamp=2018-07-25T08:31:17.384414Z pos=client.go:119 component=virt-launcher msg="domain status: 0:0"
level=info timestamp=2018-07-25T08:31:17.386357Z pos=client.go:145 component=virt-launcher msg="processed event"
level=info timestamp=2018-07-25T08:31:17.386447Z pos=client.go:136 component=virt-launcher msg="Libvirt event 1 with reason 0 received"
level=info timestamp=2018-07-25T08:31:17.387466Z pos=client.go:119 component=virt-launcher msg="domain status: 0:0"
level=info timestamp=2018-07-25T08:31:17.388630Z pos=client.go:145 component=virt-launcher msg="processed event"
caught signal
level=info timestamp=2018-07-25T08:31:17.899204Z pos=monitor.go:266 component=virt-launcher msg="Received signal 15."
level=info timestamp=2018-07-25T08:31:18.142783Z pos=monitor.go:231 component=virt-launcher msg="Process 37c95155-6175-449d-9812-0b4d6b3c9607 and pid 182 is gone!"
level=info timestamp=2018-07-25T08:31:18.143590Z pos=virt-launcher.go:234 component=virt-launcher msg="Waiting on final notifications to be sent to virt-handler."
level=info timestamp=2018-07-25T08:31:18.143694Z pos=virt-launcher.go:242 component=virt-launcher msg="Final Delete notification sent"
level=info timestamp=2018-07-25T08:31:18.143710Z pos=virt-launcher.go:348 component=virt-launcher msg=Exiting...
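The virt-handler and virt-launcher records above trace KubeVirt's deletion path: the VMI is marked deleted, virt-handler reports "grace period expired" and kills the domain, libvirt emits DELETED events, and virt-launcher exits once its final Delete notification is out. The grace period itself comes from the VMI spec. A minimal sketch of a VMI that grants the guest 30 seconds to shut down, assuming the v1alpha2 schema this run uses (the name, image, and period are illustrative, not taken from the tests):

    cat <<EOF | kubectl apply -f -
    apiVersion: kubevirt.io/v1alpha2
    kind: VirtualMachineInstance
    metadata:
      name: graceful-vmi
    spec:
      terminationGracePeriodSeconds: 30   # virt-handler waits this long before force-killing the domain
      domain:
        resources:
          requests:
            memory: 64M
        devices:
          disks:
          - name: rootdisk
            volumeName: rootvolume
            disk:
              bus: virtio
      volumes:
      - name: rootvolume
        registryDisk:
          image: kubevirt/cirros-registry-disk-demo
    EOF

Setting the period to 0 skips the guest shutdown window entirely, so deletion kills the domain immediately.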
virt-launcher exited with code 0

• Failure [0.802 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi preset [It]
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

      Expected error:
          <*exec.ExitError | 0xc420642780>: {
              ProcessState: {
                  pid: 12750,
                  status: 256,
                  rusage: {
                      Utime: {Sec: 0, Usec: 141667},
                      Stime: {Sec: 0, Usec: 27284},
                      Maxrss: 32296,
                      Ixrss: 0,
                      Idrss: 0,
                      Isrss: 0,
                      Minflt: 7021,
                      Majflt: 1,
                      Nswap: 0,
                      Inblock: 256,
                      Oublock: 32,
                      Msgsnd: 0,
                      Msgrcv: 0,
                      Nsignals: 0,
                      Nvcsw: 579,
                      Nivcsw: 138,
                  },
              },
              Stderr: nil,
          }
          exit status 1
      not to have occurred

      /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:127
------------------------------
STEP: verifying VIEW sa for verb get
STEP: verifying EDIT sa for verb get
STEP: verifying ADMIN sa for verb get
STEP: verifying DEFAULT sa for verb get
Pod name: disks-images-provider-mnz5v
Pod phase: Running
copy all images to host mount directory
Pod name: disks-images-provider-wv8xk
Pod phase: Running
copy all images to host mount directory
Pod name: virt-api-7d79975b94-k9p2l
Pod phase: Running
level=info timestamp=2018-07-25T08:31:19.598228Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-25T08:31:19.741476Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-25T08:31:19.751048Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-25T08:31:19.904946Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-25T08:31:19.917587Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-25T08:31:20.053116Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-25T08:31:20.061355Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-25T08:31:20.376229Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-25T08:31:20.385820Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-25T08:31:20.525304Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-25T08:31:20.533985Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-25T08:31:20.671887Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-25T08:31:20.680501Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-25T08:31:20.819852Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
level=info timestamp=2018-07-25T08:31:20.830483Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url="/apis/subresources.kubevirt.io/v1alpha2?timeout=32s" proto=HTTP/2.0 statusCode=200 contentLength=136
Pod name: virt-controller-67dcdd8464-fgz4n
Pod phase: Running
level=info timestamp=2018-07-25T08:18:13.386082Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182
Pod name: virt-controller-67dcdd8464-h8g8g
Pod phase: Running
level=info timestamp=2018-07-25T08:26:41.270859Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi9nt79 kind= uid=74f934a6-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-25T08:26:41.401442Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi9nt79\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi9nt79"
level=info timestamp=2018-07-25T08:26:41.500839Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi9nt79\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi9nt79"
level=info timestamp=2018-07-25T08:26:58.500148Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmidp8vk kind= uid=7f3693fc-8fe4-11e8-a068-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-25T08:26:58.511787Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmidp8vk kind= uid=7f3693fc-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-25T08:28:29.967309Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi658wh kind= uid=b5c290b0-8fe4-11e8-a068-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-25T08:28:29.970799Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi658wh kind= uid=b5c290b0-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-25T08:28:46.508076Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib4ltf kind= uid=bf9e1424-8fe4-11e8-a068-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-25T08:28:46.509248Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib4ltf kind= uid=bf9e1424-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-25T08:29:06.403369Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8flg5 kind= uid=cb66e407-8fe4-11e8-a068-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-25T08:29:06.555934Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmi8flg5 kind= uid=cb66e407-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-25T08:29:07.067921Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi8flg5\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi8flg5"
level=info timestamp=2018-07-25T08:30:27.109774Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="Initializing VirtualMachineInstance"
level=info timestamp=2018-07-25T08:30:27.112524Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmitdmlj kind= uid=fb8f81dd-8fe4-11e8-a068-525500d15501 msg="Marking VirtualMachineInstance as initialized"
level=info timestamp=2018-07-25T08:30:27.285865Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmitdmlj\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmitdmlj"
Pod name: virt-handler-6qqwn
Pod phase: Running
[log dump identical to the virt-handler-6qqwn output above]
Pod name: virt-handler-bcx96
Pod phase: Running
[log dump identical to the virt-handler-bcx96 output above]
Pod name: virt-launcher-testvmitdmlj-cqrgt
Pod phase: Running
[log dump identical to the virt-launcher-testvmitdmlj-cqrgt output above]
virt-launcher exited with code 0

• Failure [0.793 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vmi replica set [It]
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

      Expected error:
          <*exec.ExitError | 0xc4203fb8a0>: {
              ProcessState: {
                  pid: 12805,
                  status: 256,
                  rusage: {
                      Utime: {Sec: 0, Usec: 126960},
                      Stime: {Sec: 0, Usec: 23438},
                      Maxrss: 32296,
                      Ixrss: 0,
                      Idrss: 0,
                      Isrss: 0,
                      Minflt: 7011,
                      Majflt: 0,
                      Nswap: 0,
                      Inblock: 0,
                      Oublock: 32,
                      Msgsnd: 0,
                      Msgrcv: 0,
                      Nsignals: 0,
                      Nvcsw: 379,
                      Nivcsw: 99,
                  },
              },
              Stderr: nil,
          }
          exit status 1
      not to have occurred

      /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:127
------------------------------
STEP: verifying VIEW sa for verb get
STEP: verifying EDIT sa for verb get
STEP: verifying ADMIN sa for verb get
STEP: verifying DEFAULT sa for verb get
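All four User Access failures wrap the same error: an *exec.ExitError carrying "exit status 1" with empty stderr (the raw wait status 256 is just exit code 1 shifted left eight bits). The STEP lines show the test shelling out once per service account and verb, which is consistent with a kubectl authorization query; kubectl auth can-i exits non-zero when the answer is "no", so a missing RBAC binding for any of the four accounts fails the spec. A hedged reproduction of one such probe (the service-account and resource names here are illustrative, not lifted from access_test.go):

    kubectl auth can-i get virtualmachineinstancepresets.kubevirt.io \
        -n kubevirt-test-default \
        --as=system:serviceaccount:kubevirt-test-default:kubevirt-view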
• [SLOW TEST:52.460 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with Disk PVC
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:48.888 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with CDRom PVC
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:131.022 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started and stopped multiple times
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with Disk PVC
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:118.396 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started and stopped multiple times
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with CDRom PVC
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:54.092 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With an emptyDisk defined
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:113
      should create a writeable emptyDisk with the right capacity
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:115
------------------------------
• [SLOW TEST:51.502 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With an emptyDisk defined and a specified serial number
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:163
      should create a writeable emptyDisk with the specified serial number
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:165
------------------------------
• [SLOW TEST:49.194 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With ephemeral alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205
      should be successfully started
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:207
------------------------------
• [SLOW TEST:112.799 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With ephemeral alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205
      should not persist data
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:218
------------------------------
• [SLOW TEST:140.232 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With VirtualMachineInstance with two PVCs
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:266
      should start vmi multiple times
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:278
------------------------------
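The Alpine PVC and "ephemeral ... should not persist data" results above exercise the two PVC-backed volume types: a plain persistentVolumeClaim volume writes straight to the claim, while an ephemeral volume layers a local copy-on-write image over a read-only claim, which is why data written there is gone after a restart. A sketch showing the two stanzas side by side, again assuming the v1alpha2 schema (the VMI name is illustrative; disk-alpine mirrors the claim naming the suite appears to use, but is an assumption):

    cat <<EOF | kubectl apply -f -
    apiVersion: kubevirt.io/v1alpha2
    kind: VirtualMachineInstance
    metadata:
      name: pvc-demo-vmi
    spec:
      domain:
        resources:
          requests:
            memory: 64M
        devices:
          disks:
          - name: pvcdisk
            volumeName: pvcvolume
            disk:
              bus: virtio
          - name: scratchdisk
            volumeName: scratchvolume
            disk:
              bus: virtio
      volumes:
      - name: pvcvolume
        persistentVolumeClaim:
          claimName: disk-alpine          # writes persist across VMI restarts
      - name: scratchvolume
        ephemeral:
          persistentVolumeClaim:
            claimName: disk-alpine        # backing claim stays pristine; writes land in a local COW layer
    EOF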
• [SLOW TEST:75.735 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting and stopping the same VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90
    with ephemeral registry disk
    /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91
      should success multiple times
      /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92
------------------------------
• [SLOW TEST:15.459 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111
    with ephemeral registry disk
    /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112
      should not modify the spec on status update
      /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113
------------------------------
• [SLOW TEST:24.798 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting multiple VMIs
  /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129
    with ephemeral registry disk
    /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130
      should success
      /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131
------------------------------
• [SLOW TEST:120.079 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  should be able to reach
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    the Inbound VirtualMachineInstance
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
••••••••••
------------------------------
• [SLOW TEST:55.707 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom interface model
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:379
    should expose the right device type to the guest
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:380
------------------------------
•
------------------------------
• [SLOW TEST:55.598 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom MAC address
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:413
    should configure custom MAC address
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:414
------------------------------
• [SLOW TEST:57.816 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom MAC address in non-conventional format
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:425
    should configure custom MAC address
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:426
------------------------------
• [SLOW TEST:54.834 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom MAC address and slirp interface
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:438
    should configure custom MAC address
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:439
------------------------------
• [SLOW TEST:58.676 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with disabled automatic attachment of interfaces
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:451
    should not configure any external interfaces
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:452
------------------------------
•
------------------------------
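The custom-MAC and custom-interface-model specs above boot VMIs whose network device is pinned in the domain spec instead of left to libvirt defaults. A sketch of the relevant slice of a VMI spec, to be merged into a manifest like the ones above; this assumes the v1alpha2 interface/network fields these tests exercise, and every value is illustrative:

    spec:
      domain:
        devices:
          interfaces:
          - name: default
            model: e1000                    # "custom interface model": NIC model exposed to the guest
            macAddress: de:ad:00:00:be:af   # "custom MAC address"; dash-separated forms are the "non-conventional format"
            bridge: {}
      networks:
      - name: default
        pod: {}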
• [SLOW TEST:16.161 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    should start it
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:76
------------------------------
• [SLOW TEST:16.866 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    should attach virt-launcher to it
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:82
------------------------------
••••
------------------------------
• [SLOW TEST:52.319 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with boot order
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:170
      should be able to boot from selected disk
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        Alpine as first boot
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:26.083 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with boot order
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:170
      should be able to boot from selected disk
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        Cirros as first boot
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:15.430 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with user-data
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:201
      without k8s secret
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:202
        should retry starting the VirtualMachineInstance
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:203
------------------------------
• [SLOW TEST:16.113 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with user-data
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:201
      without k8s secret
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:202
        should log warning and proceed once the secret is there
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:233
------------------------------
• [SLOW TEST:40.386 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    when virt-launcher crashes
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:281
      should be stopped and have Failed phase
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:282
------------------------------
• [SLOW TEST:24.238 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    when virt-handler crashes
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:304
      should recover and continue management
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:305
------------------------------
• [SLOW TEST:16.284 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    when virt-handler is responsive
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:335
      should indicate that a node is ready for vmis
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:336
------------------------------
• [SLOW TEST:68.822 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    when virt-handler is not responsive
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:366
      the node controller should react
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:405
------------------------------
• [SLOW TEST:17.166 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with node tainted
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:458
      the vmi with tolerations should be scheduled
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:480
------------------------------
•
------------------------------
S [SKIPPING] [0.235 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with non default namespace
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:530
      should log libvirt start and stop lifecycle events of the domain
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-default [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

        Skip log query tests for JENKINS ci test environment
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:535
------------------------------
S [SKIPPING] [0.057 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with non default namespace
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:530
      should log libvirt start and stop lifecycle events of the domain
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-alternative [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

        Skip log query tests for JENKINS ci test environment
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:535
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.053 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    VirtualMachineInstance Emulation Mode
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:591
      should enable emulation in virt-launcher [BeforeEach]
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:603

      Software emulation is not enabled on this cluster
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:599
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.055 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    VirtualMachineInstance Emulation Mode
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:591
      should be reflected in domain XML [BeforeEach]
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:640

      Software emulation is not enabled on this cluster
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:599
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.049 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    VirtualMachineInstance Emulation Mode
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:591
      should request a TUN device but not KVM [BeforeEach]
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:684

      Software emulation is not enabled on this cluster
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:599
------------------------------
••••
------------------------------
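The three emulation-mode specs were skipped because software emulation is not enabled on this cluster, i.e. VMIs must run on real KVM. In deployments of this vintage that switch lives in the kubevirt-config ConfigMap; a hedged sketch of turning it on (the namespace matches the NAMESPACE=kube-system this run exports, and the debug.useEmulation key is an assumption about the deployed version):

    kubectl create configmap kubevirt-config -n kube-system \
        --from-literal debug.useEmulation=true

This is mainly useful on CI nodes without /dev/kvm, where domains then run on plain QEMU instead.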
• [SLOW TEST:17.705 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Delete a VirtualMachineInstance's Pod
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:836
    should result in the VirtualMachineInstance moving to a finalized state
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:837
------------------------------
• [SLOW TEST:35.347 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Delete a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:868
    with an active pod.
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:869
      should result in pod being terminated
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:870
------------------------------
• [SLOW TEST:20.882 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Delete a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:868
    with grace period greater than 0
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:893
      should run graceful shutdown
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:894
------------------------------
• [SLOW TEST:29.188 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Killed VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:945
    should be in Failed phase
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:946
------------------------------
• [SLOW TEST:23.035 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Killed VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:945
    should be left alone by virt-handler
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:973
------------------------------
••
------------------------------
• [SLOW TEST:15.119 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should update VirtualMachine once VMIs are up
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:195
------------------------------
• [SLOW TEST:8.257 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should remove VirtualMachineInstance once the VMI is marked for deletion
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:204
------------------------------
•
------------------------------
• [SLOW TEST:21.433 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should recreate VirtualMachineInstance if it gets deleted
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:245
------------------------------
• [SLOW TEST:60.608 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should recreate VirtualMachineInstance if the VirtualMachineInstance's pod gets deleted
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:265
------------------------------
• [SLOW TEST:44.651 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should stop VirtualMachineInstance if running set to false
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:325
------------------------------
• [SLOW TEST:146.990 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should start and stop VirtualMachineInstance multiple times
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:333
------------------------------
• [SLOW TEST:55.400 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should not update the VirtualMachineInstance spec if Running
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:346
------------------------------
• [SLOW TEST:183.899 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should survive guest shutdown, multiple times
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:387
------------------------------
VM testvmim6kpx was scheduled to start
• [SLOW TEST:17.356 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    Using virtctl interface
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435
      should start a VirtualMachineInstance once
      /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:436
------------------------------
VM testvmi2ggts was scheduled to stop
• [SLOW TEST:35.429 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    Using virtctl interface
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435
      should stop a VirtualMachineInstance once
      /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:467
------------------------------
•
------------------------------
• [SLOW TEST:7.023 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should scale
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    to five, to six and then to zero replicas
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
••
------------------------------
• [SLOW TEST:18.643 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should update readyReplicas once VMIs are up
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:157
------------------------------
••
------------------------------
• [SLOW TEST:5.464 seconds]
VirtualMachineInstanceReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should not scale when paused and scale when resume
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:223
------------------------------
•
------------------------------
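The "Using virtctl interface" specs drive the same start/stop flow a user would: virtctl toggles the VirtualMachine's running state and the controller creates or tears down the backing VMI, matching the "VM ... was scheduled to start/stop" lines above. A hedged usage sketch (the VM name is illustrative):

    virtctl start testvm    # sets the VM running; a VMI and its virt-launcher pod appear
    virtctl stop testvm     # clears it; the VMI is deleted and the pod terminates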
S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  should succeed to start a vmi [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133

  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1350
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  should succeed to stop a running vmi [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139

  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1350
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with winrm connection [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150
    should have correct UUID
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192

    Skip Windows tests that requires PVC disk-windows
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1350
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with winrm connection [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150
    should have pod IP
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208

    Skip Windows tests that requires PVC disk-windows
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1350
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with kubectl command [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
    should succeed to start a vmi
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242

    Skip Windows tests that requires PVC disk-windows
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1350
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with kubectl command [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
    should succeed to stop a vmi
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250

    Skip Windows tests that requires PVC disk-windows
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1350
------------------------------
•••••••••••
------------------------------
• [SLOW TEST:16.404 seconds]
VNC
/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:54
    with VNC connection
    /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:62
      should allow accessing the VNC device
      /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:64
------------------------------
••
Waiting for namespace kubevirt-test-default to be removed, this can take a while ...
Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ...

Summarizing 4 Failures:

[Fail] User Access With default kubevirt service accounts should verify permissions are correct for view, edit, and admin [It] given a vmi
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:127

[Fail] User Access With default kubevirt service accounts should verify permissions are correct for view, edit, and admin [It] given an vm
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:127

[Fail] User Access With default kubevirt service accounts should verify permissions are correct for view, edit, and admin [It] given a vmi preset
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:127

[Fail] User Access With default kubevirt service accounts should verify permissions are correct for view, edit, and admin [It] given a vmi replica set
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:127

Ran 129 of 145 Specs in 3966.969 seconds
FAIL! -- 125 Passed | 4 Failed | 0 Pending | 16 Skipped
--- FAIL: TestTests (3967.00s)
FAIL
make: *** [functest] Error 1
+ make cluster-down
./cluster/down.sh
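With 4 of the 129 executed specs failing, iterating on just the User Access suite is much faster than repeating the full ~66-minute run. Ginkgo can narrow the run by spec description; depending on the repository revision, something like the following should work (the FUNC_TEST_ARGS pass-through is an assumption about this Makefile, and the regex dot stands in for the space to sidestep shell word-splitting):

    FUNC_TEST_ARGS='--ginkgo.focus=User.Access' make functest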