+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev + WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev + [[ k8s-1.10.3-dev =~ openshift-.* ]] + export KUBEVIRT_PROVIDER=k8s-1.9.3 + KUBEVIRT_PROVIDER=k8s-1.9.3 + export KUBEVIRT_NUM_NODES=2 + KUBEVIRT_NUM_NODES=2 + export NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + export NAMESPACE=kube-system + NAMESPACE=kube-system + trap '{ make cluster-down; }' EXIT + make cluster-down ./cluster/down.sh + make cluster-up ./cluster/up.sh Downloading ....... Downloading ....... Downloading ....... 2018/06/05 14:35:55 Waiting for host: 192.168.66.101:22 2018/06/05 14:35:58 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/06/05 14:36:10 Connected to tcp://192.168.66.101:22 + cat + kubeadm init --config /etc/kubernetes/kubeadm.conf [init] Using Kubernetes version: v1.9.3 [init] Using Authorization modes: [Node RBAC] [preflight] Running pre-flight checks. [WARNING FileExisting-crictl]: crictl not found in system path [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated sa key and public key. [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf" [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests". [init] This might take a minute or longer if the control plane images have to be pulled. 
[apiclient] All control plane components are healthy after 28.506688 seconds [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [markmaster] Will mark node node01 as master by adding a label and a taint [markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master="" [bootstraptoken] Using token: abcdef.1234567890123456 [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: kube-dns [addons] Applied essential addon: kube-proxy Your Kubernetes master has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --discovery-token-ca-cert-hash sha256:7c71442ab3820897447e155cf2e8df6776161f4dbf07af1e2edc26d0bf8f3ef8 + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole "flannel" created clusterrolebinding "flannel" created serviceaccount "flannel" created configmap "kube-flannel-cfg" created daemonset "kube-flannel-ds" created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node "node01" untainted 2018/06/05 14:36:52 Waiting for host: 192.168.66.102:22 2018/06/05 14:36:55 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/06/05 14:37:07 Connected to tcp://192.168.66.102:22 + kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true [preflight] Running pre-flight checks. [discovery] Trying to connect to API Server "192.168.66.101:6443" [discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443" [WARNING FileExisting-crictl]: crictl not found in system path [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443" [discovery] Successfully established connection with API Server "192.168.66.101:6443" This node has joined the cluster: * Certificate signing request was sent to master and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the master to see this node join the cluster. Sending file modes: C0755 48668048 kubectl Sending file modes: C0600 5454 admin.conf Cluster "kubernetes" set. Cluster "kubernetes" set. 
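Note on the join step above: this throwaway CI cluster pre-shares the token abcdef.1234567890123456 and passes --discovery-token-unsafe-skip-ca-verification=true, so the worker skips CA pinning. For reference, the pinned join command the master printed can be reproduced with standard kubeadm/openssl invocations; a minimal sketch (not part of this job's scripts):

# Print a ready-to-use join command with a fresh bootstrap token (kubeadm >= 1.9):
kubeadm token create --print-join-command

# Or compute the --discovery-token-ca-cert-hash value by hand from the cluster CA:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* /sha256:/'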
++ kubectl get nodes --no-headers ++ cluster/kubectl.sh get nodes --no-headers ++ grep -v Ready + '[' -n '' ']' + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 50s v1.9.3 node02 Ready <none> 21s v1.9.3 + make cluster-sync ./cluster/build.sh Building ... sha256:8b33043feeb10b27572d8053bdc7179a9496c83ded2be2221ae64b435ce6e0af go version go1.10 linux/amd64 go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh sha256:8b33043feeb10b27572d8053bdc7179a9496c83ded2be2221ae64b435ce6e0af go version go1.10 linux/amd64 go version go1.10 linux/amd64 Compiling tests... compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 36.17 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> b0110ac54e8d Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> ccd49c19cf6c Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> 75dc311c8fb2 Step 5/8 : USER 1001 ---> Using cache ---> 698bd8b43680 Step 6/8 : COPY virt-controller /virt-controller ---> 7e5501ebdfb6 Removing intermediate container ead72229cc29 Step 7/8 : ENTRYPOINT /virt-controller ---> Running in 29ddbc69691b ---> 355622f80d07 Removing intermediate container 29ddbc69691b Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev1" '' "virt-controller" '' ---> Running in 98218a6ebf88 ---> 7c23f21cc6b1 Removing intermediate container 98218a6ebf88 Successfully built 7c23f21cc6b1 Sending build context to Docker daemon 38.13 MB Step 1/14 : FROM kubevirt/libvirt:3.7.0 ---> 60c80c8f7523 Step 2/14 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0a530d8cad4e Step 3/14 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> d9efcdeb20bf Step 4/14 : COPY sock-connector /sock-connector ---> Using cache ---> 929dfaf3db96 Step 5/14 : COPY sh.sh /sh.sh ---> Using cache ---> d5002fc25dc4 Step 6/14 : COPY virt-launcher /virt-launcher ---> f589a529cced Removing intermediate container ab7257d68d32 Step 7/14 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> 05f26c1f49fc Removing intermediate container 92169b9aff7e Step 8/14 : RUN chmod 0640 /etc/sudoers.d/kubevirt ---> Running in 135610f63b6f ---> 5050d91ead21 Removing intermediate container 135610f63b6f Step 9/14 : RUN rm -f /libvirtd.sh ---> Running in b47e8f33461d ---> 375a6f6ba0ed Removing intermediate container b47e8f33461d Step 10/14 : COPY libvirtd.sh /libvirtd.sh ---> 78e8770ce0c7 Removing intermediate container 7e821908c2eb Step 11/14 : RUN chmod a+x /libvirtd.sh ---> Running in 89e53dd05a12 ---> e3490c254619 Removing intermediate container 89e53dd05a12 Step 12/14 : COPY entrypoint.sh /entrypoint.sh ---> 62fb73ce7306 Removing intermediate container 5b81a747d0db Step 13/14 : ENTRYPOINT /entrypoint.sh ---> Running in 72ddd0d874c6 ---> b55ecd54cdc5 Removing intermediate container 72ddd0d874c6 Step 14/14 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev1" '' "virt-launcher" '' ---> Running in 21befb3c1b90 ---> a780a524e537 Removing intermediate container 21befb3c1b90 Successfully built a780a524e537 Sending build context to Docker
daemon 36.72 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> b0110ac54e8d Step 3/5 : COPY virt-handler /virt-handler ---> d81590049e05 Removing intermediate container fb80628b942e Step 4/5 : ENTRYPOINT /virt-handler ---> Running in 9ac35c069f12 ---> 9d35ebcd7cea Removing intermediate container 9ac35c069f12 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev1" '' "virt-handler" '' ---> Running in 25772832404a ---> eb5d92453ca8 Removing intermediate container 25772832404a Successfully built eb5d92453ca8 Sending build context to Docker daemon 36.89 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> b0110ac54e8d Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 4bf1b014ced0 Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> 2291af9afcbb Step 5/8 : USER 1001 ---> Using cache ---> 94008214adaa Step 6/8 : COPY virt-api /virt-api ---> 88f3d9cc6282 Removing intermediate container 00b3473258c3 Step 7/8 : ENTRYPOINT /virt-api ---> Running in 2fe55ff69532 ---> 20eed0b77ad6 Removing intermediate container 2fe55ff69532 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev1" '' "virt-api" '' ---> Running in c266205c4380 ---> d58123bb0076 Removing intermediate container c266205c4380 Successfully built d58123bb0076 Sending build context to Docker daemon 6.656 kB Step 1/10 : FROM fedora:27 ---> 9110ae7f579f Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> b0110ac54e8d Step 3/10 : ENV container docker ---> Using cache ---> 09a0eb53efc4 Step 4/10 : RUN dnf -y install scsi-target-utils bzip2 e2fsprogs ---> Using cache ---> ec7fa13faf0e Step 5/10 : RUN mkdir -p /images ---> Using cache ---> eda3b84d1450 Step 6/10 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/1-alpine.img ---> Using cache ---> c87921a2a927 Step 7/10 : ADD run-tgt.sh / ---> Using cache ---> 3fe7ebf8a604 Step 8/10 : EXPOSE 3260 ---> Using cache ---> 53d30acf30c6 Step 9/10 : CMD /run-tgt.sh ---> Using cache ---> cf16e6eb9867 Step 10/10 : LABEL "iscsi-demo-target-tgtd" '' "kubevirt-functional-tests-k8s-1.10.3-dev1" '' ---> Running in aacd8a5ed85a ---> 0c01af9a455a Removing intermediate container aacd8a5ed85a Successfully built 0c01af9a455a Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> b0110ac54e8d Step 3/5 : ENV container docker ---> Using cache ---> 09a0eb53efc4 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> b56a90c7fd64 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev1" '' "vm-killer" '' ---> Running in 859b29ec53a3 ---> c370d1aac627 Removing intermediate container 859b29ec53a3 Successfully built c370d1aac627 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> bcec0ae8107e Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> 313c78fd9693 Step 3/7 : ENV container docker ---> Using cache ---> a0801069f3af Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> 003548c7ad90 Step 5/7 : ADD entry-point.sh / ---> Using cache ---> cfb88c6b6cb0 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 791630a9414e Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev1" '' "registry-disk-v1alpha" '' ---> 
Running in 86563a569a21 ---> 050f9523d521 Removing intermediate container 86563a569a21 Successfully built 050f9523d521 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:36108/kubevirt/registry-disk-v1alpha:devel ---> 050f9523d521 Step 2/4 : MAINTAINER "David Vossel" \ ---> Running in 4eb1e95a0f1e ---> 8c51755b0516 Removing intermediate container 4eb1e95a0f1e Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Running in 03a66964721d [curl progress meter elided: 12.1M transferred in 0:00:02, avg 5246k/s] ---> 54171f561d5f Removing intermediate container 03a66964721d Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-dev1" '' ---> Running in 738664089bb7 ---> 2a4de19069af Removing intermediate container 738664089bb7 Successfully built 2a4de19069af Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:36108/kubevirt/registry-disk-v1alpha:devel ---> 050f9523d521 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Running in e10d5e3b624f ---> a862626adbbc Removing intermediate container e10d5e3b624f Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Running in 7bc0ae0a7208 [curl progress meter elided: 221M transferred in 0:03:24, avg 1109k/s] ---> 8be1c92c94d9 Removing intermediate container 7bc0ae0a7208 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-dev1" '' ---> Running in 103f9cc2ffc6 ---> 30d8f16d8640 Removing intermediate container 103f9cc2ffc6 Successfully built 30d8f16d8640 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:36108/kubevirt/registry-disk-v1alpha:devel ---> 050f9523d521 Step 2/4 :
MAINTAINER "The KubeVirt Project" ---> Using cache ---> a862626adbbc Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Running in 0e4debee807f  % Total % Received % Xferd Average Speed Time Time Time Current  Dload Upload Total Spent Left Speed   0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 37.0M 0 27600 0 0 39884 0 0:16:12 --:--:-- 0:16:12 39884 16 37.0M 16 6416k 0 0 3801k 0 0:00:09 0:00:01 0:00:08 3799k 75 37.0M 75 27.9M 0 0 10.3M 0 0:00:03 0:00:02 0:00:01 10.3M 100 37.0M 100 37.0M 0 0 11.6M 0 0:00:03 0:00:03 --:--:-- 11.6M  ---> 90a8076006a9 Removing intermediate container 0e4debee807f Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-dev1" '' ---> Running in ad4945c1eb18 ---> fa5552714f41 Removing intermediate container ad4945c1eb18 Successfully built fa5552714f41 Sending build context to Docker daemon 34 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> b0110ac54e8d Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> 1394197f657f Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 6104617ebba9 Step 5/8 : USER 1001 ---> Using cache ---> fa0a18131006 Step 6/8 : COPY subresource-access-test /subresource-access-test ---> abda6578e327 Removing intermediate container 68e709bf6fa1 Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in edf6ddd71083 ---> 23f21f115ca4 Removing intermediate container edf6ddd71083 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev1" '' "subresource-access-test" '' ---> Running in 12604d637230 ---> ee62cd3a2925 Removing intermediate container 12604d637230 Successfully built ee62cd3a2925 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:27 ---> 9110ae7f579f Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> b0110ac54e8d Step 3/9 : ENV container docker ---> Using cache ---> 09a0eb53efc4 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> d5c23db469cf Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> 8e66c78dedc0 Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> c95a7af438cb Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> d1bbb7a9fd1c Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 66ae1105f468 Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev1" '' "winrmcli" '' ---> Running in d745f8a0b6f5 ---> 118815ac4c10 Removing intermediate container d745f8a0b6f5 Successfully built 118815ac4c10 hack/build-docker.sh push The push refers to a repository [localhost:36108/kubevirt/virt-controller] 9fb92f518871: Preparing e8218823dc22: Preparing 39bae602f753: Preparing e8218823dc22: Pushed 9fb92f518871: Pushed 39bae602f753: Pushed devel: digest: sha256:1bbbf996c8e9a73ed96ae462e65946926713a87380e37a556858a67692f3d0c2 size: 948 The push refers to a repository [localhost:36108/kubevirt/virt-launcher] 12ac506a11cd: Preparing f2a620ce74d8: Preparing f2a620ce74d8: Preparing d77b5f9353ab: Preparing cd1bebca6885: Preparing 6d769415a425: Preparing 9c4e9b3d7385: Preparing 54be34ae881b: Preparing 0a2c53ad21cd: Preparing 9c4e9b3d7385: Waiting d232139a2650: Preparing 530cc55618cd: Preparing 34fa414dfdf6: Preparing a1359dc556dd: Preparing d232139a2650: Waiting 
490c7c373332: Preparing 4b440db36f72: Preparing 39bae602f753: Preparing 490c7c373332: Waiting 4b440db36f72: Waiting 39bae602f753: Waiting 12ac506a11cd: Pushed d77b5f9353ab: Pushed cd1bebca6885: Pushed f2a620ce74d8: Pushed 6d769415a425: Pushed 54be34ae881b: Pushed 0a2c53ad21cd: Pushed 530cc55618cd: Pushed 34fa414dfdf6: Pushed a1359dc556dd: Pushed 490c7c373332: Pushed 39bae602f753: Mounted from kubevirt/virt-controller d232139a2650: Pushed 9c4e9b3d7385: Pushed 4b440db36f72: Pushed devel: digest: sha256:6aa711f6641c15888c2d722eff0009637883a568095a17f2cbdcad3dec6d2db3 size: 3653 The push refers to a repository [localhost:36108/kubevirt/virt-handler] 530c129b5c3f: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-launcher 530c129b5c3f: Pushed devel: digest: sha256:938ee4cee151a9ce96df838d281935c41307c43a428ba00c18652acb6c90d191 size: 740 The push refers to a repository [localhost:36108/kubevirt/virt-api] a92c64ca7de8: Preparing fb95d38c6fbd: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-handler fb95d38c6fbd: Pushed a92c64ca7de8: Pushed devel: digest: sha256:83a85a507d70e673609b04938ede57f00e7d49d8d1c657d7aad885033202babe size: 948 The push refers to a repository [localhost:36108/kubevirt/iscsi-demo-target-tgtd] 7d5ffdb95845: Preparing 67dca7d6b2ae: Preparing 8f131c2efff0: Preparing 8ae9c002a00c: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-api 8f131c2efff0: Pushed 7d5ffdb95845: Pushed 67dca7d6b2ae: Pushed 8ae9c002a00c: Pushed devel: digest: sha256:964d54517cc2ce6ab77b7f1d18cdbf667164b3b979d909398ddaa663e31aee75 size: 1368 The push refers to a repository [localhost:36108/kubevirt/vm-killer] 54dfdb30c356: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/iscsi-demo-target-tgtd 54dfdb30c356: Pushed devel: digest: sha256:c82a4b09507ea1a8c8d6914bcd4924d17e1c0c97b33e11087772b352fe8d86a5 size: 740 The push refers to a repository [localhost:36108/kubevirt/registry-disk-v1alpha] c6915653c205: Preparing fa58fb7b9535: Preparing 6709b2da72b8: Preparing c6915653c205: Pushed fa58fb7b9535: Pushed 6709b2da72b8: Pushed devel: digest: sha256:b46f57e06d415fa24ca4c986787f4e8ece4818a851f062ef9913887739e4e26e size: 948 The push refers to a repository [localhost:36108/kubevirt/cirros-registry-disk-demo] 2c157e182d42: Preparing c6915653c205: Preparing fa58fb7b9535: Preparing 6709b2da72b8: Preparing fa58fb7b9535: Mounted from kubevirt/registry-disk-v1alpha c6915653c205: Mounted from kubevirt/registry-disk-v1alpha 6709b2da72b8: Mounted from kubevirt/registry-disk-v1alpha 2c157e182d42: Pushed devel: digest: sha256:c095c7c78b1b2a0f3dc300877783c9b9bd10b5709a45e694b5b63976d253f8fd size: 1160 The push refers to a repository [localhost:36108/kubevirt/fedora-cloud-registry-disk-demo] 436fcf4311ec: Preparing c6915653c205: Preparing fa58fb7b9535: Preparing 6709b2da72b8: Preparing c6915653c205: Mounted from kubevirt/cirros-registry-disk-demo fa58fb7b9535: Mounted from kubevirt/cirros-registry-disk-demo 6709b2da72b8: Mounted from kubevirt/cirros-registry-disk-demo 436fcf4311ec: Pushed devel: digest: sha256:37a3cdbc37a08decdc6c95b72b5fb367c146888a48cf2022f4b9523f5a2d362b size: 1161 The push refers to a repository [localhost:36108/kubevirt/alpine-registry-disk-demo] 34bcd3500bf5: Preparing c6915653c205: Preparing fa58fb7b9535: Preparing 6709b2da72b8: Preparing fa58fb7b9535: Mounted from kubevirt/fedora-cloud-registry-disk-demo 6709b2da72b8: Mounted from kubevirt/fedora-cloud-registry-disk-demo c6915653c205: Mounted from 
kubevirt/fedora-cloud-registry-disk-demo 34bcd3500bf5: Pushed devel: digest: sha256:aa922b69258026cb6c59fa2435765cb7806a5907f66944bcd5268c97cead6b7e size: 1160 The push refers to a repository [localhost:36108/kubevirt/subresource-access-test] fa54861ff93e: Preparing 7a13304598ad: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/vm-killer 7a13304598ad: Pushed fa54861ff93e: Pushed devel: digest: sha256:a279cb6e8e3568a691586bdb518e40ef6953f2cd4d446f2eb3aefa174c2b1fb0 size: 948 The push refers to a repository [localhost:36108/kubevirt/winrmcli] e373097a073f: Preparing c4e78913515e: Preparing 25a5cce12702: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/subresource-access-test e373097a073f: Pushed 25a5cce12702: Pushed c4e78913515e: Pushed devel: digest: sha256:75ae164da2b6ff922a050e50efc91de0c08cc4ed8fd655fd99d83f948010ea1d size: 1165 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_NUM_NODES=2 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-dev ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-dev1 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-dev1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.5.1-alpha.2-50-gce2d63c ++ KUBEVIRT_VERSION=v0.5.1-alpha.2-50-gce2d63c + source cluster/k8s-1.9.3/provider.sh ++ set -e ++ image=k8s-1.9.3@sha256:265ccfeeb0352a87141d4f0f041fa8cc6409b82fe3456622f4c549ec1bfe65c0 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ source hack/config-default.sh source hack/config-k8s-1.9.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api 
images/iscsi-demo-target-tgtd images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.9.3.sh ++ source hack/config-provider-k8s-1.9.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubectl +++ docker_prefix=localhost:36108/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... + cluster/kubectl.sh get vms --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p the server doesn't have a resource type "vms" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + 
KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete pods -l kubevirt.io No resources found + _kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ wc -l ++ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ cluster/k8s-1.9.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export 
KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export 
KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ cluster/k8s-1.9.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_NUM_NODES=2 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-dev ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-dev1 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-dev1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.5.1-alpha.2-50-gce2d63c ++ KUBEVIRT_VERSION=v0.5.1-alpha.2-50-gce2d63c + source cluster/k8s-1.9.3/provider.sh ++ set -e ++ image=k8s-1.9.3@sha256:265ccfeeb0352a87141d4f0f041fa8cc6409b82fe3456622f4c549ec1bfe65c0 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ source hack/config-default.sh source hack/config-k8s-1.9.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/iscsi-demo-target-tgtd images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.9.3.sh ++ source hack/config-provider-k8s-1.9.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubeconfig +++ 
kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubectl +++ docker_prefix=localhost:36108/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... + [[ -z k8s-1.10.3-dev ]] + [[ k8s-1.10.3-dev =~ .*-dev ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/dev -R + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/dev -R customresourcedefinition "offlinevirtualmachines.kubevirt.io" created serviceaccount "kubevirt-apiserver" created clusterrolebinding "kubevirt-apiserver" created clusterrolebinding "kubevirt-apiserver-auth-delegator" created rolebinding "kubevirt-apiserver" created role "kubevirt-apiserver" created clusterrole "kubevirt-apiserver" created clusterrole "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding "kubevirt-controller" created clusterrolebinding "kubevirt-controller-cluster-admin" created clusterrolebinding "kubevirt-privileged-cluster-admin" created clusterrole "kubevirt.io:admin" created clusterrole "kubevirt.io:edit" created clusterrole "kubevirt.io:view" created clusterrole "kubevirt.io:default" created clusterrolebinding "kubevirt.io:default" created customresourcedefinition "virtualmachinereplicasets.kubevirt.io" created service "virt-api" created deployment "virt-api" created service "virt-controller" created deployment "virt-controller" created daemonset "virt-handler" created customresourcedefinition "virtualmachines.kubevirt.io" created customresourcedefinition "virtualmachinepresets.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "iscsi-disk-alpine" created persistentvolumeclaim "disk-custom" created persistentvolume "iscsi-disk-custom" created daemonset "iscsi-demo-target-tgtd" created serviceaccount "kubevirt-testing" created clusterrolebinding "kubevirt-testing-cluster-admin" created + '[' k8s-1.9.3 = vagrant-openshift ']' + [[ k8s-1.9.3 =~ os-3.9.0.* ]] + echo Done Done ++ kubectl get pods -n kube-system --no-headers ++ grep -v Running ++ cluster/kubectl.sh get pods -n kube-system --no-headers + '[' -n 'virt-api-775dfb9789-fg95n 0/1 ContainerCreating 0 2s virt-controller-5f7c946cc4-jw7gn 0/1 ContainerCreating 0 2s virt-controller-5f7c946cc4-rtszq 0/1 ContainerCreating 0 2s virt-handler-2r52w 0/1 ContainerCreating 0 2s virt-handler-xff6c 0/1 ContainerCreating 0 2s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... 
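The repeated status checks that follow are produced by a readiness-polling loop in the deploy flow; roughly reconstructed from the trace (the real script may differ in details):

# Poll every 10s until no kube-system pod reports a non-Running status.
while [ -n "$(cluster/kubectl.sh get pods -n kube-system --no-headers | grep -v Running)" ]; do
  echo 'Waiting for kubevirt pods to enter the Running state ...'
  cluster/kubectl.sh get pods -n kube-system --no-headers | grep -v Running
  sleep 10
done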
+ kubectl get pods -n kube-system --no-headers + grep -v Running + cluster/kubectl.sh get pods -n kube-system --no-headers iscsi-demo-target-tgtd-746j8 0/1 ContainerCreating 0 1s iscsi-demo-target-tgtd-zh984 0/1 ContainerCreating 0 1s virt-api-775dfb9789-fg95n 0/1 ContainerCreating 0 3s virt-controller-5f7c946cc4-jw7gn 0/1 ContainerCreating 0 3s virt-controller-5f7c946cc4-rtszq 0/1 ContainerCreating 0 3s virt-handler-2r52w 0/1 ContainerCreating 0 3s virt-handler-xff6c 0/1 ContainerCreating 0 3s + sleep 10 ++ kubectl get pods -n kube-system --no-headers ++ grep -v Running ++ cluster/kubectl.sh get pods -n kube-system --no-headers + '[' -n 'iscsi-demo-target-tgtd-746j8 0/1 ContainerCreating 0 12s iscsi-demo-target-tgtd-zh984 0/1 ContainerCreating 0 12s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + grep -v Running + cluster/kubectl.sh get pods -n kube-system --no-headers iscsi-demo-target-tgtd-746j8 0/1 ContainerCreating 0 13s iscsi-demo-target-tgtd-zh984 0/1 ContainerCreating 0 13s + sleep 10 ++ kubectl get pods -n kube-system --no-headers ++ grep -v Running ++ cluster/kubectl.sh get pods -n kube-system --no-headers + '[' -n 'iscsi-demo-target-tgtd-746j8 0/1 ContainerCreating 0 27s iscsi-demo-target-tgtd-zh984 0/1 ContainerCreating 0 27s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + grep -v Running + cluster/kubectl.sh get pods -n kube-system --no-headers iscsi-demo-target-tgtd-746j8 0/1 ContainerCreating 0 31s iscsi-demo-target-tgtd-zh984 0/1 ContainerCreating 0 31s + sleep 10 ++ kubectl get pods -n kube-system --no-headers ++ grep -v Running ++ cluster/kubectl.sh get pods -n kube-system --no-headers + '[' -n '' ']' ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-746j8 false iscsi-demo-target-tgtd-zh984' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... + kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + awk '!/virt-controller/ && /false/' + cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers false iscsi-demo-target-tgtd-746j8 false iscsi-demo-target-tgtd-zh984 + sleep 10 ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers + '[' -n 'false iscsi-demo-target-tgtd-746j8 false iscsi-demo-target-tgtd-zh984' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... 
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-746j8
false iscsi-demo-target-tgtd-zh984
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-746j8
false iscsi-demo-target-tgtd-zh984' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
false iscsi-demo-target-tgtd-746j8
false iscsi-demo-target-tgtd-zh984
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-746j8
false iscsi-demo-target-tgtd-zh984' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-746j8
false iscsi-demo-target-tgtd-zh984
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-746j8
false iscsi-demo-target-tgtd-zh984' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
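A note on reading this trace: the leading '+' markers come from bash xtrace, with one extra '+' per level of command substitution, and kubectl is evidently a wrapper that delegates to cluster/kubectl.sh, which is why each query is traced twice. A tiny illustration of the depth markers:

set -x
pods=$(kubectl get pods -n kube-system --no-headers)  # inner command traced with '++'
echo "$pods"                                          # top-level command traced with '+'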
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-746j8
false iscsi-demo-target-tgtd-zh984
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n '' ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '/virt-controller/ && /true/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ wc -l
+ '[' 2 -lt 1 ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
etcd-node01                        1/1       Running   0          19m
iscsi-demo-target-tgtd-746j8       1/1       Running   1          1m
iscsi-demo-target-tgtd-zh984       1/1       Running   1          1m
kube-apiserver-node01              1/1       Running   0          18m
kube-controller-manager-node01     1/1       Running   0          18m
kube-dns-6f4fd4bdf-g9zcz           3/3       Running   0          19m
kube-flannel-ds-26spv              1/1       Running   1          19m
kube-flannel-ds-7mclq              1/1       Running   1          19m
kube-proxy-jsngt                   1/1       Running   0          19m
kube-proxy-x879n                   1/1       Running   0          19m
kube-scheduler-node01              1/1       Running   0          18m
virt-api-775dfb9789-fg95n          1/1       Running   0          1m
virt-controller-5f7c946cc4-jw7gn   1/1       Running   0          1m
virt-controller-5f7c946cc4-rtszq   1/1       Running   0          1m
virt-handler-2r52w                 1/1       Running   0          1m
virt-handler-xff6c                 1/1       Running   0          1m
+ kubectl version
+ cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/junit.xml'
+ [[ -d /home/nfs/images/windows2016 ]]
+ [[ k8s-1.10.3-dev == windows ]]
+ FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/junit.xml'
+ make functest
hack/dockerized "hack/build-func-tests.sh"
sha256:8b33043feeb10b27572d8053bdc7179a9496c83ded2be2221ae64b435ce6e0af
go version go1.10 linux/amd64
go version go1.10 linux/amd64
Compiling tests...
compiled tests.test
hack/functests.sh
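Note the separate gate for virt-controller in the trace above: it is excluded from the readiness loop because it runs more than one replica (KubeVirt's controller uses leader election; see the LeaderElection specs later in this run), so the script only requires that at least one replica reports ready, which is the '[' 2 -lt 1 ']' check. As a sketch (the variable name is made up here):

# Require at least one ready virt-controller replica before the tests start.
ready=$(kubectl get pods -n kube-system \
    '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' \
    --no-headers | awk '/virt-controller/ && /true/' | wc -l)
if [ "$ready" -lt 1 ]; then
    echo 'no ready virt-controller replica found' >&2
    exit 1
fi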
Running Suite: Tests Suite
==========================
Random Seed: 1528210658
Will run 109 of 109 specs

•
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  should succeed to start a vm [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:132
  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1268
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.007 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  should succeed to stop a running vm [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:138
  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1268
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with winrm connection [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:149
    should have correct UUID
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:191
    Skip Windows tests that requires PVC disk-windows
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1268
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with winrm connection [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:149
    should have pod IP
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:207
    Skip Windows tests that requires PVC disk-windows
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1268
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with kubectl command [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:225
    should succeed to start a vm
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:241
    Skip Windows tests that requires PVC disk-windows
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1268
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
  with kubectl command [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:225
    should succeed to stop a vm
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:249
    Skip Windows tests that requires PVC disk-windows
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1268
------------------------------
•
------------------------------
• [SLOW TEST:52.539 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:42
  Starting a VM
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:123
      should be successfully started
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with Disk PVC
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:32.162 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:42
  Starting a VM
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:123
      should be successfully started
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with CDRom PVC
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:101.592 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:42
  Starting a VM
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:123
      should be successfully started and stopped multiple times
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with Disk PVC
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:119.709 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:42
  Starting a VM
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:123
      should be successfully started and stopped multiple times
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with CDRom PVC
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:41.182 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:42
  Starting a VM
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122
    With an emptyDisk defined
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:182
      should create a writeable emptyDisk with the right capacity
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:184
------------------------------
• [SLOW TEST:44.287 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:42
  Starting a VM
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122
    With ephemeral alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:232
      should be successfully started
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:234
------------------------------
• [SLOW TEST:75.476 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:42
  Starting a VM
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122
    With ephemeral alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:232
      should not persist data
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:254
------------------------------
• [SLOW TEST:129.897 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:42
  Starting a VM
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:122
    With VM with two PVCs
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:314
      should start vm multiple times
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:326
------------------------------
• [SLOW TEST:38.088 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46
  A new VM
  /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:80
    with cloudInitNoCloud userDataBase64 source
    /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:81
      should have cloud-init data
      /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:82
------------------------------
• [SLOW TEST:90.644 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46
  A new VM
  /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:80
    with cloudInitNoCloud userDataBase64 source
    /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:81
      with injected ssh-key
      /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:92
        should have ssh-key under authorized keys
        /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:93
------------------------------
• [SLOW TEST:42.652 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46
  A new VM
  /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:80
    with cloudInitNoCloud userData source
    /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:118
      should process provided cloud-init data
      /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:119
------------------------------
• [SLOW TEST:42.459 seconds]
CloudInit UserData
/root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:46
  A new VM
  /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:80
    should take user-data from k8s secret
    /root/go/src/kubevirt.io/kubevirt/tests/vm_userdata_test.go:161
------------------------------
• [SLOW TEST:44.684 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:47
  VirtualMachine attached to the pod network
  /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:143
    should be able to reach
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      the Inbound VM
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
•••••••
------------------------------
• [SLOW TEST:5.201 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:47
  VirtualMachine attached to the pod network
  /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:143
    with a subdomain and a headless service given
    /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:302
      should be able to reach the vm via its unique fully qualified domain name
      /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:325
------------------------------
• [SLOW TEST:33.058 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:47
  VirtualMachine with custom interface model
  /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:357
    should expose the right device type to the guest
    /root/go/src/kubevirt.io/kubevirt/tests/vm_networking_test.go:358
------------------------------
•
------------------------------
• [SLOW TEST:6.175 seconds]
VirtualMachineReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should scale
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    to three, to two and then to zero replicas
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:11.151 seconds]
VirtualMachineReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should scale
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    to five, to six and then to zero replicas
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
••
------------------------------
• [SLOW TEST:16.698 seconds]
VirtualMachineReplicaSet
/root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46
  should update readyReplicas once VMs are up
  /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:157
------------------------------
•••
------------------------------
• [SLOW TEST:18.024 seconds]
VNC
/root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:37
  A new VM
  /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:48
    with VNC connection
    /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:49
      should allow accessing the VNC device
      /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:50
------------------------------
••
------------------------------
• [SLOW TEST:20.476 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
  A valid OfflineVirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
    should update OfflineVirtualMachine once VMs are up
    /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:195
------------------------------
• [SLOW TEST:10.679 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
  A valid OfflineVirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
    should remove VM once the OVM is marked for deletion
    /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:204
------------------------------
•
------------------------------
• [SLOW TEST:47.912 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
  A valid OfflineVirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
    should recreate VM if it gets deleted
    /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:245
------------------------------
• [SLOW TEST:76.479 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
  A valid OfflineVirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
    should recreate VM if the VM's pod gets deleted
    /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:265
------------------------------
• [SLOW TEST:21.792 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
  A valid OfflineVirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
    should stop VM if running set to false
    /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:325
------------------------------
• [SLOW TEST:213.742 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
  A valid OfflineVirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
    should start and stop VM multiple times
    /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:333
------------------------------
• [SLOW TEST:67.163 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
  A valid OfflineVirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
    should not update the VM spec if Running
    /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:346
------------------------------
• [SLOW TEST:220.675 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
  A valid OfflineVirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
    should survive guest shutdown, multiple times
    /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:387
------------------------------
• [SLOW TEST:18.923 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
  A valid OfflineVirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
    Using virtctl interface
    /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:435
      should start a VM once
      /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:436
------------------------------
• [SLOW TEST:53.943 seconds]
OfflineVirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:47
  A valid OfflineVirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:115
    Using virtctl interface
    /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:435
      should stop a VM once
      /root/go/src/kubevirt.io/kubevirt/tests/ovm_test.go:467
------------------------------
• [SLOW TEST:10.665 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify only admin role has access only to kubevirt-config
    /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:42
------------------------------
• [SLOW TEST:18.742 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vm
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:18.593 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given an ovm
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:18.672 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vm preset
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:18.655 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
  With default kubevirt service accounts
  /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
    should verify permissions are correct for view, edit, and admin
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
      given a vm replica set
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:43.320 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
  A new VM
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
      with a cirros image
      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
        should return that we are running cirros
        /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67
------------------------------
• [SLOW TEST:49.116 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
  A new VM
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
      with a fedora image
      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:76
        should return that we are running fedora
        /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:77
------------------------------
• [SLOW TEST:36.707 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
  A new VM
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
      should be able to reconnect to console multiple times
      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:86
------------------------------
• [SLOW TEST:39.022 seconds]
LeaderElection
/root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:43
  Start a VM
  /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:53
    when the controller pod is not running
    /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:54
      should success
      /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:55
------------------------------
• Failure [268.016 seconds]
Health Monitoring
/root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:37
  A VM with a watchdog device
  /root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:56
    should be shut down when the watchdog expires [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:57
    Expected error:
        : 250000000000
        expect: timer expired after 250 seconds
    not to have occurred
    /root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:80
------------------------------
STEP: Starting a VM
level=info timestamp=2018-06-05T15:30:51.532737Z pos=utils.go:231 component=tests msg="Created virtual machine pod virt-launcher-testvmq2gdw-ld89x"
level=info timestamp=2018-06-05T15:31:07.450631Z pos=utils.go:231 component=tests msg="Pod owner ship transferred to the node virt-launcher-testvmq2gdw-ld89x"
level=info timestamp=2018-06-05T15:31:08.930757Z pos=utils.go:231 component=tests msg="VM defined."
level=info timestamp=2018-06-05T15:31:08.972267Z pos=utils.go:231 component=tests msg="VM started."
STEP: Expecting the VM console
STEP: Killing the watchdog device
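The single failure in this run is an expect timeout: after the watchdog device is killed inside the guest, the test waits up to 250 seconds (the 250000000000 ns in the error) for the VM to shut itself down, and on this run it never did. A hand recheck of the same condition could look like this sketch (pod-name prefix taken from the log above; everything else is an assumption):

# Poll until the test VM's virt-launcher pod disappears, mirroring the
# 250-second window the failed expectation used.
deadline=$((SECONDS + 250))
while [ "$SECONDS" -lt "$deadline" ]; do
    if ! kubectl get pods --all-namespaces --no-headers | grep -q 'virt-launcher-testvmq2gdw'; then
        echo 'VM was shut down as expected'
        exit 0
    fi
    sleep 5
done
echo 'VM still running after watchdog expiry' >&2
exit 1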
•
------------------------------
• [SLOW TEST:17.650 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    should start it
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:65
------------------------------
• [SLOW TEST:16.794 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    should attach virt-launcher to it
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:71
------------------------------
••••
------------------------------
• [SLOW TEST:37.289 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    with boot order
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:159
      should be able to boot from selected disk
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        Alpine as first boot
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:22.303 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    with boot order
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:159
      should be able to boot from selected disk
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        Cirros as first boot
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:16.180 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    with user-data
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:186
      without k8s secret
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:187
        should retry starting the VM
        /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:188
------------------------------
• [SLOW TEST:17.631 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    with user-data
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:186
      without k8s secret
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:187
        should log warning and proceed once the secret is there
        /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:218
------------------------------
• [SLOW TEST:36.553 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    when virt-launcher crashes
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:263
      should be stopped and have Failed phase
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:264
------------------------------
• [SLOW TEST:24.107 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    when virt-handler crashes
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:286
      should recover and continue management
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:287
------------------------------
• [SLOW TEST:22.637 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    when virt-handler is responsive
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:316
      should indicate that a node is ready for vms
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:317
------------------------------
• [SLOW TEST:132.005 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    when virt-handler is not responsive
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:347
      the node controller should react
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:385
------------------------------
S [SKIPPING] [0.487 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    with non default namespace
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:438
      should log libvirt start and stop lifecycle events of the domain
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-default [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
        Skip log query tests for JENKINS ci test environment
        /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:443
------------------------------
S [SKIPPING] [0.206 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    with non default namespace
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:438
      should log libvirt start and stop lifecycle events of the domain
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-alternative [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
        Skip log query tests for JENKINS ci test environment
        /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:443
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.152 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    VM Emulation Mode
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:499
      should enable emulation in virt-launcher [BeforeEach]
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:519
      Software emulation is not enabled on this cluster
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:515
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.135 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Creating a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:59
    VM Emulation Mode
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:499
      should be reflected in domain XML [BeforeEach]
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:556
      Software emulation is not enabled on this cluster
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:515
------------------------------
•
------------------------------
• [SLOW TEST:18.202 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Delete a VM's Pod
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:615
    should result in the VM moving to a finalized state
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:616
------------------------------
• [SLOW TEST:20.936 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Delete a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:647
    with an active pod.
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:648
      should result in pod being terminated
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:649
------------------------------
• [SLOW TEST:23.638 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Delete a VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:647
    with grace period greater than 0
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:672
      should run graceful shutdown
      /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:673
------------------------------
• [SLOW TEST:32.369 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Killed VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:724
    should be in Failed phase
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:725
------------------------------
• [SLOW TEST:27.169 seconds]
Vmlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:45
  Killed VM
  /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:724
    should be left alone by virt-handler
    /root/go/src/kubevirt.io/kubevirt/tests/vmlifecycle_test.go:752
------------------------------
volumedisk0
compute
• [SLOW TEST:36.352 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:39
  VM definition
  /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:50
    with 3 CPU cores
    /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:51
      should report 3 cpu cores under guest OS
      /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:57
------------------------------
• [SLOW TEST:36.932 seconds]
Configurations
/root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:39
  New VM with all supported drives
  /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:110
    should have all the device nodes
    /root/go/src/kubevirt.io/kubevirt/tests/vm_configuration_test.go:133
------------------------------
•
------------------------------
• [SLOW TEST:5.930 seconds]
Subresource Api
/root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37
  Rbac Authorization
  /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48
    with correct permissions
    /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:51
      should be allowed to access subresource endpoint
      /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:52
------------------------------
• [SLOW TEST:5.033 seconds]
Subresource Api
/root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37
  Rbac Authorization
  /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48
    Without permissions
    /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:56
      should not be able to access subresource endpoint
      /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:57
------------------------------
•••••••••••••
------------------------------
• [SLOW TEST:98.368 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting and stopping the same VM
  /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90
    with ephemeral registry disk
    /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91
      should success multiple times
      /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92
------------------------------
• [SLOW TEST:17.738 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting a VM
  /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111
    with ephemeral registry disk
    /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112
      should not modify the spec on status update
      /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113
------------------------------
• [SLOW TEST:22.531 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
  Starting multiple VMs
  /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129
    with ephemeral registry disk
    /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130
      should success
      /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131
------------------------------
Waiting for namespace kubevirt-test-default to be removed, this can take a while ...
Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ...

Summarizing 1 Failure:

[Fail] Health Monitoring A VM with a watchdog device [It] should be shut down when the watchdog expires
/root/go/src/kubevirt.io/kubevirt/tests/vm_monitoring_test.go:80

Ran 99 of 109 Specs in 3009.334 seconds
FAIL! -- 98 Passed | 1 Failed | 0 Pending | 10 Skipped
--- FAIL: TestTests (3009.34s)
FAIL
make: *** [functest] Error 1
+ make cluster-down
./cluster/down.sh
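Ginkgo exits non-zero when any spec fails, go test reports FAIL, and make propagates that as Error 1 before the cluster teardown runs. The failing case can also be recovered afterwards from the JUnit report this run wrote; a sketch, assuming the usual JUnit schema:

# List failing testcases from the JUnit XML produced by the run
# (path comes from the --junit-output flag earlier in this log).
grep -B1 '<failure' /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/junit.xml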