+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release + WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release + [[ k8s-1.9.3-release =~ openshift-.* ]] + [[ k8s-1.9.3-release =~ .*-1.9.3-.* ]] + export KUBEVIRT_PROVIDER=k8s-1.9.3 + KUBEVIRT_PROVIDER=k8s-1.9.3 + export KUBEVIRT_NUM_NODES=2 + KUBEVIRT_NUM_NODES=2 + export NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + export NAMESPACE=kube-system + NAMESPACE=kube-system + trap '{ make cluster-down; }' EXIT + make cluster-down ./cluster/down.sh + make cluster-up ./cluster/up.sh Downloading ....... Downloading ....... Downloading ....... 2018/06/06 17:11:05 Waiting for host: 192.168.66.101:22 2018/06/06 17:11:08 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/06/06 17:11:16 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/06/06 17:11:21 Connected to tcp://192.168.66.101:22 + kubeadm init --config /etc/kubernetes/kubeadm.conf [init] Using Kubernetes version: v1.9.3 [init] Using Authorization modes: [Node RBAC] [preflight] Running pre-flight checks. [WARNING FileExisting-crictl]: crictl not found in system path [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated sa key and public key. [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf" [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests". [init] This might take a minute or longer if the control plane images have to be pulled. 
[apiclient] All control plane components are healthy after 30.509839 seconds [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [markmaster] Will mark node node01 as master by adding a label and a taint [markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master="" [bootstraptoken] Using token: abcdef.1234567890123456 [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: kube-dns [addons] Applied essential addon: kube-proxy Your Kubernetes master has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --discovery-token-ca-cert-hash sha256:db82a8fa6c7e437bf9d67bb5c893d5c556494fc8c064bfd7eb3fe1540f709c88 + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole "flannel" created clusterrolebinding "flannel" created serviceaccount "flannel" created configmap "kube-flannel-cfg" created daemonset "kube-flannel-ds" created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node "node01" untainted 2018/06/06 17:12:05 Waiting for host: 192.168.66.102:22 2018/06/06 17:12:08 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/06/06 17:12:20 Connected to tcp://192.168.66.102:22 + kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true [preflight] Running pre-flight checks. [discovery] Trying to connect to API Server "192.168.66.101:6443" [WARNING FileExisting-crictl]: crictl not found in system path [discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443" [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443" [discovery] Successfully established connection with API Server "192.168.66.101:6443" This node has joined the cluster: * Certificate signing request was sent to master and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the master to see this node join the cluster. Sending file modes: C0755 48668048 kubectl Sending file modes: C0600 5450 admin.conf Cluster "kubernetes" set. Cluster "kubernetes" set. 
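Note: the "Waiting for host ... / Problem with dial ..." lines above are the provisioning tool polling each node's SSH port until the VM answers; "no route to host" is expected while a node is still booting. The actual dialer lives inside the gocli provisioning container, but a minimal bash sketch of the same retry loop (host and port values are illustrative, not taken from the scripts) would be:

  # Poll a TCP port until it accepts connections, sleeping 5s between attempts.
  wait_for_host() {
    local host=$1 port=$2
    until timeout 1 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; do
      echo "Problem with dial: ${host}:${port} unreachable. Sleeping 5s"
      sleep 5
    done
    echo "Connected to tcp://${host}:${port}"
  }
  wait_for_host 192.168.66.101 22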
++ kubectl get nodes --no-headers ++ cluster/kubectl.sh get nodes --no-headers ++ grep -v Ready + '[' -n '' ']' + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 49s v1.9.3 node02 Ready <none> 20s v1.9.3 + make cluster-sync ./cluster/build.sh Building ... sha256:bfa4d0e4a1a6ecc8067d4e64dfd286bfa9c51c74b3def97ee58a46f3832bc088 go version go1.10 linux/amd64 go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh sha256:bfa4d0e4a1a6ecc8067d4e64dfd286bfa9c51c74b3def97ee58a46f3832bc088 go version go1.10 linux/amd64 go version go1.10 linux/amd64 Compiling tests... compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 36.14 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 65d6d48cdb35 Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> e1ade8663337 Step 5/8 : USER 1001 ---> Using cache ---> 2ce44d6f372a Step 6/8 : COPY virt-controller /virt-controller ---> Using cache ---> b12f41816fb6 Step 7/8 : ENTRYPOINT /virt-controller ---> Using cache ---> 2c481013bcab Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.9.3-release2" '' "virt-controller" '' ---> Running in 49cce2fa46c0 ---> 4d57b4866cfd Removing intermediate container 49cce2fa46c0 Successfully built 4d57b4866cfd Sending build context to Docker daemon 38.08 MB Step 1/14 : FROM kubevirt/libvirt:3.7.0 ---> 60c80c8f7523 Step 2/14 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> d4ddb23dff45 Step 3/14 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> 142a2ba860cf Step 4/14 : COPY sock-connector /sock-connector ---> Using cache ---> 02569da61faa Step 5/14 : COPY sh.sh /sh.sh ---> Using cache ---> 47d4a51575e2 Step 6/14 : COPY virt-launcher /virt-launcher ---> Using cache ---> 3d63564cfe7b Step 7/14 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> Using cache ---> e6043731b43d Step 8/14 : RUN chmod 0640 /etc/sudoers.d/kubevirt ---> Using cache ---> f80ede28db92 Step 9/14 : RUN rm -f /libvirtd.sh ---> Using cache ---> 8a6ec6c8ac3c Step 10/14 : COPY libvirtd.sh /libvirtd.sh ---> Using cache ---> 4d4f496fb7e7 Step 11/14 : RUN chmod a+x /libvirtd.sh ---> Using cache ---> a4e5b32e8a53 Step 12/14 : COPY entrypoint.sh /entrypoint.sh ---> Using cache ---> 3f44fe2c5bf0 Step 13/14 : ENTRYPOINT /entrypoint.sh ---> Using cache ---> 5814cd7c00dc Step 14/14 : LABEL "kubevirt-functional-tests-k8s-1.9.3-release2" '' "virt-launcher" '' ---> Running in cc78175d4b50 ---> c73c2b2c75b3 Removing intermediate container cc78175d4b50 Successfully built c73c2b2c75b3 Sending build context to Docker daemon 36.7 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/5 : COPY virt-handler /virt-handler ---> Using cache ---> 5000b8f08277 Step 4/5 : ENTRYPOINT /virt-handler ---> Using cache ---> 61eb0e05ee8b Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.9.3-release2" '' "virt-handler" '' ---> Running in d5fe17399fb8 ---> 238ce2ed45a9
Removing intermediate container d5fe17399fb8 Successfully built 238ce2ed45a9 Sending build context to Docker daemon 36.86 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 2eeb55f39191 Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> 56cea32a45d4 Step 5/8 : USER 1001 ---> Using cache ---> d121920c238b Step 6/8 : COPY virt-api /virt-api ---> Using cache ---> 39ba1cd8603c Step 7/8 : ENTRYPOINT /virt-api ---> Using cache ---> 72dc33ee4cbf Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.9.3-release2" '' "virt-api" '' ---> Running in b009f0896e92 ---> 4315ef0a4054 Removing intermediate container b009f0896e92 Successfully built 4315ef0a4054 Sending build context to Docker daemon 6.656 kB Step 1/10 : FROM fedora:27 ---> 9110ae7f579f Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/10 : ENV container docker ---> Using cache ---> 32cab959eac8 Step 4/10 : RUN dnf -y install scsi-target-utils bzip2 e2fsprogs ---> Using cache ---> c2339817cfe0 Step 5/10 : RUN mkdir -p /images ---> Using cache ---> a19645b68794 Step 6/10 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/1-alpine.img ---> Using cache ---> 3f0fa7f50785 Step 7/10 : ADD run-tgt.sh / ---> Using cache ---> 35ac6b299ab7 Step 8/10 : EXPOSE 3260 ---> Using cache ---> 259db1618b21 Step 9/10 : CMD /run-tgt.sh ---> Using cache ---> 4c9f18dec05a Step 10/10 : LABEL "iscsi-demo-target-tgtd" '' "kubevirt-functional-tests-k8s-1.9.3-release2" '' ---> Running in aa42c1ffe446 ---> e8717e6d608d Removing intermediate container aa42c1ffe446 Successfully built e8717e6d608d Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/5 : ENV container docker ---> Using cache ---> 32cab959eac8 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> 391fa00b27f9 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.9.3-release2" '' "vm-killer" '' ---> Running in 88aaac383ccd ---> 26a38dfce301 Removing intermediate container 88aaac383ccd Successfully built 26a38dfce301 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> bcec0ae8107e Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> 6696837acee7 Step 3/7 : ENV container docker ---> Using cache ---> 2dd2b1a02be6 Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> dd3c4950b5c8 Step 5/7 : ADD entry-point.sh / ---> Using cache ---> d221e0eb5770 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 6506e61a9f41 Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.9.3-release2" '' "registry-disk-v1alpha" '' ---> Running in 4a9f739f4985 ---> 8c07778c98eb Removing intermediate container 4a9f739f4985 Successfully built 8c07778c98eb Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32958/kubevirt/registry-disk-v1alpha:devel ---> 8c07778c98eb Step 2/4 : MAINTAINER "David Vossel" \ ---> Running in d683344b0d0b ---> 8a4ed46cbb2c Removing intermediate container d683344b0d0b Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Running in e546a8a9dc81
[curl progress output trimmed] 100 12.1M 100 12.1M 0 0 1407k 0 0:00:08 0:00:08 --:--:-- 2654k ---> 5fdf3978af1e Removing intermediate container e546a8a9dc81 Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.9.3-release2" '' ---> Running in 05274cab505f ---> 1ad29181fa30 Removing intermediate container 05274cab505f Successfully built 1ad29181fa30 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32958/kubevirt/registry-disk-v1alpha:devel ---> 8c07778c98eb Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Running in e429988f5c97 ---> 3fc5c497dd7c Removing intermediate container e429988f5c97 Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Running in 39d6a2c0f1a5
[curl progress output trimmed] 100 221M 100 221M 0 0 920k 0 0:04:06 0:04:06 --:--:-- 1227k ---> e4cdb82ba7cd Removing intermediate container 39d6a2c0f1a5 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.9.3-release2" '' ---> Running in a0c57fa5a20e ---> cfcf781e68ca Removing intermediate container a0c57fa5a20e Successfully built cfcf781e68ca Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:32958/kubevirt/registry-disk-v1alpha:devel ---> 8c07778c98eb Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3fc5c497dd7c Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Running in 183bf6e93b41 [curl progress output trimmed] 100 37.0M 100 37.0M 0 0 4785k 0 0:00:07 0:00:07 --:--:-- 7987k ---> 484df12cb42a Removing intermediate container 183bf6e93b41 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.9.3-release2" '' ---> Running in f7485b1ee787 ---> eb5918125f09 Removing intermediate container f7485b1ee787 Successfully built eb5918125f09 Sending build context to Docker daemon 33.97 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> 6e6e1b7931e0 Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 9d27e69a25f2 Step 5/8 : USER 1001 --->
Using cache ---> 1760a8e197af Step 6/8 : COPY subresource-access-test /subresource-access-test ---> Using cache ---> 3dd4ed2570e5 Step 7/8 : ENTRYPOINT /subresource-access-test ---> Using cache ---> 1da01dca348a Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.9.3-release2" '' "subresource-access-test" '' ---> Running in 8a510d64318a ---> fd6c6dcbdac3 Removing intermediate container 8a510d64318a Successfully built fd6c6dcbdac3 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:27 ---> 9110ae7f579f Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> dde0df1b6fe4 Step 3/9 : ENV container docker ---> Using cache ---> 32cab959eac8 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> 8e034c77f534 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> 28ec1d482013 Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> db78d0286f58 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 7ebe54e98be4 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> a3b04c1816f5 Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.9.3-release2" '' "winrmcli" '' ---> Running in f937be3210dc ---> 6acb6545bc45 Removing intermediate container f937be3210dc Successfully built 6acb6545bc45 hack/build-docker.sh push The push refers to a repository [localhost:32958/kubevirt/virt-controller] d560b0ca0a7c: Preparing 52069b1f5033: Preparing 39bae602f753: Preparing 52069b1f5033: Pushed d560b0ca0a7c: Pushed 39bae602f753: Pushed devel: digest: sha256:4530e8f297062e9a36473d4df05c933fee45147daa7232db84349861db653a84 size: 948 The push refers to a repository [localhost:32958/kubevirt/virt-launcher] 3ad107f998ec: Preparing 16664cb47942: Preparing 16664cb47942: Preparing 2d8b668de245: Preparing 44419a6db624: Preparing 36b9eaa9cf44: Preparing 670b9ea1bde7: Preparing 4ebc38848be0: Preparing b9fd8c21001d: Preparing 4d2f0529ab56: Preparing 530cc55618cd: Preparing 4ebc38848be0: Waiting 4d2f0529ab56: Waiting 530cc55618cd: Waiting 34fa414dfdf6: Preparing a1359dc556dd: Preparing 490c7c373332: Preparing 4b440db36f72: Preparing 39bae602f753: Preparing 490c7c373332: Waiting 4b440db36f72: Waiting 16664cb47942: Pushed 36b9eaa9cf44: Pushed 2d8b668de245: Pushed 44419a6db624: Pushed 3ad107f998ec: Pushed 4ebc38848be0: Pushed b9fd8c21001d: Pushed 530cc55618cd: Pushed 34fa414dfdf6: Pushed 670b9ea1bde7: Pushed 4d2f0529ab56: Pushed a1359dc556dd: Pushed 39bae602f753: Mounted from kubevirt/virt-controller 490c7c373332: Pushed 4b440db36f72: Pushed devel: digest: sha256:1aaea4f2500782b2762b44d82f84db3850cafc068ebf5507b471e06685ba0dc9 size: 3653 The push refers to a repository [localhost:32958/kubevirt/virt-handler] b450e179bd1c: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-launcher b450e179bd1c: Pushed devel: digest: sha256:5284d1e58e72526a7c86e484d87172210aca5aecf87f467add05a11892f67a73 size: 740 The push refers to a repository [localhost:32958/kubevirt/virt-api] 0968adf1f150: Preparing 86b4b25303b4: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-handler 86b4b25303b4: Pushed 0968adf1f150: Pushed devel: digest: sha256:ec8b297d7ac782fc74df9381afdaee93a345597073fbf5b97c3ea5e2c6278ee4 size: 948 The push refers to a repository [localhost:32958/kubevirt/iscsi-demo-target-tgtd] 80220be9fed7: Preparing 89fef61f2c06: Preparing 
b18a27986676: Preparing db8a56c06e31: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-api b18a27986676: Pushed 80220be9fed7: Pushed 89fef61f2c06: Pushed db8a56c06e31: Pushed devel: digest: sha256:d7c0f188a3d9ed45f0e54222f1aa5c0f8ca439c4617990317c7249c693904707 size: 1368 The push refers to a repository [localhost:32958/kubevirt/vm-killer] 040d3361950b: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/iscsi-demo-target-tgtd 040d3361950b: Pushed devel: digest: sha256:c1bc2de82119cf0675db1c4c3d1ec0fa06a180d62a5a3c14db6c6a047eef6e1d size: 740 The push refers to a repository [localhost:32958/kubevirt/registry-disk-v1alpha] 4cd98e29acca: Preparing 9beeb9a18439: Preparing 6709b2da72b8: Preparing 4cd98e29acca: Pushed 9beeb9a18439: Pushed 6709b2da72b8: Pushed devel: digest: sha256:2c0ef5605a6fff492867e1084b6de25ee3d453e92b4e68f49e8194446dd6d654 size: 948 The push refers to a repository [localhost:32958/kubevirt/cirros-registry-disk-demo] 268ad173bba4: Preparing 4cd98e29acca: Preparing 9beeb9a18439: Preparing 6709b2da72b8: Preparing 4cd98e29acca: Mounted from kubevirt/registry-disk-v1alpha 9beeb9a18439: Mounted from kubevirt/registry-disk-v1alpha 6709b2da72b8: Mounted from kubevirt/registry-disk-v1alpha 268ad173bba4: Pushed devel: digest: sha256:b21c357e4659a839422429f12591cf7a26f3616c56ade847b60529d503bdc572 size: 1160 The push refers to a repository [localhost:32958/kubevirt/fedora-cloud-registry-disk-demo] 83ada5b87bb3: Preparing 4cd98e29acca: Preparing 9beeb9a18439: Preparing 6709b2da72b8: Preparing 9beeb9a18439: Mounted from kubevirt/cirros-registry-disk-demo 6709b2da72b8: Mounted from kubevirt/cirros-registry-disk-demo 4cd98e29acca: Mounted from kubevirt/cirros-registry-disk-demo 83ada5b87bb3: Pushed devel: digest: sha256:80c2fee05c4263c800d83aaf10cf09329e078639e6c86df7a882131e24eeb6e1 size: 1161 The push refers to a repository [localhost:32958/kubevirt/alpine-registry-disk-demo] 6d310ae632fc: Preparing 4cd98e29acca: Preparing 9beeb9a18439: Preparing 6709b2da72b8: Preparing 6709b2da72b8: Mounted from kubevirt/fedora-cloud-registry-disk-demo 9beeb9a18439: Mounted from kubevirt/fedora-cloud-registry-disk-demo 4cd98e29acca: Mounted from kubevirt/fedora-cloud-registry-disk-demo 6d310ae632fc: Pushed devel: digest: sha256:e795269777d17a1d825a936432496031a971afd75da13449393ead1a795fec01 size: 1160 The push refers to a repository [localhost:32958/kubevirt/subresource-access-test] ed7df35068b2: Preparing 2c4f6b64d5e3: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/vm-killer 2c4f6b64d5e3: Pushed ed7df35068b2: Pushed devel: digest: sha256:5c5592fc8e2188c906cdfe7bda7c37820932cd4ce73bf141f427697aaa27c86c size: 948 The push refers to a repository [localhost:32958/kubevirt/winrmcli] 161ef5381259: Preparing 2bef46eb5bf3: Preparing ac5611d25ed9: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/subresource-access-test 161ef5381259: Pushed ac5611d25ed9: Pushed 2bef46eb5bf3: Pushed devel: digest: sha256:7af91fd07d38bcd9f84388344628ed5e780194920ab8abd1fc6dc30cb708122e size: 1165 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt ++ 
OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_NUM_NODES=2 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.9.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.9.3-release2 ++ job_prefix=kubevirt-functional-tests-k8s-1.9.3-release2 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.5.1-alpha.3-2-g75c8114 ++ KUBEVIRT_VERSION=v0.5.1-alpha.3-2-g75c8114 + source cluster/k8s-1.9.3/provider.sh ++ set -e ++ image=k8s-1.9.3@sha256:c63e25df491c42f8b473122ed6dd753148b48fc298231d5354b4b1c1c823b8a6 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ source hack/config-default.sh source hack/config-k8s-1.9.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/iscsi-demo-target-tgtd images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.9.3.sh ++ source hack/config-provider-k8s-1.9.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubectl +++ docker_prefix=localhost:32958/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... 
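Note: the cleanup loop that follows is mechanical: for each target namespace it deletes every resource type carrying a kubevirt.io label, and "No resources found" simply means the cluster was already clean. Condensed (resource-type list abbreviated; the trace also force-deletes leftover libvirt/virt-handler pods and checks for the offlinevirtualmachines CRD), it amounts to roughly:

  # Remove all KubeVirt-labelled resources from the default and kube-system namespaces.
  namespaces=(default kube-system)
  for ns in "${namespaces[@]}"; do
    for type in apiservices deployment rs services validatingwebhookconfiguration \
        secrets pv pvc ds customresourcedefinitions pods clusterrolebinding \
        rolebinding roles clusterroles serviceaccounts; do
      _kubectl -n "$ns" delete "$type" -l kubevirt.io
    done
  done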
+ cluster/kubectl.sh get vms --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p the server doesn't have a resource type "vms" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete pods -l kubevirt.io No resources found + _kubectl -n default delete clusterrolebinding 
-l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ cluster/k8s-1.9.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete pv -l 
kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ wc -l ++ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ cluster/k8s-1.9.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ 
MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_NUM_NODES=2 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.9.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.9.3-release2 ++ job_prefix=kubevirt-functional-tests-k8s-1.9.3-release2 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.5.1-alpha.3-2-g75c8114 ++ KUBEVIRT_VERSION=v0.5.1-alpha.3-2-g75c8114 + source cluster/k8s-1.9.3/provider.sh ++ set -e ++ image=k8s-1.9.3@sha256:c63e25df491c42f8b473122ed6dd753148b48fc298231d5354b4b1c1c823b8a6 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ source hack/config-default.sh source hack/config-k8s-1.9.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/iscsi-demo-target-tgtd images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.9.3.sh ++ source hack/config-provider-k8s-1.9.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubectl +++ docker_prefix=localhost:32958/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... 
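Note: deploy.sh then creates the release manifests, skipping anything whose path matches demo, and finally applies the testing fixtures recursively. The trace below corresponds to roughly this loop:

  # Apply every release manifest except the demo content, then the testing manifests.
  for manifest in "${MANIFESTS_OUT_DIR}"/release/*; do
    [[ $manifest =~ .*demo.* ]] && continue
    _kubectl create -f "$manifest"
  done
  _kubectl create -f "${MANIFESTS_OUT_DIR}/testing" -R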
+ [[ -z k8s-1.9.3-release ]] + [[ k8s-1.9.3-release =~ .*-dev ]] + [[ k8s-1.9.3-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole "kubevirt.io:admin" created clusterrole "kubevirt.io:edit" created clusterrole "kubevirt.io:view" created serviceaccount "kubevirt-apiserver" created clusterrolebinding "kubevirt-apiserver" created clusterrolebinding "kubevirt-apiserver-auth-delegator" created rolebinding "kubevirt-apiserver" created role "kubevirt-apiserver" created clusterrole "kubevirt-apiserver" created clusterrole "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding "kubevirt-controller" created clusterrolebinding "kubevirt-controller-cluster-admin" created clusterrolebinding "kubevirt-privileged-cluster-admin" created clusterrole "kubevirt.io:default" created clusterrolebinding "kubevirt.io:default" created service "virt-api" created deployment "virt-api" created deployment "virt-controller" created daemonset "virt-handler" created customresourcedefinition "virtualmachines.kubevirt.io" created customresourcedefinition "virtualmachinereplicasets.kubevirt.io" created customresourcedefinition "virtualmachinepresets.kubevirt.io" created customresourcedefinition "offlinevirtualmachines.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "iscsi-disk-alpine" created persistentvolumeclaim "disk-custom" created persistentvolume "iscsi-disk-custom" created daemonset "iscsi-demo-target-tgtd" created serviceaccount "kubevirt-testing" created clusterrolebinding "kubevirt-testing-cluster-admin" created + [[ k8s-1.9.3 =~ os-3.9.0.* ]] + echo Done Done ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'virt-api-fd96f94b5-847tz 0/1 ContainerCreating 0 2s virt-api-fd96f94b5-l2ht2 0/1 ContainerCreating 0 2s virt-controller-5f7c946cc4-crqt6 0/1 ContainerCreating 0 2s virt-controller-5f7c946cc4-l6c57 0/1 ContainerCreating 0 2s virt-handler-f7skn 0/1 ContainerCreating 0 2s virt-handler-fq9d9 0/1 ContainerCreating 0 2s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... 
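Note: the polling that produces the output below keeps listing kube-system pods and retries every 10s until none of them is outside the Running phase. In essence:

  # Block until every kube-system pod reports Running.
  while [ -n "$(kubectl get pods -n kube-system --no-headers | grep -v Running)" ]; do
    echo 'Waiting for kubevirt pods to enter the Running state ...'
    kubectl get pods -n kube-system --no-headers | grep -v Running || true
    sleep 10
  done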
+ kubectl get pods -n kube-system --no-headers
+ cluster/kubectl.sh get pods -n kube-system --no-headers
+ grep -v Running
iscsi-demo-target-tgtd-p4rfk 0/1 ContainerCreating 0 1s
iscsi-demo-target-tgtd-qlffh 0/1 ContainerCreating 0 1s
virt-api-fd96f94b5-847tz 0/1 ContainerCreating 0 3s
virt-api-fd96f94b5-l2ht2 0/1 ContainerCreating 0 3s
virt-controller-5f7c946cc4-crqt6 0/1 ContainerCreating 0 3s
virt-controller-5f7c946cc4-l6c57 0/1 ContainerCreating 0 3s
virt-handler-f7skn 0/1 ContainerCreating 0 3s
virt-handler-fq9d9 0/1 ContainerCreating 0 3s
+ sleep 10
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n 'iscsi-demo-target-tgtd-p4rfk 0/1 ContainerCreating 0 12s
iscsi-demo-target-tgtd-qlffh 0/1 ContainerCreating 0 12s
virt-api-fd96f94b5-847tz 0/1 ContainerCreating 0 14s
virt-api-fd96f94b5-l2ht2 0/1 ContainerCreating 0 14s
virt-handler-f7skn 0/1 ContainerCreating 0 14s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ grep -v Running
+ cluster/kubectl.sh get pods -n kube-system --no-headers
iscsi-demo-target-tgtd-p4rfk 0/1 ContainerCreating 0 12s
iscsi-demo-target-tgtd-qlffh 0/1 ContainerCreating 0 12s
virt-api-fd96f94b5-847tz 0/1 ContainerCreating 0 14s
virt-api-fd96f94b5-l2ht2 0/1 ContainerCreating 0 14s
virt-handler-f7skn 0/1 ContainerCreating 0 14s
+ sleep 10
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n 'iscsi-demo-target-tgtd-p4rfk 0/1 ContainerCreating 0 37s
iscsi-demo-target-tgtd-qlffh 0/1 ContainerCreating 0 37s
virt-api-fd96f94b5-847tz 0/1 ContainerCreating 0 39s
virt-api-fd96f94b5-l2ht2 0/1 ContainerCreating 0 39s
virt-handler-f7skn 0/1 ContainerCreating 0 39s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ cluster/kubectl.sh get pods -n kube-system --no-headers
+ grep -v Running
+ true
+ sleep 10
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n '' ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-p4rfk
false iscsi-demo-target-tgtd-qlffh' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
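Note: the second wait above is stricter than the first. A pod's phase can be Running while individual containers are still unready, so this check reads per-container readiness through custom columns and filters out virt-controller, which is verified separately (only one of its replicas has to be ready). A sketch of the polling loop, reconstructed from the trace (the loop framing is an assumption):

  # Sketch (assumed framing): wait until all non-virt-controller containers report ready=true
  while [ -n "$(kubectl get pods -n kube-system \
          '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' \
          --no-headers | awk '!/virt-controller/ && /false/')" ]; do
      echo 'Waiting for KubeVirt containers to become ready ...'
      sleep 10
  done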
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-p4rfk
false iscsi-demo-target-tgtd-qlffh
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-p4rfk
false iscsi-demo-target-tgtd-qlffh' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-p4rfk
false iscsi-demo-target-tgtd-qlffh
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
+ '[' -n '' ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ wc -l
++ awk '/virt-controller/ && /true/'
+ '[' 2 -lt 1 ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
etcd-node01                        1/1       Running   0          26m
iscsi-demo-target-tgtd-p4rfk       1/1       Running   1          2m
iscsi-demo-target-tgtd-qlffh       1/1       Running   1          2m
kube-apiserver-node01              1/1       Running   0          27m
kube-controller-manager-node01     1/1       Running   0          26m
kube-dns-6f4fd4bdf-9m98r           3/3       Running   0          27m
kube-flannel-ds-76n8s              1/1       Running   0          27m
kube-flannel-ds-txc8m              1/1       Running   0          27m
kube-proxy-27djj                   1/1       Running   0          27m
kube-proxy-8tz6t                   1/1       Running   0          27m
kube-scheduler-node01              1/1       Running   0          26m
virt-api-fd96f94b5-847tz           1/1       Running   1          2m
virt-api-fd96f94b5-l2ht2           1/1       Running   0          2m
virt-controller-5f7c946cc4-crqt6   1/1       Running   0          2m
virt-controller-5f7c946cc4-l6c57   1/1       Running   0          2m
virt-handler-f7skn                 1/1       Running   0          2m
virt-handler-fq9d9                 1/1       Running   0          2m
+ kubectl version
+ cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/junit.xml'
+ [[ -d /home/nfs/images/windows2016 ]]
+ [[ k8s-1.9.3-release =~ windows.* ]]
+ FUNC_TEST_ARGS='--ginkgo.noColor
--junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/junit.xml'
+ make functest
hack/dockerized "hack/build-func-tests.sh"
sha256:bfa4d0e4a1a6ecc8067d4e64dfd286bfa9c51c74b3def97ee58a46f3832bc088
go version go1.10 linux/amd64
Waiting for rsyncd to be ready.............................failed
rsync: safe_read failed to read 1 bytes [sender]: Connection reset by peer (104)
rsync error: error in rsync protocol data stream (code 12) at io.c(276) [sender=3.1.2]
make: *** [functest] Error 12
+ make cluster-down
./cluster/down.sh
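Note: the failure is in the build step, not in the tests themselves. make functest runs hack/dockerized, which syncs the source tree into a build container over rsync before executing hack/build-func-tests.sh; "Waiting for rsyncd to be ready ... failed" followed by "Connection reset by peer (104)" indicates the rsync daemon in that container never accepted a connection, so rsync exits with protocol error 12, make propagates it, and the job tears the cluster down (make cluster-down) without running any functional test. A sketch of the kind of readiness wait that is timing out here, assuming a daemon on the default rsync port (the variable name, port, and module name are assumptions, not the upstream script):

  # Sketch (assumed): wait for a local rsync daemon, then sync the tree
  port=${RSYNCD_PORT:-873}            # hypothetical override variable; 873 is rsync's default port
  printf 'Waiting for rsyncd to be ready'
  for _ in $(seq 1 30); do
      # listing the module table succeeds once the daemon accepts connections
      rsync --port "$port" rsync://127.0.0.1/ >/dev/null 2>&1 && break
      printf '.'
      sleep 1
  done
  echo
  rsync -a --port "$port" ./ rsync://127.0.0.1/src/ || exit 12   # module name 'src' is assumed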