+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release + WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release + [[ k8s-1.10.3-release =~ openshift-.* ]] + export KUBEVIRT_PROVIDER=k8s-1.9.3 + KUBEVIRT_PROVIDER=k8s-1.9.3 + export KUBEVIRT_NUM_NODES=2 + KUBEVIRT_NUM_NODES=2 + export NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + export NAMESPACE=kube-system + NAMESPACE=kube-system + trap '{ make cluster-down; }' EXIT + make cluster-down ./cluster/down.sh + make cluster-up ./cluster/up.sh Downloading ....... Downloading ....... 2018/06/05 18:57:07 Waiting for host: 192.168.66.101:22 2018/06/05 18:57:10 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/06/05 18:57:18 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/06/05 18:57:26 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: connection refused. Sleeping 5s 2018/06/05 18:57:31 Connected to tcp://192.168.66.101:22 + cat + kubeadm init --config /etc/kubernetes/kubeadm.conf [init] Using Kubernetes version: v1.9.3 [init] Using Authorization modes: [Node RBAC] [preflight] Running pre-flight checks. [WARNING FileExisting-crictl]: crictl not found in system path [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated sa key and public key. [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf" [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests". [init] This might take a minute or longer if the control plane images have to be pulled. 
[apiclient] All control plane components are healthy after 27.503882 seconds [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [markmaster] Will mark node node01 as master by adding a label and a taint [markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master="" [bootstraptoken] Using token: abcdef.1234567890123456 [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: kube-dns [addons] Applied essential addon: kube-proxy Your Kubernetes master has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --discovery-token-ca-cert-hash sha256:f282839868dafa690c6f5d662be9bff7c5bf67b1f20098c89882322e489c2c92 + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole "flannel" created clusterrolebinding "flannel" created serviceaccount "flannel" created configmap "kube-flannel-cfg" created daemonset "kube-flannel-ds" created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node "node01" untainted 2018/06/05 18:58:12 Waiting for host: 192.168.66.102:22 2018/06/05 18:58:15 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/06/05 18:58:27 Connected to tcp://192.168.66.102:22 + kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true [preflight] Running pre-flight checks. [discovery] Trying to connect to API Server "192.168.66.101:6443" [WARNING FileExisting-crictl]: crictl not found in system path [discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443" [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443" [discovery] Successfully established connection with API Server "192.168.66.101:6443" This node has joined the cluster: * Certificate signing request was sent to master and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the master to see this node join the cluster. Sending file modes: C0755 48668048 kubectl Sending file modes: C0600 5454 admin.conf Cluster "kubernetes" set. Cluster "kubernetes" set. 
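The "Waiting for host" / "Sleeping 5s" messages above come from the provisioning CLI (gocli) polling each node's SSH port before kubeadm is run on it. An illustrative bash equivalent of that retry loop, not the actual Go implementation, using the addresses from this run:

host=192.168.66.102; port=22
# Probe the TCP port once per attempt; /dev/tcp/<host>/<port> is a bash redirection built-in.
until timeout 1 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null; do
    echo "Problem with dial: tcp ${host}:${port}: not reachable. Sleeping 5s"
    sleep 5
done
echo "Connected to tcp://${host}:${port}"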
++ kubectl get nodes --no-headers ++ grep -v Ready ++ cluster/kubectl.sh get nodes --no-headers + '[' -n '' ']' + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 38s v1.9.3 node02 NotReady 13s v1.9.3 + make cluster-sync ./cluster/build.sh Building ... sha256:2df8b30e8f619e28e75e00ea9fa42c63f4f14b1c34fbb1223214102337507863 go version go1.10 linux/amd64 Waiting for rsyncd to be ready.. go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh sha256:2df8b30e8f619e28e75e00ea9fa42c63f4f14b1c34fbb1223214102337507863 go version go1.10 linux/amd64 Waiting for rsyncd to be ready go version go1.10 linux/amd64 Compiling tests... compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 36.14 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 45ed71cd684b Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> ba8171a31e93 Step 5/8 : USER 1001 ---> Using cache ---> 6bd535be1fa1 Step 6/8 : COPY virt-controller /virt-controller ---> 7c43e088e8c1 Removing intermediate container b562e8428720 Step 7/8 : ENTRYPOINT /virt-controller ---> Running in a52c4606e329 ---> d8ecb968b934 Removing intermediate container a52c4606e329 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-controller" '' ---> Running in 9afc9e076c81 ---> 6df99fbdb76e Removing intermediate container 9afc9e076c81 Successfully built 6df99fbdb76e Sending build context to Docker daemon 38.08 MB Step 1/14 : FROM kubevirt/libvirt:3.7.0 ---> 60c80c8f7523 Step 2/14 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 3bbd31ef6597 Step 3/14 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> b24e583fa448 Step 4/14 : COPY sock-connector /sock-connector ---> Using cache ---> 25d0cc0414fc Step 5/14 : COPY sh.sh /sh.sh ---> Using cache ---> e9c9e73584e6 Step 6/14 : COPY virt-launcher /virt-launcher ---> df2f58525d61 Removing intermediate container fceb16fda59f Step 7/14 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> 514719e9ec61 Removing intermediate container a83fb612c140 Step 8/14 : RUN chmod 0640 /etc/sudoers.d/kubevirt ---> Running in 03b3aaf314ab  ---> 359e64058d22 Removing intermediate container 03b3aaf314ab Step 9/14 : RUN rm -f /libvirtd.sh ---> Running in 25a5c9e2dfa7  ---> 7899ad34d7dc Removing intermediate container 25a5c9e2dfa7 Step 10/14 : COPY libvirtd.sh /libvirtd.sh ---> 9d6f59e51e8f Removing intermediate container f32894dea4c2 Step 11/14 : RUN chmod a+x /libvirtd.sh ---> Running in 2ad5a7504392  ---> 00edc622ae31 Removing intermediate container 2ad5a7504392 Step 12/14 : COPY entrypoint.sh /entrypoint.sh ---> 4d4a5aa1420f Removing intermediate container 2c818c38c63c Step 13/14 : ENTRYPOINT /entrypoint.sh ---> Running in 891789ef708b ---> 0be57a53ff22 Removing intermediate container 891789ef708b Step 14/14 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-launcher" '' ---> Running in 1b5312bfb902 ---> a858d34c2af1 Removing intermediate container 
1b5312bfb902 Successfully built a858d34c2af1 Sending build context to Docker daemon 36.7 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/5 : COPY virt-handler /virt-handler ---> 1e79c25994cc Removing intermediate container 0dd65ce4b51b Step 4/5 : ENTRYPOINT /virt-handler ---> Running in 89ca789e0d81 ---> 92d302de174c Removing intermediate container 89ca789e0d81 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-handler" '' ---> Running in 95442658f63b ---> 7a0aa8ff19a9 Removing intermediate container 95442658f63b Successfully built 7a0aa8ff19a9 Sending build context to Docker daemon 36.86 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 12e3c00eb78f Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> cfb92cbbf126 Step 5/8 : USER 1001 ---> Using cache ---> f02f77c7a4fc Step 6/8 : COPY virt-api /virt-api ---> 3f84706f7a73 Removing intermediate container 9592b7e8377c Step 7/8 : ENTRYPOINT /virt-api ---> Running in 659cf9f36aff ---> 0bd363643b1f Removing intermediate container 659cf9f36aff Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-api" '' ---> Running in 784b21012c75 ---> 399ddf72fef7 Removing intermediate container 784b21012c75 Successfully built 399ddf72fef7 Sending build context to Docker daemon 6.656 kB Step 1/10 : FROM fedora:27 ---> 9110ae7f579f Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/10 : ENV container docker ---> Using cache ---> 1211fd5eb075 Step 4/10 : RUN dnf -y install scsi-target-utils bzip2 e2fsprogs ---> Using cache ---> 77199cda1e0f Step 5/10 : RUN mkdir -p /images ---> Using cache ---> 124576f102e5 Step 6/10 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/1-alpine.img ---> Using cache ---> e63f6cabc6dc Step 7/10 : ADD run-tgt.sh / ---> Using cache ---> fc0337161a34 Step 8/10 : EXPOSE 3260 ---> Using cache ---> 23da0e2e9eb9 Step 9/10 : CMD /run-tgt.sh ---> Using cache ---> c7988963a934 Step 10/10 : LABEL "iscsi-demo-target-tgtd" '' "kubevirt-functional-tests-k8s-1.10.3-release1" '' ---> Running in 077e91ed3267 ---> c84aaa9cfb31 Removing intermediate container 077e91ed3267 Successfully built c84aaa9cfb31 Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/5 : ENV container docker ---> Using cache ---> 1211fd5eb075 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> 7b90d68258cd Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "vm-killer" '' ---> Running in e73fee990930 ---> e731a45ba94a Removing intermediate container e73fee990930 Successfully built e731a45ba94a Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> 4817bb6590f8 Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> b8b166db2544 Step 3/7 : ENV container docker ---> Using cache ---> 8b120f56086f Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> 61851ac93c11 Step 5/7 : ADD entry-point.sh / ---> Using cache ---> ada85930060d Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 6f2ffb0e7aed Step 
7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "registry-disk-v1alpha" '' ---> Running in cd5e304d2404 ---> efa644296ee3 Removing intermediate container cd5e304d2404 Successfully built efa644296ee3 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33123/kubevirt/registry-disk-v1alpha:devel ---> efa644296ee3 Step 2/4 : MAINTAINER "David Vossel" \ ---> Running in 9aff7adc4ade ---> 5e1a1abaa85f Removing intermediate container 9aff7adc4ade Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Running in 045d4cd58e8d    % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 12.1M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:--  0 30 12.1M 30 3760k 0 0 2320k 0 0:00:05 0:00:01 0:00:04 2320k 90 12.1M 90 10.9M 0 0 4430k 0 0:00:02 0:00:02 --:--:-- 4430k 100 12.1M 100 12.1M 0 0 4537k 0 0:00:02 0:00:02 --:--:-- 4537k  ---> bbcefbaba7fd Removing intermediate container 045d4cd58e8d Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release1" '' ---> Running in f0c8013eeefe ---> 3ea40bdda93d Removing intermediate container f0c8013eeefe Successfully built 3ea40bdda93d Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33123/kubevirt/registry-disk-v1alpha:devel ---> efa644296ee3 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Running in 0b73c96bc16a ---> 330dfb504750 Removing intermediate container 0b73c96bc16a Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Running in 4343cf6e9b15  % Total % Received % Xferd Average Speed Time Time Time Current  Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0  0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 327 100 327 0 0 368 0 --:--:-- --:--:-- --:--:-- 1739  0 221M 0 1968k 0 0 1127k 0 0:03:21 0:00:01 0:03:20 1127k 2 221M 2 4624k 0 0 1705k 0 0:02:13 0:00:02 0:02:11 2749k 2 221M 2 6640k 0 0 1749k 0 0:02:09 0:00:03 0:02:06 2277k 3 221M 3 7792k 0 0 1657k 0 0:02:16 0:00:04 0:02:12 1970k 3 221M 3 8848k 0 0 1549k 0 0:02:26 0:00:05 0:02:21 1735k 4 221M 4 9.7M 0 0 1489k 0 0:02:32 0:00:06 0:02:26 1617k 4 221M 4 10.8M 0 0 1436k 0 0:02:37  0:00:07 0:02:30 1291k 5 221M 5 11.9M 0 0 1406k 0 0:02:41 0:00:08 0:02:33 1140k 5 221M 5 12.8M 0 0 1353k 0 0:02:47 0:00:09 0:02:38 1069k 6 221M 6 13.6M 0 0 1305k 0 0:02:53 0:00:10 0:02:43 1025k 6 221M 6 14.5M 0 0 1271k 0 0:02:58 0:00:11 0:02:47 978k 6 221M 6 15.4M 0 0 1240k 0 0:03:02 0:00:12 0:02:50 937k 7 221M 7 16.4M 0 0 1228k 0 0:03:04 0:00:13 0:02:51 918k 7 221M 7 17.6M 0 0 1227k 0 0:03:04 0:00:14 0:02:50 981k 8 221M 8 19.0M 0 0 1241k 0 0:03:02 0:00:15 0:02:47 1103k 9 221M 9 20.6M 0 0 1265k 0 0:02:59 0:00:16 0:02:43 1251k 10 221M 10 22.2M 0 0 1287k 0 0:02:56 0:00:17 0:02:39 1407k 10 221M 10 23.7M 0 0 1300k 0 0:02:54 0:00:18 0:02:36 1499k 11 221M 11 24.5M 0 0 1267k 0 0:02:58 0:00:19 0:02:39 1383k 11 221M 11 25.5M 0 0 1253k 0 0:03:01 0:00:20 0:02:41 1290k 11 221M 11 26.2M 0 0 1235k 0 0:03:03 0:00:21 0:02:42 1136k 12 221M 12 26.9M 0 0 1215k 0 0:03:06 0:00:22 0:02:44 960k 12 221M 12 27.7M 0 0 1198k 0 0:03:09 0:00:23 0:02:46 813k 12 221M 12 28.5M 0 0 1183k 0 0:03:11 0:00:24 0:02:47 846k 13 221M 13 29.5M 0 0 1178k 0 0:03:12 0:00:25 0:02:47 856k 13 221M 13 30.7M 0 0 1179k 0 0:03:12 0:00:26 0:02:46 933k 14 
96 221M 96 213M 0 0 1463k 0 0:02:35 0:02:29 0:00:06 1030k 97 221M 97 215M 0 0 1464k 0 0:02:34 0:02:30 0:00:04 1230k 98 221M 98 217M 0 0 1467k 0 0:02:34 0:02:31 0:00:03 1428k 99  221M  99 219M  0    0 1472k    0 0:02:34  0:02:32  0:00:02 1677k 100 221M 100 221M 0 0 1478k 0 0:02:33 0:02:33 --:--:-- 1965k  ---> b812afee4476 Removing intermediate container 4343cf6e9b15 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release1" '' ---> Running in 07f459b91fe7 ---> d551f51eae30 Removing intermediate container 07f459b91fe7 Successfully built d551f51eae30 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33123/kubevirt/registry-disk-v1alpha:devel ---> efa644296ee3 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 330dfb504750 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Running in e9d54c16b2ec   % Total % Received % Xferd Average Speed Time  Time Time Current  Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 37.0M 0 15048 0 0 62962 0 0:10:16 --:--:-- 0:10:16 62700 8 37.0M 8 3387k 0 0 2849k 0 0:00:13 0:00:01 0:00:12 2847k 31 37.0M 31 11.6M 0 0 5463k 0 0:00:06 0:00:02 0:00:04 5460k 83 37.0M 83 30.9M 0 0 9951k 0 0:00:03 0:00:03 --:--:-- 9948k 100 37.0M 100 37.0M 0 0 10.9M 0 0:00:03 0:00:03 --:--:-- 10.9M  ---> 34778c9e51ef Removing intermediate container e9d54c16b2ec Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release1" '' ---> Running in 25e848bdc8d7 ---> 21d43b8e87f3 Removing intermediate container 25e848bdc8d7 Successfully built 21d43b8e87f3 Sending build context to Docker daemon 33.97 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> 62cf8151a5f3 Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 7df4da9e1b5d Step 5/8 : USER 1001 ---> Using cache ---> 3ee421ac4ad4 Step 6/8 : COPY subresource-access-test /subresource-access-test ---> ca5dd427ae0b Removing intermediate container f43ddc99fbce Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in 3e1bc409a068 ---> eadb75caca61 Removing intermediate container 3e1bc409a068 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "subresource-access-test" '' ---> Running in 420bd02c2226 ---> 2f6c3bbe0cd7 Removing intermediate container 420bd02c2226 Successfully built 2f6c3bbe0cd7 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:27 ---> 9110ae7f579f Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 6af39ea33818 Step 3/9 : ENV container docker ---> Using cache ---> 1211fd5eb075 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> 7ff1a45e3635 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> a05ebaed4a0f Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> cd8398be9593 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 71c7ecd55e24 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 9689e3184427 Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "winrmcli" '' ---> Running in 94bf7fe14ab5 ---> 02ffa51e3095 Removing intermediate 
container 94bf7fe14ab5 Successfully built 02ffa51e3095 hack/build-docker.sh push The push refers to a repository [localhost:33123/kubevirt/virt-controller] 5df3df5d3bcc: Preparing c0d2c4546d78: Preparing 39bae602f753: Preparing c0d2c4546d78: Pushed 5df3df5d3bcc: Pushed 39bae602f753: Pushed devel: digest: sha256:a57c9d59c2fd2fbf41dec545ddd7162b780232c90051a01220747d0a7bcd9d5f size: 948 The push refers to a repository [localhost:33123/kubevirt/virt-launcher] 0f3ec2e66bed: Preparing d9d2e879a318: Preparing d9d2e879a318: Preparing 597f52939525: Preparing 2ac6407a4dd9: Preparing a1863477405a: Preparing 81b483b9c419: Preparing 5e5a394712de: Preparing dcea01d1f799: Preparing 6a9c8a62fecd: Preparing 530cc55618cd: Preparing 34fa414dfdf6: Preparing a1359dc556dd: Preparing 490c7c373332: Preparing 4b440db36f72: Preparing 39bae602f753: Preparing 81b483b9c419: Waiting dcea01d1f799: Waiting 5e5a394712de: Waiting 6a9c8a62fecd: Waiting 530cc55618cd: Waiting 34fa414dfdf6: Waiting 4b440db36f72: Waiting a1359dc556dd: Waiting 39bae602f753: Waiting 490c7c373332: Waiting d9d2e879a318: Pushed 2ac6407a4dd9: Pushed 597f52939525: Pushed 0f3ec2e66bed: Pushed a1863477405a: Pushed 5e5a394712de: Pushed 530cc55618cd: Pushed a1359dc556dd: Pushed 34fa414dfdf6: Pushed dcea01d1f799: Pushed 39bae602f753: Mounted from kubevirt/virt-controller 490c7c373332: Pushed 81b483b9c419: Pushed 6a9c8a62fecd: Pushed 4b440db36f72: Pushed devel: digest: sha256:056385edae61f257a593af8b827f70f0210ccac2fef23ef799468bdbc1b10aa0 size: 3653 The push refers to a repository [localhost:33123/kubevirt/virt-handler] 2e7357f8aff0: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-launcher 2e7357f8aff0: Pushed devel: digest: sha256:a302ab9b2a4df39b025b273f4da799a8a6de8b573de4607f876accbe71ce38e7 size: 740 The push refers to a repository [localhost:33123/kubevirt/virt-api] 9a4322b95982: Preparing ae4970287372: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-handler ae4970287372: Pushed 9a4322b95982: Pushed devel: digest: sha256:5047f4b35618efbd82ec7d41d4db3f5051f5c907dea0acade376dd3729235da6 size: 948 The push refers to a repository [localhost:33123/kubevirt/iscsi-demo-target-tgtd] f9be666e6960: Preparing 3aff9cc5a3f0: Preparing 5d7022918814: Preparing 172fa0952bf3: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-api f9be666e6960: Pushed 5d7022918814: Pushed 3aff9cc5a3f0: Pushed 172fa0952bf3: Pushed devel: digest: sha256:ff953f9e5bc8adde25d8d53cc9237fbda4b10fd471a5f67922013eb4146f2e92 size: 1368 The push refers to a repository [localhost:33123/kubevirt/vm-killer] e3afff5758ce: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/iscsi-demo-target-tgtd e3afff5758ce: Pushed devel: digest: sha256:0fb636ee5349f1d1e7c47b3efb006a35adc082f644f11f689ebd1fde0cc95071 size: 740 The push refers to a repository [localhost:33123/kubevirt/registry-disk-v1alpha] 376d512574a4: Preparing 7971c2f81ae9: Preparing e7752b410e4c: Preparing 376d512574a4: Pushed 7971c2f81ae9: Pushed e7752b410e4c: Pushed devel: digest: sha256:ef40e1d7c64d8fbf1d851c949124decd700b718b6dfcd8f8a84abbd0e9b4a619 size: 948 The push refers to a repository [localhost:33123/kubevirt/cirros-registry-disk-demo] b32252f12c76: Preparing 376d512574a4: Preparing 7971c2f81ae9: Preparing e7752b410e4c: Preparing 7971c2f81ae9: Mounted from kubevirt/registry-disk-v1alpha e7752b410e4c: Mounted from kubevirt/registry-disk-v1alpha 376d512574a4: Mounted from kubevirt/registry-disk-v1alpha b32252f12c76: Pushed devel: digest: 
sha256:cc97dea83d7287cfcc53d9430c359e4ffdb7b30c3dfb01d3486a2739ece5462a size: 1160 The push refers to a repository [localhost:33123/kubevirt/fedora-cloud-registry-disk-demo] 071623453e45: Preparing 376d512574a4: Preparing 7971c2f81ae9: Preparing e7752b410e4c: Preparing 7971c2f81ae9: Mounted from kubevirt/cirros-registry-disk-demo e7752b410e4c: Mounted from kubevirt/cirros-registry-disk-demo 376d512574a4: Mounted from kubevirt/cirros-registry-disk-demo 071623453e45: Pushed devel: digest: sha256:54ab8edd6fba1c0e7fe04cca651614af5440e9fc3afc65aceca8b603897e8a1c size: 1161 The push refers to a repository [localhost:33123/kubevirt/alpine-registry-disk-demo] 574ec1e21826: Preparing 376d512574a4: Preparing 7971c2f81ae9: Preparing e7752b410e4c: Preparing 376d512574a4: Mounted from kubevirt/fedora-cloud-registry-disk-demo e7752b410e4c: Mounted from kubevirt/fedora-cloud-registry-disk-demo 7971c2f81ae9: Mounted from kubevirt/fedora-cloud-registry-disk-demo 574ec1e21826: Pushed devel: digest: sha256:83a7c71b87be6f066bf59b606f40f1f2c707845e707499ab6f0111cd4b3a3ab3 size: 1160 The push refers to a repository [localhost:33123/kubevirt/subresource-access-test] 5c998e924555: Preparing 2aaca144a3e2: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/vm-killer 2aaca144a3e2: Pushed 5c998e924555: Pushed devel: digest: sha256:5b3105e233d5f33f1bb9e2f88c3f5b0509b3dba830474d03de3c00e6fac56e05 size: 948 The push refers to a repository [localhost:33123/kubevirt/winrmcli] 3cd438b33e81: Preparing 8519683f2557: Preparing a29ba32ac0a1: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/subresource-access-test 3cd438b33e81: Pushed a29ba32ac0a1: Pushed 8519683f2557: Pushed devel: digest: sha256:23faa287700a505212945b88845384495a652f91d12ac96d659374f37b014f52 size: 1165 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_NUM_NODES=2 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release1 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo 
v0.5.1-alpha.2-78-g37ad186 ++ KUBEVIRT_VERSION=v0.5.1-alpha.2-78-g37ad186 + source cluster/k8s-1.9.3/provider.sh ++ set -e ++ image=k8s-1.9.3@sha256:265ccfeeb0352a87141d4f0f041fa8cc6409b82fe3456622f4c549ec1bfe65c0 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ source hack/config-default.sh source hack/config-k8s-1.9.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/iscsi-demo-target-tgtd images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.9.3.sh ++ source hack/config-provider-k8s-1.9.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubectl +++ docker_prefix=localhost:33123/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... 
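Both clean.sh (above) and deploy.sh (below) start by sourcing the same layered configuration. A condensed sketch of that pattern, limited to the file names and variables visible in this trace:

# Defaults first, then the provider profile, then optional provider/local overrides.
source hack/config-default.sh                    # binaries, docker_prefix=kubevirt, docker_tag=latest, ...
source "hack/config-${KUBEVIRT_PROVIDER}.sh"     # hack/config-k8s-1.9.3.sh in this run
test -f "hack/config-provider-${KUBEVIRT_PROVIDER}.sh" && \
    source "hack/config-provider-${KUBEVIRT_PROVIDER}.sh"   # master_ip=127.0.0.1, docker_tag=devel, local registry prefix
test -f hack/config-local.sh && source hack/config-local.sh  # optional developer overrides (absent here)
export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace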
+ cluster/kubectl.sh get vms --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p the server doesn't have a resource type "vms" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete pods -l kubevirt.io No resources found + _kubectl -n default delete clusterrolebinding 
-l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ wc -l ++ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ cluster/k8s-1.9.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete pv -l 
kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ cluster/k8s-1.9.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ 
MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_NUM_NODES=2 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release1 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.5.1-alpha.2-78-g37ad186 ++ KUBEVIRT_VERSION=v0.5.1-alpha.2-78-g37ad186 + source cluster/k8s-1.9.3/provider.sh ++ set -e ++ image=k8s-1.9.3@sha256:265ccfeeb0352a87141d4f0f041fa8cc6409b82fe3456622f4c549ec1bfe65c0 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ KUBEVIRT_PROVIDER=k8s-1.9.3 ++ source hack/config-default.sh source hack/config-k8s-1.9.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/iscsi-demo-target-tgtd images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.9.3.sh ++ source hack/config-provider-k8s-1.9.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubectl +++ docker_prefix=localhost:33123/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... 
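Every _kubectl call in this trace expands to the same two steps: pin KUBECONFIG to the provider's kubeconfig and invoke the provider-local kubectl binary. A minimal sketch of that wrapper as it behaves here (the real helper lives in the repo's cluster scripts):

_kubectl() {
    export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
    cluster/k8s-1.9.3/.kubectl "$@"
}
# Example: _kubectl create -f _out/manifests/release/kubevirt.yaml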
+ [[ -z k8s-1.10.3-release ]] + [[ k8s-1.10.3-release =~ .*-dev ]] + [[ k8s-1.10.3-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole "kubevirt.io:admin" created clusterrole "kubevirt.io:edit" created clusterrole "kubevirt.io:view" created serviceaccount "kubevirt-apiserver" created clusterrolebinding "kubevirt-apiserver" created clusterrolebinding "kubevirt-apiserver-auth-delegator" created rolebinding "kubevirt-apiserver" created role "kubevirt-apiserver" created clusterrole "kubevirt-apiserver" created clusterrole "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding "kubevirt-controller" created clusterrolebinding "kubevirt-controller-cluster-admin" created clusterrolebinding "kubevirt-privileged-cluster-admin" created clusterrole "kubevirt.io:default" created clusterrolebinding "kubevirt.io:default" created service "virt-api" created deployment "virt-api" created deployment "virt-controller" created daemonset "virt-handler" created customresourcedefinition "virtualmachines.kubevirt.io" created customresourcedefinition "virtualmachinereplicasets.kubevirt.io" created customresourcedefinition "virtualmachinepresets.kubevirt.io" created customresourcedefinition "offlinevirtualmachines.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "iscsi-disk-alpine" created persistentvolumeclaim "disk-custom" created persistentvolume "iscsi-disk-custom" created daemonset "iscsi-demo-target-tgtd" created serviceaccount "kubevirt-testing" created clusterrolebinding "kubevirt-testing-cluster-admin" created + '[' k8s-1.9.3 = vagrant-openshift ']' + [[ k8s-1.9.3 =~ os-3.9.0.* ]] + echo Done Done ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'virt-api-fd96f94b5-5c67d 0/1 ContainerCreating 0 3s virt-api-fd96f94b5-nvjcz 0/1 ContainerCreating 0 3s virt-controller-5f7c946cc4-rmqhn 0/1 ContainerCreating 0 3s virt-controller-5f7c946cc4-s95f6 0/1 ContainerCreating 0 3s virt-handler-n8k97 0/1 ContainerCreating 0 2s virt-handler-v9fj4 0/1 ContainerCreating 0 3s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... 
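After the manifests are created, the job polls until every kube-system pod reports Running. A sketch of that loop, reconstructed from the expanded trace below (kubectl stands in for the job's cluster/kubectl.sh wrapper):

while true; do
    not_running=$(kubectl get pods -n kube-system --no-headers | grep -v Running)
    [ -z "${not_running}" ] && break
    echo 'Waiting for kubevirt pods to enter the Running state ...'
    echo "${not_running}"
    sleep 10
done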
+ kubectl get pods -n kube-system --no-headers + grep -v Running + cluster/kubectl.sh get pods -n kube-system --no-headers virt-api-fd96f94b5-5c67d 0/1 ContainerCreating 0 3s virt-api-fd96f94b5-nvjcz 0/1 ContainerCreating 0 3s virt-controller-5f7c946cc4-rmqhn 0/1 ContainerCreating 0 3s virt-controller-5f7c946cc4-s95f6 0/1 ContainerCreating 0 3s virt-handler-n8k97 0/1 ContainerCreating 0 2s virt-handler-v9fj4 0/1 ContainerCreating 0 3s + sleep 10 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'iscsi-demo-target-tgtd-57r66 0/1 ContainerCreating 0 17s iscsi-demo-target-tgtd-fhcps 0/1 ContainerCreating 0 17s virt-api-fd96f94b5-5c67d 0/1 ContainerCreating 0 19s virt-api-fd96f94b5-nvjcz 0/1 ContainerCreating 0 19s virt-handler-n8k97 0/1 ContainerCreating 0 18s virt-handler-v9fj4 0/1 ContainerCreating 0 19s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + grep -v Running + kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers iscsi-demo-target-tgtd-57r66 0/1 ContainerCreating 0 19s iscsi-demo-target-tgtd-fhcps 0/1 ContainerCreating 0 19s virt-api-fd96f94b5-5c67d 0/1 ContainerCreating 0 21s virt-api-fd96f94b5-nvjcz 0/1 ContainerCreating 0 21s virt-handler-n8k97 0/1 ContainerCreating 0 20s virt-handler-v9fj4 0/1 ContainerCreating 0 21s + sleep 10 ++ kubectl get pods -n kube-system --no-headers ++ grep -v Running ++ cluster/kubectl.sh get pods -n kube-system --no-headers + '[' -n 'iscsi-demo-target-tgtd-57r66 0/1 ContainerCreating 0 31s iscsi-demo-target-tgtd-fhcps 0/1 ContainerCreating 0 31s virt-api-fd96f94b5-5c67d 0/1 ContainerCreating 0 33s virt-api-fd96f94b5-nvjcz 0/1 ContainerCreating 0 33s virt-handler-n8k97 0/1 ContainerCreating 0 32s virt-handler-v9fj4 0/1 ContainerCreating 0 33s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... + kubectl get pods -n kube-system --no-headers + grep -v Running + cluster/kubectl.sh get pods -n kube-system --no-headers iscsi-demo-target-tgtd-57r66 0/1 ContainerCreating 0 32s iscsi-demo-target-tgtd-fhcps 0/1 ContainerCreating 0 32s virt-api-fd96f94b5-5c67d 0/1 ContainerCreating 0 34s virt-api-fd96f94b5-nvjcz 0/1 ContainerCreating 0 34s virt-handler-n8k97 0/1 ContainerCreating 0 33s virt-handler-v9fj4 0/1 ContainerCreating 0 34s + sleep 10 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n '' ']' ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers ++ awk '!/virt-controller/ && /false/' + '[' -n 'false iscsi-demo-target-tgtd-57r66 false iscsi-demo-target-tgtd-fhcps false virt-api-fd96f94b5-5c67d false virt-api-fd96f94b5-nvjcz' ']' + echo 'Waiting for KubeVirt containers to become ready ...' Waiting for KubeVirt containers to become ready ... 
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-57r66
false iscsi-demo-target-tgtd-fhcps
false virt-api-fd96f94b5-5c67d
false virt-api-fd96f94b5-nvjcz
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-57r66
false iscsi-demo-target-tgtd-fhcps
false virt-api-fd96f94b5-nvjcz' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
false iscsi-demo-target-tgtd-57r66
false iscsi-demo-target-tgtd-fhcps
false virt-api-fd96f94b5-nvjcz
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-57r66
false iscsi-demo-target-tgtd-fhcps' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-57r66
false iscsi-demo-target-tgtd-fhcps
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
+ '[' -n 'false iscsi-demo-target-tgtd-57r66
false iscsi-demo-target-tgtd-fhcps' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-57r66
false iscsi-demo-target-tgtd-fhcps
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-57r66
false iscsi-demo-target-tgtd-fhcps' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-57r66
false iscsi-demo-target-tgtd-fhcps
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-57r66
false iscsi-demo-target-tgtd-fhcps' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-57r66
false iscsi-demo-target-tgtd-fhcps
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-57r66
false iscsi-demo-target-tgtd-fhcps' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
false iscsi-demo-target-tgtd-57r66
false iscsi-demo-target-tgtd-fhcps
false kube-scheduler-node01
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-57r66
false iscsi-demo-target-tgtd-fhcps
false kube-controller-manager-node01
false kube-scheduler-node01' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-57r66
false iscsi-demo-target-tgtd-fhcps
false kube-controller-manager-node01
false kube-scheduler-node01
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-57r66
false iscsi-demo-target-tgtd-fhcps' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
+ '[' -n '' ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '/virt-controller/ && /true/'
++ wc -l
+ '[' 2 -lt 1 ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get pods)
+ make cluster-down
./cluster/down.sh
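The last gate in the trace counts virt-controller lines that report ready=true and requires at least one; here '[' 2 -lt 1 ']' evaluates to false, so the check passes. A minimal sketch of that final check, reconstructed from the trace under the same assumptions as the loops above; in this particular run the follow-up pod listing then failed with an API-server InternalError, and the job went on to tear the cluster down with make cluster-down:

  # Require at least one ready virt-controller replica before running tests (sketch only).
  columns='-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name'
  ready_controllers=$(cluster/kubectl.sh get pods -n kube-system "$columns" --no-headers \
      | awk '/virt-controller/ && /true/' | wc -l)
  if [ "$ready_controllers" -lt 1 ]; then
      echo 'virt-controller is not ready' >&2
      exit 1
  fi
  # Final summary listing; this is the call that returned InternalError in the log above.
  cluster/kubectl.sh get pods -n kube-system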