+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release
+ [[ k8s-1.9.3-release =~ openshift-.* ]]
+ [[ k8s-1.9.3-release =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.9.3
+ KUBEVIRT_PROVIDER=k8s-1.9.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/06/06 14:43:53 Waiting for host: 192.168.66.101:22
2018/06/06 14:43:56 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/06/06 14:44:04 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/06/06 14:44:12 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/06/06 14:44:25 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 21.004014 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --discovery-token-ca-cert-hash sha256:cab75e2d1b3073d813f0f9962c42a8c572b59236a74c45e24cb75d86900c0078

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/06/06 14:45:12 Waiting for host: 192.168.66.102:22
2018/06/06 14:45:15 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/06/06 14:45:23 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/06/06 14:45:31 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/06/06 14:45:36 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Sending file modes: C0755 48668048 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep -v Ready
+ '[' -n '' ']'
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    1m        v1.9.3
node02    Ready     <none>    14s       v1.9.3
+ make cluster-sync
./cluster/build.sh
Building ...
sha256:2df8b30e8f619e28e75e00ea9fa42c63f4f14b1c34fbb1223214102337507863
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:2df8b30e8f619e28e75e00ea9fa42c63f4f14b1c34fbb1223214102337507863
go version go1.10 linux/amd64
go version go1.10 linux/amd64
Compiling tests...
compiled tests.test
hack/build-docker.sh build
Sending build context to Docker daemon 36.15 MB
Step 1/8 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 6af39ea33818
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller
 ---> Using cache
 ---> 45ed71cd684b
Step 4/8 : WORKDIR /home/virt-controller
 ---> Using cache
 ---> ba8171a31e93
Step 5/8 : USER 1001
 ---> Using cache
 ---> 6bd535be1fa1
Step 6/8 : COPY virt-controller /virt-controller
 ---> Using cache
 ---> 4d356af21e5f
Step 7/8 : ENTRYPOINT /virt-controller
 ---> Using cache
 ---> fb3280b8038c
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.9.3-release1" '' "virt-controller" ''
 ---> Running in c5fd838b5e41
 ---> ebf95390b0c2
Removing intermediate container c5fd838b5e41
Successfully built ebf95390b0c2
Sending build context to Docker daemon 38.08 MB
Step 1/14 : FROM kubevirt/libvirt:3.7.0
 ---> 60c80c8f7523
Step 2/14 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 3bbd31ef6597
Step 3/14 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107
 ---> Using cache
 ---> b24e583fa448
Step 4/14 : COPY sock-connector /sock-connector
 ---> Using cache
 ---> 25d0cc0414fc
Step 5/14 : COPY sh.sh /sh.sh
 ---> Using cache
 ---> e9c9e73584e6
Step 6/14 : COPY virt-launcher /virt-launcher
 ---> Using cache
 ---> 5502379f5788
Step 7/14 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt
 ---> Using cache
 ---> e643fd04d00d
Step 8/14 : RUN chmod 0640 /etc/sudoers.d/kubevirt
 ---> Using cache
 ---> dc0a5d1b213b
Step 9/14 : RUN rm -f /libvirtd.sh
 ---> Using cache
 ---> 60cbe3fc87a7
Step 10/14 : COPY libvirtd.sh /libvirtd.sh
 ---> Using cache
 ---> ad836abb63c5
Step 11/14 : RUN chmod a+x /libvirtd.sh
 ---> Using cache
 ---> 5d714cbeb235
Step 12/14 : COPY entrypoint.sh /entrypoint.sh
 ---> Using cache
 ---> 7bde38b906e2
Step 13/14 : ENTRYPOINT /entrypoint.sh
 ---> Using cache
 ---> 73bb7fc34445
Step 14/14 : LABEL "kubevirt-functional-tests-k8s-1.9.3-release1" '' "virt-launcher" ''
 ---> Running in cab3e31ddca9
 ---> 0a48ed9c8c27
Removing intermediate container cab3e31ddca9
Successfully built 0a48ed9c8c27
Sending build context to Docker daemon 36.7 MB
Step 1/5 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 6af39ea33818
Step 3/5 : COPY virt-handler /virt-handler
 ---> Using cache
 ---> 4fcf0ea30a9d
Step 4/5 : ENTRYPOINT /virt-handler
 ---> Using cache
 ---> ff2a200e74c9
Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.9.3-release1" '' "virt-handler" ''
 ---> Running in 6e66964c3ad8
 ---> 2a3d3d3d2070
Removing intermediate container 6e66964c3ad8
Successfully built 2a3d3d3d2070
Sending build context to Docker daemon 36.87 MB
Step 1/8 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 6af39ea33818
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api
 ---> Using cache
 ---> 12e3c00eb78f
Step 4/8 : WORKDIR /home/virt-api
 ---> Using cache
 ---> cfb92cbbf126
Step 5/8 : USER 1001
 ---> Using cache
 ---> f02f77c7a4fc
Step 6/8 : COPY virt-api /virt-api
 ---> Using cache
 ---> 60c7b3a20093
Step 7/8 : ENTRYPOINT /virt-api
 ---> Using cache
 ---> 26865a303d81
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.9.3-release1" '' "virt-api" ''
 ---> Running in ca0e39d26043
 ---> 057df829450b
Removing intermediate container ca0e39d26043
Successfully built 057df829450b
Sending build context to Docker daemon 6.656 kB
Step 1/10 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/10 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 6af39ea33818
Step 3/10 : ENV container docker
 ---> Using cache
 ---> 1211fd5eb075
Step 4/10 : RUN dnf -y install scsi-target-utils bzip2 e2fsprogs
 ---> Using cache
 ---> 77199cda1e0f
Step 5/10 : RUN mkdir -p /images
 ---> Using cache
 ---> 124576f102e5
Step 6/10 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/1-alpine.img
 ---> Using cache
 ---> e63f6cabc6dc
Step 7/10 : ADD run-tgt.sh /
 ---> Using cache
 ---> fc0337161a34
Step 8/10 : EXPOSE 3260
 ---> Using cache
 ---> 23da0e2e9eb9
Step 9/10 : CMD /run-tgt.sh
 ---> Using cache
 ---> c7988963a934
Step 10/10 : LABEL "iscsi-demo-target-tgtd" '' "kubevirt-functional-tests-k8s-1.9.3-release1" ''
 ---> Using cache
 ---> f2d0357cb364
Successfully built f2d0357cb364
Sending build context to Docker daemon 2.56 kB
Step 1/5 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 6af39ea33818
Step 3/5 : ENV container docker
 ---> Using cache
 ---> 1211fd5eb075
Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all
 ---> Using cache
 ---> 7b90d68258cd
Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.9.3-release1" '' "vm-killer" ''
 ---> Using cache
 ---> 099273f0f9bb
Successfully built 099273f0f9bb
Sending build context to Docker daemon 5.12 kB
Step 1/7 : FROM debian:sid
 ---> 4817bb6590f8
Step 2/7 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> b8b166db2544
Step 3/7 : ENV container docker
 ---> Using cache
 ---> 8b120f56086f
Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> 61851ac93c11
Step 5/7 : ADD entry-point.sh /
 ---> Using cache
 ---> ada85930060d
Step 6/7 : CMD /entry-point.sh
 ---> Using cache
 ---> 6f2ffb0e7aed
Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.9.3-release1" '' "registry-disk-v1alpha" ''
 ---> Using cache
 ---> 7bcd9beb4f90
Successfully built 7bcd9beb4f90
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:32843/kubevirt/registry-disk-v1alpha:devel
 ---> 7bcd9beb4f90
Step 2/4 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> 191fcdfd4ec3
Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img
 ---> Using cache
 ---> 45856a2c92bf
Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.9.3-release1" ''
 ---> Using cache
 ---> c30b1faf204c
Successfully built c30b1faf204c
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:32843/kubevirt/registry-disk-v1alpha:devel
 ---> 7bcd9beb4f90
Step 2/4 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 24b19fa1e102
Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2
 ---> Using cache
 ---> 6ee8ac6f60a0
Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.9.3-release1" ''
 ---> Using cache
 ---> 7296bbdbfe8b
Successfully built 7296bbdbfe8b
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:32843/kubevirt/registry-disk-v1alpha:devel
 ---> 7bcd9beb4f90
Step 2/4 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 24b19fa1e102
Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso
 ---> Using cache
 ---> 11af2a57595c
Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.9.3-release1" ''
 ---> Using cache
 ---> aa30cbe84b68
Successfully built aa30cbe84b68
Sending build context to Docker daemon 33.97 MB
Step 1/8 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 6af39ea33818
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl
 ---> Using cache
 ---> 62cf8151a5f3
Step 4/8 : WORKDIR /home/virtctl
 ---> Using cache
 ---> 7df4da9e1b5d
Step 5/8 : USER 1001
 ---> Using cache
 ---> 3ee421ac4ad4
Step 6/8 : COPY subresource-access-test /subresource-access-test
 ---> Using cache
 ---> e9a6d6f3d00a
Step 7/8 : ENTRYPOINT /subresource-access-test
 ---> Using cache
 ---> d015b20b2eca
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.9.3-release1" '' "subresource-access-test" ''
 ---> Running in b1fce19c9669
 ---> e1b0b17e7ad5
Removing intermediate container b1fce19c9669
Successfully built e1b0b17e7ad5
Sending build context to Docker daemon 3.072 kB
Step 1/9 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/9 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 6af39ea33818
Step 3/9 : ENV container docker
 ---> Using cache
 ---> 1211fd5eb075
Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all
 ---> Using cache
 ---> 7ff1a45e3635
Step 5/9 : ENV GIMME_GO_VERSION 1.9.2
 ---> Using cache
 ---> a05ebaed4a0f
Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> cd8398be9593
Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> 71c7ecd55e24
Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli
 ---> Using cache
 ---> 9689e3184427
Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.9.3-release1" '' "winrmcli" ''
 ---> Using cache
 ---> 7ef75f63a644
Successfully built 7ef75f63a644
hack/build-docker.sh push
The push refers to a repository [localhost:32843/kubevirt/virt-controller]
38558087c1f1: Preparing
c0d2c4546d78: Preparing
39bae602f753: Preparing
c0d2c4546d78: Pushed
38558087c1f1: Pushed
39bae602f753: Pushed
devel: digest: sha256:4ca36379c568f50f35e45f84700f4842edfb61738bf39c8bd0952534757f030a size: 948
The push refers to a repository [localhost:32843/kubevirt/virt-launcher]
f5db20043a8e: Preparing
023e839110c9: Preparing
023e839110c9: Preparing
9c4a31a57326: Preparing
e17dc82850fc: Preparing
43f4441889e8: Preparing
9a88481bb54e: Preparing
5e5a394712de: Preparing
dcea01d1f799: Preparing
6a9c8a62fecd: Preparing
530cc55618cd: Preparing
34fa414dfdf6: Preparing
a1359dc556dd: Preparing
490c7c373332: Preparing
4b440db36f72: Preparing
39bae602f753: Preparing
9a88481bb54e: Waiting
5e5a394712de: Waiting
dcea01d1f799: Waiting
6a9c8a62fecd: Waiting
530cc55618cd: Waiting
34fa414dfdf6: Waiting
a1359dc556dd: Waiting
490c7c373332: Waiting
4b440db36f72: Waiting
39bae602f753: Waiting
f5db20043a8e: Pushed
023e839110c9: Pushed
9c4a31a57326: Pushed
43f4441889e8: Pushed
e17dc82850fc: Pushed
5e5a394712de: Pushed
dcea01d1f799: Pushed
530cc55618cd: Pushed
34fa414dfdf6: Pushed
490c7c373332: Pushed
a1359dc556dd: Pushed
39bae602f753: Mounted from kubevirt/virt-controller
6a9c8a62fecd: Pushed
9a88481bb54e: Pushed
4b440db36f72: Pushed
devel: digest: sha256:d91001929cfe78fdf0643763b290097b77027a849cbb8bb758b79a7525180080 size: 3653
The push refers to a repository [localhost:32843/kubevirt/virt-handler]
f7975ca5d223: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/virt-launcher
f7975ca5d223: Pushed
devel: digest: sha256:5add6891b9ff5e0e3bdb44311cca39d38df6293671a85153c40744570d3cb974 size: 740
The push refers to a repository [localhost:32843/kubevirt/virt-api]
219c3f8c6de8: Preparing
ae4970287372: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/virt-handler
ae4970287372: Pushed
219c3f8c6de8: Pushed
devel: digest: sha256:dec18a1c4994de041aea5c4005b32df30fd8f4a8a771fa11f8f8b27863c3cf25 size: 948
The push refers to a repository [localhost:32843/kubevirt/iscsi-demo-target-tgtd]
f9be666e6960: Preparing
3aff9cc5a3f0: Preparing
5d7022918814: Preparing
172fa0952bf3: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/virt-api
f9be666e6960: Pushed
5d7022918814: Pushed
3aff9cc5a3f0: Pushed
172fa0952bf3: Pushed
devel: digest: sha256:33291486fafe3fd8ab6e8b25ba7120c339750427f94d7bd5d877991c0059b3c8 size: 1368
The push refers to a repository [localhost:32843/kubevirt/vm-killer]
e3afff5758ce: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/iscsi-demo-target-tgtd
e3afff5758ce: Pushed
devel: digest: sha256:3e4fc2133dfbf0c3e4936ff9ea5e69bb5c4ecca90900a7c47f7805d22404a3a0 size: 740
The push refers to a repository [localhost:32843/kubevirt/registry-disk-v1alpha]
376d512574a4: Preparing
7971c2f81ae9: Preparing
e7752b410e4c: Preparing
376d512574a4: Pushed
7971c2f81ae9: Pushed
e7752b410e4c: Pushed
devel: digest: sha256:87817a18003d1db65a3ea534498bc8852a5a73e33b7031d6d52e1a5bed2cef74 size: 948
The push refers to a repository [localhost:32843/kubevirt/cirros-registry-disk-demo]
38ddba522b21: Preparing
376d512574a4: Preparing
7971c2f81ae9: Preparing
e7752b410e4c: Preparing
e7752b410e4c: Mounted from kubevirt/registry-disk-v1alpha
376d512574a4: Mounted from kubevirt/registry-disk-v1alpha
7971c2f81ae9: Mounted from kubevirt/registry-disk-v1alpha
38ddba522b21: Pushed
devel: digest: sha256:aebcc665c3bbbde9317e07f9978ecfc36746c3d8f837306b4d377c83c45f76ce size: 1160
The push refers to a repository [localhost:32843/kubevirt/fedora-cloud-registry-disk-demo]
a5fc3effe51f: Preparing
376d512574a4: Preparing
7971c2f81ae9: Preparing
e7752b410e4c: Preparing
7971c2f81ae9: Mounted from kubevirt/cirros-registry-disk-demo
e7752b410e4c: Mounted from kubevirt/cirros-registry-disk-demo
376d512574a4: Mounted from kubevirt/cirros-registry-disk-demo
a5fc3effe51f: Pushed
devel: digest: sha256:e4fa7efe0f682b97222f4ebe8827b0e288f76053dc0e7ed5818abd0b16fec5d3 size: 1161
The push refers to a repository [localhost:32843/kubevirt/alpine-registry-disk-demo]
673830a3d5b5: Preparing
376d512574a4: Preparing
7971c2f81ae9: Preparing
e7752b410e4c: Preparing
e7752b410e4c: Mounted from kubevirt/fedora-cloud-registry-disk-demo
376d512574a4: Mounted from kubevirt/fedora-cloud-registry-disk-demo
7971c2f81ae9: Mounted from kubevirt/fedora-cloud-registry-disk-demo
673830a3d5b5: Pushed
devel: digest: sha256:18d5516e1b8d25dd4c1833eb0f89a2a7025982b0db6a38ab13809138f3a25658 size: 1160
The push refers to a repository [localhost:32843/kubevirt/subresource-access-test]
a11064aa15a7: Preparing
2aaca144a3e2: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/vm-killer
2aaca144a3e2: Pushed
a11064aa15a7: Pushed
devel: digest: sha256:8f18c7aa9eaf06891b3ae0a4add3e9bad9c038e8d46ca139e752c0d57d7e6bc9 size: 948
The push refers to a repository [localhost:32843/kubevirt/winrmcli]
3cd438b33e81: Preparing
8519683f2557: Preparing
a29ba32ac0a1: Preparing
39bae602f753: Preparing
39bae602f753: Mounted from kubevirt/subresource-access-test
3cd438b33e81: Pushed
a29ba32ac0a1: Pushed
8519683f2557: Pushed
devel: digest: sha256:fb72d4faa0090c6e4b5163b5c065e714fe607a4ac25e92f5dd9904a3948cf0e8 size: 1165
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.9.3-release/go/src/kubevirt.io/kubevirt'
/usr/bin/docker-current: Error response from daemon: transport is closing.
make: *** [cluster-build] Error 125
+ make cluster-down
./cluster/down.sh
Error response from daemon: Cannot kill container 0daa9aef9d35353068e3c5143f8b4d6302fdc17567b1776013c832df11aa4c58: rpc error: code = 14 desc = grpc: the connection is unavailable