+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release
+ [[ windows2016-release =~ openshift-.* ]]
+ [[ windows2016-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.11.0
+ KUBEVIRT_PROVIDER=k8s-1.11.0
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/07/25 09:48:17 Waiting for host: 192.168.66.101:22
2018/07/25 09:48:20 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/25 09:48:32 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0725 09:48:32.763852    1228 feature_gate.go:230] feature gates: &{map[]}
I0725 09:48:32.888633    1228 kernel_validator.go:81] Validating kernel version
I0725 09:48:32.888916    1228 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 54.505353 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation
[bootstraptoken] using token: abcdef.1234567890123456
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:56a66c53ebeaa54ee33e58058476518bc22227f80a28bffdf43dd1eb8150060d

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node/node01 untainted
2018/07/25 09:49:45 Waiting for host: 192.168.66.102:22
2018/07/25 09:49:48 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/25 09:49:56 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/25 09:50:01 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
I0725 09:50:02.319461    1192 kernel_validator.go:81] Validating kernel version
I0725 09:50:02.319731    1192 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 38739968 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
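After node02 joins, the harness gates on node readiness before continuing. Its grep-for-NotReady pattern can be sketched against canned `kubectl get nodes --no-headers` output (the node table below is a hypothetical sample, not taken from a live cluster):

```shell
#!/usr/bin/env bash
# Sketch of the readiness gate applied after "kubeadm join".
# The node listing is a hypothetical sample standing in for:
#   kubectl get nodes --no-headers
nodes='node01 Ready master 51s v1.11.0
node02 Ready <none> 23s v1.11.0'

# grep exits non-zero when nothing matches, so guard with || true
# to keep "set -e" scripts alive when every node is Ready.
not_ready=$(printf '%s\n' "$nodes" | grep NotReady || true)

if [ -z "$not_ready" ]; then
  echo 'Nodes are ready:'
  printf '%s\n' "$nodes"
else
  echo "Still waiting on: $not_ready" >&2
fi
```

In the real harness this check runs in a loop against the live cluster; the `|| true` guard is the detail that matters, since the log above shows the script toggling `set +e`/`set -e` around the same grep.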
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 51s v1.11.0
node02 Ready 23s v1.11.0
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME STATUS ROLES AGE VERSION
node01 Ready master 52s v1.11.0
node02 Ready 24s v1.11.0
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:35094/kubevirt/virt-controller:devel
Untagged: localhost:35094/kubevirt/virt-controller@sha256:2b05980ca47246f596b31153e2699c1c60362496e37373ba94f1d73a01d41f33
Deleted: sha256:1e1fb733560589c79c88af4c90c434fc02b14d75c31561029a3d82de7a9ea7b0
Deleted: sha256:9728f275ce6f2b06df992c1ca242886b025311518cd3906ce89a9f58f156e07d
Deleted: sha256:2f0ce64961daf97b4ae960087d3564dcb530b85a00555b4dede9cc2e0fbc8517
Deleted: sha256:ee437f682a24d575d296b54afd31712db8da95b0e5113364457b6cff394a60ab
Untagged: localhost:35094/kubevirt/virt-launcher:devel
Untagged: localhost:35094/kubevirt/virt-launcher@sha256:491752de4e0accf37b0cf2aad197a72f3dff34ae18f3bad8fa4f8215bf71d913
Deleted: sha256:b7da4c2c424edefa8fb010206c9dfb3ba6e9afe0a9c4ba00803aadf714e099af
Deleted: sha256:2d7c5a7c4d8cac6e32393dad7e4086f46c75bb1303f1fc50ea5b083dbd1426d4
Deleted: sha256:cb70d438640dae162658dd7ecb4b4f1aac4dc0c6b9967196f8f18c28635cafe0
Deleted: sha256:43e03fbaa1b6cca3f26fab5d5d66c5632c6cdb0c000fa941171196ca5e63dfcc
Deleted: sha256:714d1bcc5bb890187be36d5feb577e03b0ec67f47511ef1841a7e536caa8f20c
Deleted: sha256:020a4f4d8dcbee9e5215e11cc2d2ef2841ba04a87a6f7c11cff7fffab27bb725
Deleted: sha256:dcc262039b99f437501e912f903a401f4479e82bb5919111040161f7e62028e6
Deleted: sha256:74d2a459136646dc95d99f109dca8e13382343a53e3d37371d284a7c31cfac96
Deleted: sha256:c1fc716935500cbe35697fb7d411c6962d8026537893be6ea670af7f4c02d81c
Deleted: sha256:dd43cdc919f5e67d8fe5d279d34b70a3118a841675444805710209fbacfe7b64
Deleted: sha256:42a104beb456cb4d19031096e3992f4ba24862d2cb8f6fcd1435febc3c2ad928
Deleted: sha256:17b33d0ead99056c3720a0f097e693ddd7439a65241bde2e4988dc809f13d1c0
Untagged: localhost:35094/kubevirt/virt-handler:devel
Untagged: localhost:35094/kubevirt/virt-handler@sha256:d5e648d1b09580e5b17433de8f7b96716860345a91233b789398f2612ccbd36c
Deleted: sha256:aedd5e15a1279f5505caec74babc10dfb28ba188de541ca12831aa728006b456
Deleted: sha256:a61a98730970b439c47517b9c0fdc72f50c8542677cc3315a94bc4a81747d5da
Deleted: sha256:b8881d4a02ab49aecae14ade81c300871f8481f6a43cacad54470053de90c294
Deleted: sha256:e3a0e05c7b0b1aaa56a36c00d0179d6e90787d06500130035666ea6142435f1d
Untagged: localhost:35094/kubevirt/virt-api:devel
Untagged: localhost:35094/kubevirt/virt-api@sha256:608d0927939b4576fe2cb54a49c0949c1acd1017de77ee549c38bb5316e1c616
Deleted: sha256:f71dcc2927af8af4bfac552296b238bc2570eca9d222e7b1bbc07bb386987635
Deleted: sha256:c2346b811268b9eb9799d30dd81e588b852bb5cec78321fa6b1a923bf264e78d
Deleted: sha256:fb11ba7056ca837b0d7b1c9d99c0c1871721be76a218b62cfd90cb720c99210e
Deleted: sha256:7ef5ec4ddee843c31359a6e9405b20d4df97249354fee3e6b0e8e3a951bd6cd3
Untagged: localhost:35094/kubevirt/subresource-access-test:devel
Untagged: localhost:35094/kubevirt/subresource-access-test@sha256:838fd75eb180d638549905535f9f8bbde5239d0e7288a22f3fcbc703ebea0311
Deleted: sha256:5b1c1fd0aab57903593098d5a0246adc67ddfc068f4a8ecf295a8976388a440a
Deleted: sha256:7f0e7437c968e6dcfcdef105c845d747e19a6039ce69106db3456b8be4720127
Deleted: sha256:45c28345fbd0c4c44d09cad73bd981b73a181848208d4ac39de9a35080f72a76
Deleted: sha256:0e931cc17cbcd5194f3a051d5a1f09d5c3e040856657bacda49cf09e7c7b3b53
Untagged: localhost:35094/kubevirt/example-hook-sidecar:devel
Untagged: localhost:35094/kubevirt/example-hook-sidecar@sha256:630c9a5f18bf4e5edd59dd2f19a111b031b36c5d5ae75a5b9e97688446c9c00f
Deleted: sha256:773a21c5a25e9f270e6aff6c20d8764929b2e1ce6897dd823d5c647a53a04004
Deleted: sha256:ffbb09e62ea0be8b393e175d53dd7e9cc17ba87e9b36e76d9625df915ff7fc88
Deleted: sha256:1ecb7773844836e2819ffe6d328b50ae5b6d44823178615f0c49e45d9cab1cbe
Deleted: sha256:1788bf004f8f727e326c1e23f1f05b80809c2632a35fde92956e7f13fa0bec96
sha256:7fb8539d32771bf74786d31102b8c102fc61586b172276b4710c6944077751f4
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-windows2016-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:7fb8539d32771bf74786d31102b8c102fc61586b172276b4710c6944077751f4
go version go1.10 linux/amd64
go version go1.10 linux/amd64
find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory
Compiling tests...
compiled tests.test
hack/build-docker.sh build
Sending build context to Docker daemon 40.35 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 82fe13c41cb7
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller
 ---> Using cache
 ---> e9589b9dbfb3
Step 4/8 : WORKDIR /home/virt-controller
 ---> Using cache
 ---> 6526953b7273
Step 5/8 : USER 1001
 ---> Using cache
 ---> 0da81e671cc6
Step 6/8 : COPY virt-controller /usr/bin/virt-controller
 ---> 670eedcf2c26
Removing intermediate container d9cc7cd1cf25
Step 7/8 : ENTRYPOINT /usr/bin/virt-controller
 ---> Running in ea4ee5f1be9b
 ---> 0dd9c28ff3e4
Removing intermediate container ea4ee5f1be9b
Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release0" '' "virt-controller" ''
 ---> Running in 08252b4e31d1
 ---> 3f171f9a5c66
Removing intermediate container 08252b4e31d1
Successfully built 3f171f9a5c66
Sending build context to Docker daemon 42.63 MB
Step 1/10 : FROM kubevirt/libvirt:4.2.0
 ---> 5f0bfe81a3e0
Step 2/10 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 8826ac178c51
Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107
 ---> Using cache
 ---> 5eb474bfa821
Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher
 ---> fce0a0a0770b
Removing intermediate container 39805efa1f56
Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt
 ---> 3a300679cfd2
Removing intermediate container 8f58bd58247f
Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64
 ---> Running in 4f6429d94266
 ---> cb51d032764d
Removing intermediate container 4f6429d94266
Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher
 ---> Running in 86aa969a45b0
 ---> 1985c92a91d4
Removing intermediate container 86aa969a45b0
Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector /usr/share/kubevirt/virt-launcher/
 ---> ac3a632a4b82
Removing intermediate container c07bd9c80620
Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh
 ---> Running in da9de2cabb95
 ---> 3ebba918dce0
Removing intermediate container da9de2cabb95
Step 10/10 : LABEL "kubevirt-functional-tests-windows2016-release0" '' "virt-launcher" ''
 ---> Running in ca679adfba3d
 ---> 7b2be6cfc639
Removing intermediate container ca679adfba3d
Successfully built 7b2be6cfc639
Sending build context to Docker daemon 41.65 MB
Step 1/5 : FROM fedora:28
 ---> cc510acfcd70
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 82fe13c41cb7
Step 3/5 : COPY virt-handler /usr/bin/virt-handler
 ---> f9efe51dd426
Removing intermediate container 694001b3f676
Step 4/5 : ENTRYPOINT /usr/bin/virt-handler
 ---> Running in 31efbc44d966
 ---> 13449c9320ae
Removing intermediate container 31efbc44d966
Step 5/5 : LABEL "kubevirt-functional-tests-windows2016-release0" '' "virt-handler" ''
 ---> Running in 157c5210e742
 ---> 80d2072325f7
Removing intermediate container 157c5210e742
Successfully built 80d2072325f7
Sending build context to Docker daemon 38.75 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 82fe13c41cb7
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api
 ---> Using cache
 ---> 1a58ff1483fa
Step 4/8 : WORKDIR /home/virt-api
 ---> Using cache
 ---> 87e30c5b4065
Step 5/8 : USER 1001
 ---> Using cache
 ---> e889af541bd0
Step 6/8 : COPY virt-api /usr/bin/virt-api
 ---> 3e435d133e48
Removing intermediate container 73240ef135d2
Step 7/8 : ENTRYPOINT /usr/bin/virt-api
 ---> Running in 64ac95642339
 ---> 3ae66a3cd1c7
Removing intermediate container 64ac95642339
Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release0" '' "virt-api" ''
 ---> Running in aafb1f3a7829
 ---> fb1172d72e42
Removing intermediate container aafb1f3a7829
Successfully built fb1172d72e42
Sending build context to Docker daemon 4.096 kB
Step 1/7 : FROM fedora:28
 ---> cc510acfcd70
Step 2/7 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 82fe13c41cb7
Step 3/7 : ENV container docker
 ---> Using cache
 ---> 6e6b2ef85e92
Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img
 ---> Using cache
 ---> 8e1d737ded1f
Step 5/7 : ADD entrypoint.sh /
 ---> Using cache
 ---> 104e48aa676f
Step 6/7 : CMD /entrypoint.sh
 ---> Using cache
 ---> 4ed9f69e6653
Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-windows2016-release0" ''
 ---> Using cache
 ---> 0586ecc0365a
Successfully built 0586ecc0365a
Sending build context to Docker daemon 2.56 kB
Step 1/5 : FROM fedora:28
 ---> cc510acfcd70
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 82fe13c41cb7
Step 3/5 : ENV container docker
 ---> Using cache
 ---> 6e6b2ef85e92
Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all
 ---> Using cache
 ---> d130857891a9
Step 5/5 : LABEL "kubevirt-functional-tests-windows2016-release0" '' "vm-killer" ''
 ---> Using cache
 ---> cbfc3cdabc83
Successfully built cbfc3cdabc83
Sending build context to Docker daemon 5.12 kB
Step 1/7 : FROM debian:sid
 ---> 496290160351
Step 2/7 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> 3b36b527fef8
Step 3/7 : ENV container docker
 ---> Using cache
 ---> b3ada414d649
Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> 337be6171fcb
Step 5/7 : ADD entry-point.sh /
 ---> Using cache
 ---> a98a961fa5a1
Step 6/7 : CMD /entry-point.sh
 ---> Using cache
 ---> 19baf5d1aab8
Step 7/7 : LABEL "kubevirt-functional-tests-windows2016-release0" '' "registry-disk-v1alpha" ''
 ---> Using cache
 ---> aaa0249a4a79
Successfully built aaa0249a4a79
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:35166/kubevirt/registry-disk-v1alpha:devel
 ---> aaa0249a4a79
Step 2/4 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> 6774d45318b3
Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img
 ---> Using cache
 ---> 0a9558f459b1
Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release0" ''
 ---> Using cache
 ---> 2676d5c090a6
Successfully built 2676d5c090a6
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:35166/kubevirt/registry-disk-v1alpha:devel
 ---> aaa0249a4a79
Step 2/4 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> ee2c1b8f8132
Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2
 ---> Using cache
 ---> cd441625add5
Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release0" ''
 ---> Using cache
 ---> 8bd344642910
Successfully built 8bd344642910
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:35166/kubevirt/registry-disk-v1alpha:devel
 ---> aaa0249a4a79
Step 2/4 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> ee2c1b8f8132
Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso
 ---> Using cache
 ---> 55741e0f2607
Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-windows2016-release0" ''
 ---> Using cache
 ---> f7bb95e38211
Successfully built f7bb95e38211
Sending build context to Docker daemon 35.56 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 82fe13c41cb7
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl
 ---> Using cache
 ---> f9cd90a6a0ef
Step 4/8 : WORKDIR /home/virtctl
 ---> Using cache
 ---> df6f2d83c1d6
Step 5/8 : USER 1001
 ---> Using cache
 ---> 56a7b7e6b8ff
Step 6/8 : COPY subresource-access-test /subresource-access-test
 ---> 280d8f9a5421
Removing intermediate container 2b274d4b9019
Step 7/8 : ENTRYPOINT /subresource-access-test
 ---> Running in 1151b3cf80df
 ---> a59bdff13a0a
Removing intermediate container 1151b3cf80df
Step 8/8 : LABEL "kubevirt-functional-tests-windows2016-release0" '' "subresource-access-test" ''
 ---> Running in 3f24997f543c
 ---> eceb2fb910c4
Removing intermediate container 3f24997f543c
Successfully built eceb2fb910c4
Sending build context to Docker daemon 3.072 kB
Step 1/9 : FROM fedora:28
 ---> cc510acfcd70
Step 2/9 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 82fe13c41cb7
Step 3/9 : ENV container docker
 ---> Using cache
 ---> 6e6b2ef85e92
Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all
 ---> Using cache
 ---> c1e9e769c4ba
Step 5/9 : ENV GIMME_GO_VERSION 1.9.2
 ---> Using cache
 ---> 6729c465203a
Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> 2aee087083e8
Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> e3795172dd73
Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli
 ---> Using cache
 ---> 0de2fc4b917f
Step 9/9 : LABEL "kubevirt-functional-tests-windows2016-release0" '' "winrmcli" ''
 ---> Using cache
 ---> 306a0a247da3
Successfully built 306a0a247da3
Sending build context to Docker daemon 36.77 MB
Step 1/5 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> b730b4ed65df
Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar
 ---> cfcd8a07b4e9
Removing intermediate container 7a536b483856
Step 4/5 : ENTRYPOINT /example-hook-sidecar
 ---> Running in 846c31a79caf
 ---> 2681026e9158
Removing intermediate container 846c31a79caf
Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-windows2016-release0" ''
 ---> Running in 48db1e87b271
 ---> b68abb4b2dea
Removing intermediate container 48db1e87b271
Successfully built b68abb4b2dea
hack/build-docker.sh push
The push refers to a repository [localhost:35166/kubevirt/virt-controller]
78c18003b31e: Preparing
ff9b9e61b9df: Preparing
891e1e4ef82a: Preparing
ff9b9e61b9df: Pushed
78c18003b31e: Pushed
891e1e4ef82a: Pushed
devel: digest: sha256:ad19fc92fd04116f1b3a94584f4c053a4cec91f914edfad91f781478152e062b size: 949
The push refers to a repository [localhost:35166/kubevirt/virt-launcher]
39fa0a29b274: Preparing
deac96ff1a3e: Preparing
53ec3a2bba64: Preparing
cb0208fcb710: Preparing
44a071de41de: Preparing
cfcba35fba84: Preparing
da38cf808aa5: Preparing
b83399358a92: Preparing
186d8b3e4fd8: Preparing
fa6154170bf5: Preparing
5eefb9960a36: Preparing
da38cf808aa5: Waiting
891e1e4ef82a: Preparing
186d8b3e4fd8: Waiting
fa6154170bf5: Waiting
b83399358a92: Waiting
5eefb9960a36: Waiting
deac96ff1a3e: Pushed
cb0208fcb710: Pushed
39fa0a29b274: Pushed
da38cf808aa5: Pushed
b83399358a92: Pushed
fa6154170bf5: Pushed
186d8b3e4fd8: Pushed
891e1e4ef82a: Mounted from kubevirt/virt-controller
53ec3a2bba64: Pushed
cfcba35fba84: Pushed
44a071de41de: Pushed
5eefb9960a36: Pushed
devel: digest: sha256:404d4555256914c3715bc55b8eaac738755b9a90eb73c5c07aea29479d4a6be6 size: 2828
The push refers to a repository [localhost:35166/kubevirt/virt-handler]
8377af6c9968: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-launcher
8377af6c9968: Pushed
devel: digest: sha256:b75959e54eb58ebc13cbae67626001e7c6fa6c514c52a0b0506a0cd348f8a1a5 size: 741
The push refers to a repository [localhost:35166/kubevirt/virt-api]
20c3bb7e5948: Preparing
5f1414e2d326: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-handler
5f1414e2d326: Pushed
20c3bb7e5948: Pushed
devel: digest: sha256:03c8c3bc7a002d3579c9d2e41f50d19b3a7e591a031c4aaf079df04b5388130c size: 948
The push refers to a repository [localhost:35166/kubevirt/disks-images-provider]
2e0da09ca39e: Preparing
4fe8becbb60f: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-api
2e0da09ca39e: Pushed
4fe8becbb60f: Pushed
devel: digest: sha256:2ca3322778b1bafc926a644d1671d71004961f030e1a1d6f4f8ea8802294a70c size: 948
The push refers to a repository [localhost:35166/kubevirt/vm-killer]
7b031fa3032f: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/disks-images-provider
7b031fa3032f: Pushed
devel: digest: sha256:b5e4c0c9df950559199e5f010a591919a6e6c468c8902d0d4ff9e209db11daac size: 740
The push refers to a repository [localhost:35166/kubevirt/registry-disk-v1alpha]
bfd12fa374fa: Preparing
18ac8ad2aee9: Preparing
132d61a890c5: Preparing
bfd12fa374fa: Pushed
hack/build-docker.sh: line 38: 15807 Terminated              docker $target ${docker_prefix}/${BIN_NAME}:${docker_tag}
make[1]: *** [push] Error 1
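The run aborts here: the `docker push` invoked at hack/build-docker.sh line 38 was terminated mid-transfer, and make propagates the failure. Transient failures against a local registry are commonly handled with a bounded retry; the sketch below shows such a wrapper (the `retry` helper and its parameters are hypothetical, not part of the KubeVirt build scripts, and the flaky stub merely simulates a command that succeeds on its third attempt):

```shell
#!/usr/bin/env bash
# Hypothetical bounded-retry wrapper for a flaky command, e.g.
#   retry 5 docker push localhost:35166/kubevirt/virt-launcher:devel
retry() {
  local attempts=$1; shift
  local delay=1 i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0                       # success: stop retrying
    echo "attempt $i/$attempts failed: $*" >&2
    sleep "$delay"
    delay=$((delay * 2))                   # exponential backoff
  done
  return 1                                 # give up after N attempts
}

# Demo with a stub that fails twice, then succeeds.
n=0
flaky() { n=$((n + 1)); [ "$n" -ge 3 ]; }
retry 5 flaky && echo "succeeded after $n attempts"
```

Whether a retry is appropriate depends on why the push was terminated; a job-level timeout or an out-of-space registry would fail the same way on every attempt.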