+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ [[ k8s-1.10.3-release =~ openshift-.* ]]
+ [[ k8s-1.10.3-release =~ .*-1.10.4-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.11.0
+ KUBEVIRT_PROVIDER=k8s-1.11.0
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/25 09:48:37 Waiting for host: 192.168.66.101:22
2018/07/25 09:48:40 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/25 09:48:49 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/25 09:48:54 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
I0725 09:48:55.217789    1114 feature_gate.go:230] feature gates: &{map[]}
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0725 09:48:55.416160    1114 kernel_validator.go:81] Validating kernel version
I0725 09:48:55.416492    1114 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01 localhost] and IPs [192.168.66.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 54.512228 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation
[bootstraptoken] using token: abcdef.1234567890123456
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:aa35bdc93f3e69b820fd1125409c479fa8d54de4f604b7e563d6998926cd477f + kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.extensions/kube-flannel-ds created + kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule- node/node01 untainted 2018/07/25 09:50:06 Waiting for host: 192.168.66.102:22 2018/07/25 09:50:09 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/07/25 09:50:17 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s 2018/07/25 09:50:23 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: connection refused. Sleeping 5s 2018/07/25 09:50:28 Connected to tcp://192.168.66.102:22 + kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true [preflight] running pre-flight checks [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}] you can solve this problem with following methods: 1. Run 'modprobe -- ' to load missing kernel modules; 2. Provide the missing builtin kernel ipvs support I0725 09:50:30.893042 1236 kernel_validator.go:81] Validating kernel version I0725 09:50:30.893670 1236 kernel_validator.go:96] Validating kernel config [discovery] Trying to connect to API Server "192.168.66.101:6443" [discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443" [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443" [discovery] Successfully established connection with API Server "192.168.66.101:6443" [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [preflight] Activating the kubelet service [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap... [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node02" as an annotation This node has joined the cluster: * Certificate signing request was sent to master and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the master to see this node join the cluster. Sending file modes: C0755 38739968 kubectl Sending file modes: C0600 5454 admin.conf Cluster "kubernetes" set. Cluster "kubernetes" set. 
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    59s       v1.11.0
node02    Ready     <none>    23s       v1.11.0
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    1m        v1.11.0
node02    Ready     <none>    24s       v1.11.0
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33215/kubevirt/virt-controller:devel
Untagged: localhost:33215/kubevirt/virt-controller@sha256:99722fb761b10ced27d70be1607a3b0d982f306a7335e07291283f610b14b111
Deleted: sha256:3904869efb71a1ba41fd2c89cf8d0bb9debed06679437d71915fb69a21070c0c
Deleted: sha256:c6d99f2705347679759d00f063cafdb037d47dee552242eec20bd512f95f390a
Deleted: sha256:a2a6aa2d8e91d06ce879eda3f26295060c9d7d050740e9dc03e889b709a4a8fb
Deleted: sha256:7747fc733caf2022a8714f2203cdd60345f9b7a63cb629ecc97c993400ecec9a
Untagged: localhost:33215/kubevirt/virt-launcher:devel
Untagged: localhost:33215/kubevirt/virt-launcher@sha256:3d97fb944187c4bbda1f90ae697c7ec54d5d071dad6baca2be142710a91926e6
Deleted: sha256:3840c94a6d3693e684ad9052be1e60d31477bce598e40e910b22e936c97909a0
Deleted: sha256:114dfa905b12a4e984fe5516f2a52394309b45053c9b2bd58ff5cb5c856eb263
Deleted: sha256:ccf1cc83ffadf10d600e87009f0bf3231ebabf528881d5e7b653e142cee2720f
Deleted: sha256:8816884d0705afb9fe772c4e5dc2a03bdb9e204ce769dc6027c5476e938e868f
Deleted: sha256:89acc0318efb9265d1532988bd9f860d53d5fad1e203953837d95063af299cc6
Deleted: sha256:cb5d43b984cfb010222e7ef8c147e39a5e83c15471d97fc8d172bf5cd76058f4
Deleted: sha256:0500511602161b95272634b8e4079d913ee4a148a28f80ce71958195bf4b8988
Deleted: sha256:1e01635c8955f07de4dc3ee1bd4d1850a5ec5c20cb4c4e580526b22b8ac2551f
Deleted: sha256:e88d72929ed41d03894992812604839d8dbef3ef22f2460da139282490aaa337
Deleted: sha256:4a5ff1529fdaca6b6284a39793e22c17e6d3e17d1d71727fba8bde106f80fa04
Deleted: sha256:19537853bfdd5b616ef4489f5ad3e36d2186a8b4e5e8bbbfe0db47c2db2c4bad
Deleted: sha256:8b462c660a48b5aa4315d402c232e24ea3dbe0915953ef4f14951cb94eba9a9a
Untagged: localhost:33215/kubevirt/virt-handler:devel
Untagged: localhost:33215/kubevirt/virt-handler@sha256:704d2653ca150a6c5419f83eeb400699df7422e924b77f6c5ef8caaeeb3cfc65
Deleted: sha256:0fe5a6daa2dbab337b70a0e9131a293bb97e1a2927b45d7c626dedaab377ee37
Deleted: sha256:cd787523d7e772d34bf3beeca601f9f918ef11929e4dc1d2414878bb401a9e04
Deleted: sha256:dcafd349df5ff7a0f1b3627da3195cba3f9dc6fd023405b9e8b2fe5c439369a4
Deleted: sha256:7b4a6342bc9f786b20ae350d5252e084033b234f61ee130c2cc4fb3a2b195873
Untagged: localhost:33215/kubevirt/virt-api:devel
Untagged: localhost:33215/kubevirt/virt-api@sha256:7f4ac5defe8d2337f6cd74559159dada7bf0abe7566d0cd66721a2f2dcfdc399
Deleted: sha256:bee08aecd600e6d7e6f1ae47de78b05fb38006b0c5b22ea72d5f3268fdad8220
Deleted: sha256:672ec8ee960c44fd5fd73217bad5348351a2ae478910c06461356e63e1b64513
Deleted: sha256:e9e5be56bcbbbd6d78a245e183d1d292c92bc86c7fec08a709b4997867ea33e9
Deleted: sha256:9d83972eec9f9af321f33a2c95d9a7b4d042111a13781d3803c04ce2b8871372
Untagged: localhost:33215/kubevirt/subresource-access-test:devel
Untagged: localhost:33215/kubevirt/subresource-access-test@sha256:0e7e156ef24f6402bd76691db9b47a0106382841e122287318912fecb2066d20
Deleted: sha256:87e724360f7744d1a40049edc0ab32d58ae1946f03fb810a56a6bfbc7b48045a
Deleted: sha256:32540828d3104a9807951146811de664fc2bf2b621d98b57416c881e1e93d1e3
Deleted: sha256:037fbfe67d95714ebaa54aae33a3ca1c8a0a8908d8e3858ffd907ec99cef24a9
Deleted: sha256:8b1b8bdfc51305026e22d7728b2a7e379cf5b39fdabaeef37576a8b6aba851e4
Untagged: localhost:33215/kubevirt/example-hook-sidecar:devel
Untagged: localhost:33215/kubevirt/example-hook-sidecar@sha256:e768ebb3460bb931b59f2e18ed63220c9b7f44d06191e715fa7d0c5337000d23
Deleted: sha256:7d01c136d0ac0098d7907c45a7fd11ef61697371fa2ed497ccf4ee4019126227
Deleted: sha256:fa15a800e7906b7348e34999cb4ed285856b2c1994ecb4cdf63fb02d3dea9077
Deleted: sha256:8a4b21d32f7d7bd8701447f0f05e8d639e4d6e83d02465ed0d24ed1f8b444e73
Deleted: sha256:7d7221e3c06fc6993ab60a41cf43e610ede20986a396d15e770c1c2c20d102d2
sha256:8314c812ee3200233db076e79036b39759d30fc3dd7fe921b323b19b14306fd6
go version go1.10 linux/amd64
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:8314c812ee3200233db076e79036b39759d30fc3dd7fe921b323b19b14306fd6
go version go1.10 linux/amd64
go version go1.10 linux/amd64
find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory
Compiling tests...
compiled tests.test
hack/build-docker.sh build
Sending build context to Docker daemon 40.35 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 2405aa62579a
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller
 ---> Using cache
 ---> 1ac62e99a9e7
Step 4/8 : WORKDIR /home/virt-controller
 ---> Using cache
 ---> c7b69424a0c5
Step 5/8 : USER 1001
 ---> Using cache
 ---> e60ed5d8e78a
Step 6/8 : COPY virt-controller /usr/bin/virt-controller
 ---> 5e42a5a1691d
Removing intermediate container 36791cc4034e
Step 7/8 : ENTRYPOINT /usr/bin/virt-controller
 ---> Running in 46756fbdb33d
 ---> 9bd51b70dc1d
Removing intermediate container 46756fbdb33d
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-controller" ''
 ---> Running in d07b5c9692da
 ---> 3ce43ebc960c
Removing intermediate container d07b5c9692da
Successfully built 3ce43ebc960c
Sending build context to Docker daemon 42.63 MB
Step 1/10 : FROM kubevirt/libvirt:4.2.0
 ---> 5f0bfe81a3e0
Step 2/10 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 65f548d54a2e
Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107
 ---> Using cache
 ---> 04ae26de19c4
Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher
 ---> f59450c73503
Removing intermediate container 9ab3cae9b25c
Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt
 ---> 3dabc2b95233
Removing intermediate container caa70593c281
Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64
 ---> Running in ebec34229769
 ---> db9f33196811
Removing intermediate container ebec34229769
Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher
 ---> Running in 41b50b779b95
 ---> 400fb2b7c4cb
Removing intermediate container 41b50b779b95
Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector /usr/share/kubevirt/virt-launcher/
 ---> 19f59dca367a
Removing intermediate container 1f7ced9eae50
Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh
 ---> Running in d0005c150aa2
 ---> 480afa1989aa
Removing intermediate container d0005c150aa2
Step 10/10 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-launcher" ''
 ---> Running in e8db4abb2dec
 ---> 40a7f0a8ee91
Removing intermediate container e8db4abb2dec
Successfully built 40a7f0a8ee91
Sending build context to Docker daemon 41.65 MB
Step 1/5 : FROM fedora:28
 ---> cc510acfcd70
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 2405aa62579a
Step 3/5 : COPY virt-handler /usr/bin/virt-handler
 ---> 2a0fdadb1a31
Removing intermediate container 2aab3f5d1b5b
Step 4/5 : ENTRYPOINT /usr/bin/virt-handler
 ---> Running in d08b95715526
 ---> 503c1c6bd10e
Removing intermediate container d08b95715526
Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-handler" ''
 ---> Running in 904ce440eb0e
 ---> cbf04a101c99
Removing intermediate container 904ce440eb0e
Successfully built cbf04a101c99
Sending build context to Docker daemon 38.75 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 2405aa62579a
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api
 ---> Using cache
 ---> 830d77e8a3bb
Step 4/8 : WORKDIR /home/virt-api
 ---> Using cache
 ---> 7075b0c3cdfd
Step 5/8 : USER 1001
 ---> Using cache
 ---> 4e21374fdc1d
Step 6/8 : COPY virt-api /usr/bin/virt-api
 ---> 595ab50416ad
Removing intermediate container 5897fabbfded
Step 7/8 : ENTRYPOINT /usr/bin/virt-api
 ---> Running in 8f47f32050ff
 ---> d8f3eac01684
Removing intermediate container 8f47f32050ff
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-api" ''
 ---> Running in e906afa12769
 ---> 8b9ed9d16427
Removing intermediate container e906afa12769
Successfully built 8b9ed9d16427
Sending build context to Docker daemon 4.096 kB
Step 1/7 : FROM fedora:28
 ---> cc510acfcd70
Step 2/7 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 2405aa62579a
Step 3/7 : ENV container docker
 ---> Using cache
 ---> 3370e25ee81a
Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img
 ---> Using cache
 ---> 3f571283fdaa
Step 5/7 : ADD entrypoint.sh /
 ---> Using cache
 ---> 2722b024d103
Step 6/7 : CMD /entrypoint.sh
 ---> Using cache
 ---> 8458081a089b
Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.10.3-release0" ''
 ---> Using cache
 ---> 95c52cb94d0f
Successfully built 95c52cb94d0f
Sending build context to Docker daemon 2.56 kB
Step 1/5 : FROM fedora:28
 ---> cc510acfcd70
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 2405aa62579a
Step 3/5 : ENV container docker
 ---> Using cache
 ---> 3370e25ee81a
Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all
 ---> Using cache
 ---> 006e94a74def
Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "vm-killer" ''
 ---> Using cache
 ---> b96459304131
Successfully built b96459304131
Sending build context to Docker daemon 5.12 kB
Step 1/7 : FROM debian:sid
 ---> 496290160351
Step 2/7 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> 081acc82039b
Step 3/7 : ENV container docker
 ---> Using cache
 ---> 87a43203841c
Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> bbc83781e0a9
Step 5/7 : ADD entry-point.sh /
 ---> Using cache
 ---> c588d7a778a6
Step 6/7 : CMD /entry-point.sh
 ---> Using cache
 ---> e28b44b64988
Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "registry-disk-v1alpha" ''
 ---> Using cache
 ---> 15dee9c3f228
Successfully built 15dee9c3f228
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33583/kubevirt/registry-disk-v1alpha:devel
 ---> 15dee9c3f228
Step 2/4 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> 59e724975b36
Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img
 ---> Using cache
 ---> 5aab327c7d42
Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" ''
 ---> Using cache
 ---> 6267f6181ea0
Successfully built 6267f6181ea0
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33583/kubevirt/registry-disk-v1alpha:devel
 ---> 15dee9c3f228
Step 2/4 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 7226abe32103
Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2
 ---> Using cache
 ---> e77a7d24125c
Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" ''
 ---> Using cache
 ---> 1f65ea7e845f
Successfully built 1f65ea7e845f
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33583/kubevirt/registry-disk-v1alpha:devel
 ---> 15dee9c3f228
Step 2/4 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 7226abe32103
Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso
 ---> Using cache
 ---> 69497b9af146
Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" ''
 ---> Using cache
 ---> 696b2b381ecc
Successfully built 696b2b381ecc
Sending build context to Docker daemon 35.56 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 2405aa62579a
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl
 ---> Using cache
 ---> 939ec18dc9a4
Step 4/8 : WORKDIR /home/virtctl
 ---> Using cache
 ---> 52b6bf037d32
Step 5/8 : USER 1001
 ---> Using cache
 ---> 1e1560e0af32
Step 6/8 : COPY subresource-access-test /subresource-access-test
 ---> dd6426e1c018
Removing intermediate container 2fc9b86a0f5f
Step 7/8 : ENTRYPOINT /subresource-access-test
 ---> Running in 3663533ca436
 ---> d52eaf22b83e
Removing intermediate container 3663533ca436
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "subresource-access-test" ''
 ---> Running in d7a6d5a8ac8d
 ---> e9902136a40a
Removing intermediate container d7a6d5a8ac8d
Successfully built e9902136a40a
Sending build context to Docker daemon 3.072 kB
Step 1/9 : FROM fedora:28
 ---> cc510acfcd70
Step 2/9 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 2405aa62579a
Step 3/9 : ENV container docker
 ---> Using cache
 ---> 3370e25ee81a
Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all
 ---> Using cache
 ---> 3129352c97b1
Step 5/9 : ENV GIMME_GO_VERSION 1.9.2
 ---> Using cache
 ---> fbcd5a15f974
Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> 6e560dc836a0
Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> 8a916bbc2352
Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli
 ---> Using cache
 ---> 72d00ac082db
Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "winrmcli" ''
 ---> Using cache
 ---> a78ab99f56bf
Successfully built a78ab99f56bf
Sending build context to Docker daemon 36.77 MB
Step 1/5 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 0ae71e3c9e56
Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar
 ---> 5c24481a38b6
Removing intermediate container 3a65c84334ac
Step 4/5 : ENTRYPOINT /example-hook-sidecar
 ---> Running in 6a0b0554cdcc
 ---> 5dceb17a4e59
Removing intermediate container 6a0b0554cdcc
Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-k8s-1.10.3-release0" ''
 ---> Running in b661cbf305a3
 ---> 2bc2ac3a2d42
Removing intermediate container b661cbf305a3
Successfully built 2bc2ac3a2d42
hack/build-docker.sh push
The push refers to a repository [localhost:33583/kubevirt/virt-controller]
72711d2debc7: Preparing
d07058c760ad: Preparing
891e1e4ef82a: Preparing
d07058c760ad: Pushed
72711d2debc7: Pushed
891e1e4ef82a: Pushed
devel: digest: sha256:cf94f4bcb8e9c0e1c55fa196b939ebd2683f80880f60442016d9e99e57af2c52 size: 949
The push refers to a repository [localhost:33583/kubevirt/virt-launcher]
882c41122551: Preparing
12979f5df33e: Preparing
b59ee165c872: Preparing
8d5f545fffb9: Preparing
cc2350ca1023: Preparing
53f12636d41e: Preparing
da38cf808aa5: Preparing
b83399358a92: Preparing
186d8b3e4fd8: Preparing
fa6154170bf5: Preparing
5eefb9960a36: Preparing
891e1e4ef82a: Preparing
53f12636d41e: Waiting
da38cf808aa5: Waiting
b83399358a92: Waiting
5eefb9960a36: Waiting
fa6154170bf5: Waiting
186d8b3e4fd8: Waiting
891e1e4ef82a: Waiting
8d5f545fffb9: Pushed
882c41122551: Pushed
12979f5df33e: Pushed
da38cf808aa5: Pushed
b83399358a92: Pushed
186d8b3e4fd8: Pushed
fa6154170bf5: Pushed
891e1e4ef82a: Mounted from kubevirt/virt-controller
b59ee165c872: Pushed
53f12636d41e: Pushed
cc2350ca1023: Pushed
5eefb9960a36: Pushed
devel: digest: sha256:1f7c276ad8ceee74b246e525c8c9b38a0e5edb968763b8fb34404c1b9a2453ce size: 2828
The push refers to a repository [localhost:33583/kubevirt/virt-handler]
5bae6def574c: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-launcher
5bae6def574c: Pushed
devel: digest: sha256:0eed088efb368cf64eb0fddd8b64a4ff484420995e662b12717a0b981f0cb3aa size: 741
The push refers to a repository [localhost:33583/kubevirt/virt-api]
ede0636423dc: Preparing
25755ffecaf3: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-handler
25755ffecaf3: Pushed
ede0636423dc: Pushed
devel: digest: sha256:05b3dd89d5c8b5bc4c50a1dc456cc00a79c660da6e5289aa6f973f28a15f0c47 size: 948
The push refers to a repository [localhost:33583/kubevirt/disks-images-provider]
5ffe52947a94: Preparing
a1bc751fc8a2: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-api
5ffe52947a94: Pushed
hack/build-docker.sh: line 38:  5325 Terminated              docker $target ${docker_prefix}/${BIN_NAME}:${docker_tag}
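For reference, the whole job reduces to a short make-driven workflow. A minimal sketch, assuming a KubeVirt source checkout and the environment exported at the top of the trace (the final `make cluster-sync` is the step that was terminated above; note the job's own trap also lists SIGSTOP, which the shell cannot actually catch, so it is omitted here):

    #!/bin/bash
    set -e

    # Environment selected by the CI job (values copied from the trace).
    export KUBEVIRT_PROVIDER=k8s-1.11.0   # kubeadm-based provider
    export KUBEVIRT_NUM_NODES=2           # node01 (master) + node02
    export NAMESPACE=kube-system

    # Always tear the cluster down again, even on failure or interrupt.
    trap '{ make cluster-down; }' EXIT SIGINT SIGTERM

    make cluster-down    # remove any leftover cluster from a previous run
    make cluster-up      # provision both nodes and bootstrap Kubernetes
    make cluster-sync    # build the KubeVirt images and push them to the
                         # cluster-local registry (localhost:33583 above)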