+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev
+ [[ vagrant-dev =~ openshift-.* ]]
+ export PROVIDER=k8s-1.9.3
+ PROVIDER=k8s-1.9.3
+ export VAGRANT_NUM_NODES=1
+ VAGRANT_NUM_NODES=1
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
kubevirt-functional-tests-vagrant-dev0-node01
2018/04/10 06:49:24 Waiting for host: 192.168.66.101:22
2018/04/10 06:49:27 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/04/10 06:49:40 Connected to tcp://192.168.66.101:22
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
  [WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
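
Editor's note: the cluster lifecycle this job drives can be summarized by the following minimal sketch, reconstructed from the trace above (it is not the verbatim CI script; the variable values are the ones this run exported):

  export PROVIDER=k8s-1.9.3            # ephemeral provider consumed by cluster/up.sh
  export VAGRANT_NUM_NODES=1           # one worker node in addition to the master
  export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
  export NAMESPACE=kube-system
  trap '{ make cluster-down; }' EXIT   # always tear the cluster down when the job exits
  make cluster-down                    # remove any leftover cluster from a previous run
  make cluster-up                      # boots node01/node02 and runs kubeadm init/join
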
[apiclient] All control plane components are healthy after 30.503462 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --discovery-token-ca-cert-hash sha256:4f0af0ee6574ea93ea4199aa15335a5c456fd97422ed5240c2e256ac8c6de409

clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
node "node01" untainted
kubevirt-functional-tests-vagrant-dev0-node02
2018/04/10 06:50:25 Waiting for host: 192.168.66.102:22
2018/04/10 06:50:28 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/04/10 06:50:40 Connected to tcp://192.168.66.102:22
[preflight] Running pre-flight checks.
  [WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

2018/04/10 06:50:42 Waiting for host: 192.168.66.101:22
2018/04/10 06:50:42 Connected to tcp://192.168.66.101:22
Warning: Permanently added '[127.0.0.1]:33024' (ECDSA) to the list of known hosts.
Warning: Permanently added '[127.0.0.1]:33024' (ECDSA) to the list of known hosts.
Cluster "kubernetes" set.
Cluster "kubernetes" set.
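
Editor's note: the kubeadm output above already spells out the follow-up steps; the provisioning performed here is roughly equivalent to the sketch below. The token, CA hash, and node names are the ones printed by this run; the flannel manifest path is illustrative only, since the log shows just the flannel objects being created.

  # on node01 (master): make the admin kubeconfig usable, install the pod network, untaint the master
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  kubectl apply -f kube-flannel.yml                                        # illustrative path
  kubectl taint nodes node01 node-role.kubernetes.io/master:NoSchedule-    # matches: node "node01" untainted

  # on node02 (worker): join using the token and CA hash printed above
  kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 \
      --discovery-token-ca-cert-hash sha256:4f0af0ee6574ea93ea4199aa15335a5c456fd97422ed5240c2e256ac8c6de409
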
++ kubectl get nodes --no-headers ++ cluster/kubectl.sh get nodes --no-headers ++ grep -v Ready + '[' -n '' ']' + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 NotReady master 31s v1.9.3 node02 NotReady 3s v1.9.3 + make cluster-sync ./cluster/build.sh Building ... Untagged: localhost:32843/kubevirt/virt-controller:devel Untagged: localhost:32843/kubevirt/virt-controller@sha256:c4d21271ad970410b869e04ac3d48755c6121812c0723bcec1d7d81dde95a038 Deleted: sha256:2a88f731ec2cd520884832548865bacbeea6944fa1e21eaffbab43adc3e88580 Deleted: sha256:f49e42b30b5fbdce6b3ab18b5247a86ef7b32c22574a24077b090939d9b920c7 Deleted: sha256:4c73ec20a285e5915be03e6e99a3c602e83b8e5bf86ffaeb9cc82994b4ecad08 Deleted: sha256:8b34976f589aaa7b89329fbe31ae521a00b24de868a56f37c0a27cfff21a09eb Untagged: localhost:32843/kubevirt/virt-launcher:devel Untagged: localhost:32843/kubevirt/virt-launcher@sha256:cdd93671d8f66d8b69bbd88f299576aff2eaeb7ccf93fe9d14e1eb8fcf79fcf6 Deleted: sha256:39dd13b9b55def025772e4227f01b159a4e5a0c516063bd62084441184647d80 Deleted: sha256:eabc384f50fcc2fd2148f8970ee2c47a1e69a1fb7defe505c05a82ee9b05d6e7 Deleted: sha256:f4d2c902146e7c54693ecb0ffe0d5a19a3466b0074b2f3d52c40a962e665292a Deleted: sha256:fb6d93e64cb19423e3f927620240ad5ccf990afa3d21f90449aaaf0ff22f4bb8 Deleted: sha256:24282b2df7249b4814eef6bf24b237188e8e1acfabc04d50e6ce31072a603bd9 Deleted: sha256:c0f8d1f6dc51e491818c5f46e276b76cee348b51d2b0929351218437f147681f Deleted: sha256:0bf1eb6bdc2c66ee221977c5af3a853ae463003b732ea8ad795dafc0e19f009f Deleted: sha256:6e6d2fb35e4eb59dad40a99f07e1d36edf224ce0f2dc5313d6a891c021e089a0 Deleted: sha256:0b2fac83b77f233b757d1702af99ae1c2a7a48c2349eaba21962cc27035731eb Deleted: sha256:c857d5dceddddfed24047971501cc248a76fbe34112ad7b4550140f2bb8e73bc Deleted: sha256:d1581987537bf4bda7279d988bdb3fb8aeae08c9e379787701643ffa54514a86 Deleted: sha256:4dada5bfedb59bb09d15e03241fbfc6625ed14cb7cc41496a5dbd27dea38de99 Deleted: sha256:7682bde4a6f8f2bfda3c504569b1a9ee54bc28fc1c51eeb334efc1e1fb6c8a5f Deleted: sha256:73ab9a5392748d77fb7e49daa80920b31a951a6450f44cc898d8c7e76368c41e Deleted: sha256:be3bbcd7b6ac0a1dd3c422f94b68edcdf0574e475653caffd34b67a8dba2c38a Deleted: sha256:07f609fcf082b059d48526e1b156e24b5b86afb82a29c7fed92804c1bfd41ed4 Untagged: localhost:32843/kubevirt/virt-handler:devel Untagged: localhost:32843/kubevirt/virt-handler@sha256:128d7611152dc2072f6fc163c5879f63cdad0cae54c1d6b438b8aedd7fab1b9b Deleted: sha256:85ccb6744dce491ed176dad0ca1c6e5a30517cd9102d766a70bc43979e9c2578 Deleted: sha256:32618c6e72c9983861183c4fc8c119154175803757d63225feee0388cee4b037 Deleted: sha256:87f65c2d656146174fb315a4102bc5e62b603eb4b25f35d464fdeb220bcc63a9 Deleted: sha256:b2758d55bbea63c7d0ab178f72868f585b592d50fbe61f533845a7c15b9c8595 Untagged: localhost:32843/kubevirt/virt-api:devel Untagged: localhost:32843/kubevirt/virt-api@sha256:a1a8328a247b991fc49818fa563668d5a4d0b9baa1d192e86e5c55060bfb83cf Deleted: sha256:96d7234ee6405264693b0fc1a40f26741e97269eb54b5e36c26a4bff523c670a Deleted: sha256:3043ddca18b11717df6e95c63b7cd2010ebd731904d583514d27bdf9d9378291 Deleted: sha256:f5716c82c7206acb68cf45f8022f2554e561a6378542e49f96c66e23659ff282 Deleted: sha256:778b2acdb4ec81ad3a1682d9db4ff7f9d264fd2d05f3b794587f31e516fdafc0 Untagged: localhost:32843/kubevirt/subresource-access-test:devel Untagged: localhost:32843/kubevirt/subresource-access-test@sha256:8387b52f5c117c61440d1bdf38be7b96a1e66b8da22fb2020e702bde51ba91bd Deleted: 
sha256:b6453bc8503472d9088490ca572dd5bcd66cc14b1e65af4f349603bfc9840f9f Deleted: sha256:9baf9b2a5549ce4196019d6cc4077271f1ae433e970fc150db6907063d7df982 Deleted: sha256:5cfe1a4493fa3f8ff486e4f86d679e156ae8ad4b7070a3e6b56219a461f62918 Deleted: sha256:6a93037ddf5606534f88f76561a716147e36796d26196a9891bbda2161f93271 sha256:c9cd67dd05efdb07ffa24a86eeeca6419b8df7c6db7187fd750b9f83be5ac0b4 go version go1.9.2 linux/amd64 rsync: read error: Connection reset by peer (104) rsync error: error in rsync protocol data stream (code 12) at io.c(764) [sender=3.0.9] Waiting for rsyncd to be ready skipping directory . go version go1.9.2 linux/amd64 702730c96a6b887a569876970dae0619b86c07989e79967595c08b739d2c9f1d 702730c96a6b887a569876970dae0619b86c07989e79967595c08b739d2c9f1d make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && ./hack/build-go.sh install " sha256:c9cd67dd05efdb07ffa24a86eeeca6419b8df7c6db7187fd750b9f83be5ac0b4 go version go1.9.2 linux/amd64 skipping directory . go version go1.9.2 linux/amd64 Compiling tests... compiled tests.test f1da035fa3517116916dcf17c4a42c2381553b8cd279204d1ff39ac543f3c533 f1da035fa3517116916dcf17c4a42c2381553b8cd279204d1ff39ac543f3c533 hack/build-docker.sh build sending incremental file list ./ Dockerfile kubernetes.repo sent 854 bytes received 53 bytes 1814.00 bytes/sec total size is 1167 speedup is 1.29 Sending build context to Docker daemon 35.7 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0c81c3a7ddef Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 26d08b2f873c Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> d73a973f334b Step 5/8 : USER 1001 ---> Using cache ---> 92d14ee3fb2e Step 6/8 : COPY virt-controller /virt-controller ---> cd89aab5cc96 Removing intermediate container 56b5cbb42585 Step 7/8 : ENTRYPOINT /virt-controller ---> Running in 42dbf98cc803 ---> 0c37c6fb7451 Removing intermediate container 42dbf98cc803 Step 8/8 : LABEL "kubevirt-functional-tests-vagrant-dev0" '' "virt-controller" '' ---> Running in 97b8f0cd589f ---> 40660678d881 Removing intermediate container 97b8f0cd589f Successfully built 40660678d881 sending incremental file list ./ Dockerfile entrypoint.sh kubevirt-sudo libvirtd.sh sh.sh sock-connector sent 3289 bytes received 129 bytes 6836.00 bytes/sec total size is 5479 speedup is 1.60 Sending build context to Docker daemon 37.44 MB Step 1/14 : FROM kubevirt/libvirt:3.7.0 ---> 60c80c8f7523 Step 2/14 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> fdd57f83e446 Step 3/14 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> b824b882c94e Step 4/14 : COPY sock-connector /sock-connector ---> Using cache ---> 8cbe8006a6c1 Step 5/14 : COPY sh.sh /sh.sh ---> Using cache ---> 2e47a3c4a4f3 Step 6/14 : COPY virt-launcher /virt-launcher ---> c15aaa67266c Removing intermediate container 70cf5a0a464e Step 7/14 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> f12c117be3c1 Removing intermediate container c3b3e454fa66 Step 8/14 : RUN chmod 0640 /etc/sudoers.d/kubevirt ---> Running in b7b410c43767  ---> ea69d3fd28eb Removing intermediate container b7b410c43767 Step 9/14 : RUN rm -f /libvirtd.sh ---> Running in 44059609dae4  ---> 9966e5b19074 Removing intermediate container 
44059609dae4 Step 10/14 : COPY libvirtd.sh /libvirtd.sh ---> 5598cf1c128c Removing intermediate container e6f8f5dfb34e Step 11/14 : RUN chmod a+x /libvirtd.sh ---> Running in 3e95af272ca2  ---> 7b834c5ca06e Removing intermediate container 3e95af272ca2 Step 12/14 : COPY entrypoint.sh /entrypoint.sh ---> c420b88fcf94 Removing intermediate container c1cea412c472 Step 13/14 : ENTRYPOINT /entrypoint.sh ---> Running in 545ac71aa9ec ---> 6daf902902a7 Removing intermediate container 545ac71aa9ec Step 14/14 : LABEL "kubevirt-functional-tests-vagrant-dev0" '' "virt-launcher" '' ---> Running in 77ecf1aa8b91 ---> 30a16715aca9 Removing intermediate container 77ecf1aa8b91 Successfully built 30a16715aca9 sending incremental file list ./ Dockerfile sent 585 bytes received 34 bytes 1238.00 bytes/sec total size is 775 speedup is 1.25 Sending build context to Docker daemon 36.37 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0c81c3a7ddef Step 3/5 : COPY virt-handler /virt-handler ---> 70c8d3f70565 Removing intermediate container 18a14fa4068d Step 4/5 : ENTRYPOINT /virt-handler ---> Running in 054fc295302c ---> 9accdbf7a3c9 Removing intermediate container 054fc295302c Step 5/5 : LABEL "kubevirt-functional-tests-vagrant-dev0" '' "virt-handler" '' ---> Running in 01594c7cc0ba ---> 7df13e3cfe84 Removing intermediate container 01594c7cc0ba Successfully built 7df13e3cfe84 sending incremental file list ./ Dockerfile sent 864 bytes received 34 bytes 1796.00 bytes/sec total size is 1377 speedup is 1.53 Sending build context to Docker daemon 36.12 MB Step 1/9 : FROM fedora:27 ---> 9110ae7f579f Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0c81c3a7ddef Step 3/9 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 1fa3abd9386c Step 4/9 : WORKDIR /home/virt-api ---> Using cache ---> 18a3eb497210 Step 5/9 : USER 1001 ---> Using cache ---> a0af12a5aa54 Step 6/9 : RUN curl -OL https://github.com/swagger-api/swagger-ui/tarball/38f74164a7062edb5dc80ef2fdddda24f3f6eb85/swagger-ui.tar.gz && mkdir swagger-ui && tar xf swagger-ui.tar.gz -C swagger-ui --strip-components 1 && mkdir third_party && mv swagger-ui/dist third_party/swagger-ui && rm -rf swagger-ui && sed -e 's@"http://petstore.swagger.io/v2/swagger.json"@"/swaggerapi/"@' -i third_party/swagger-ui/index.html && rm swagger-ui.tar.gz && rm -rf swagger-ui ---> Using cache ---> f1fdc4d03e6f Step 7/9 : COPY virt-api /virt-api ---> f2ca2e4cc0cc Removing intermediate container 1df78e8a7837 Step 8/9 : ENTRYPOINT /virt-api ---> Running in db54c3a303f2 ---> f90ea5df2fa5 Removing intermediate container db54c3a303f2 Step 9/9 : LABEL "kubevirt-functional-tests-vagrant-dev0" '' "virt-api" '' ---> Running in a4f4de88db06 ---> 956b089bda7c Removing intermediate container a4f4de88db06 Successfully built 956b089bda7c sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/cmd/iscsi-demo-target-tgtd ./ Dockerfile run-tgt.sh sent 2185 bytes received 53 bytes 4476.00 bytes/sec total size is 3992 speedup is 1.78 Sending build context to Docker daemon 6.656 kB Step 1/10 : FROM fedora:27 ---> 9110ae7f579f Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0c81c3a7ddef Step 3/10 : ENV container docker ---> Using cache ---> d0b0dc01cb5d Step 4/10 : RUN dnf -y install scsi-target-utils bzip2 e2fsprogs ---> Using cache ---> 35c00214c275 Step 5/10 : RUN mkdir -p /images ---> Using 
cache ---> e3e179183ea6 Step 6/10 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/1-alpine.img ---> Using cache ---> e86b61826c05 Step 7/10 : ADD run-tgt.sh / ---> Using cache ---> db2dc53efd9e Step 8/10 : EXPOSE 3260 ---> Using cache ---> f2767bc543c9 Step 9/10 : CMD /run-tgt.sh ---> Using cache ---> c066b080f396 Step 10/10 : LABEL "iscsi-demo-target-tgtd" '' "kubevirt-functional-tests-vagrant-dev0" '' ---> Using cache ---> a62e09a90439 Successfully built a62e09a90439 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/cmd/vm-killer ./ Dockerfile sent 602 bytes received 34 bytes 1272.00 bytes/sec total size is 787 speedup is 1.24 Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0c81c3a7ddef Step 3/5 : ENV container docker ---> Using cache ---> d0b0dc01cb5d Step 4/5 : RUN dnf -y install procps-ng && dnf -y clean all ---> Using cache ---> 35c59baf794d Step 5/5 : LABEL "kubevirt-functional-tests-vagrant-dev0" '' "vm-killer" '' ---> Using cache ---> d9b634127cd2 Successfully built d9b634127cd2 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/cmd/registry-disk-v1alpha ./ Dockerfile entry-point.sh sent 1529 bytes received 53 bytes 3164.00 bytes/sec total size is 2482 speedup is 1.57 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> bcec0ae8107e Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> f20a31819d03 Step 3/7 : ENV container docker ---> Using cache ---> 96277a0619bb Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> f43fc40caf89 Step 5/7 : ADD entry-point.sh / ---> Using cache ---> a7c8be03fcd6 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 3129ff620dd1 Step 7/7 : LABEL "kubevirt-functional-tests-vagrant-dev0" '' "registry-disk-v1alpha" '' ---> Using cache ---> 10092ac302b0 Successfully built 10092ac302b0 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/cmd/cirros-registry-disk-demo ./ Dockerfile sent 630 bytes received 34 bytes 1328.00 bytes/sec total size is 825 speedup is 1.24 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33023/kubevirt/registry-disk-v1alpha:devel ---> 10092ac302b0 Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> ecf612e37866 Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> 0700689361d5 Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-vagrant-dev0" '' ---> Using cache ---> cece58b6602b Successfully built cece58b6602b sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/cmd/fedora-cloud-registry-disk-demo ./ Dockerfile sent 677 bytes received 34 bytes 1422.00 bytes/sec total size is 926 speedup is 1.30 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33023/kubevirt/registry-disk-v1alpha:devel ---> 10092ac302b0 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0e16fcde7122 Step 3/4 : RUN curl -g -L 
https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> 3d1109633a8f Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-vagrant-dev0" '' ---> Using cache ---> 6366d3566b01 Successfully built 6366d3566b01 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/cmd/alpine-registry-disk-demo ./ Dockerfile sent 639 bytes received 34 bytes 1346.00 bytes/sec total size is 866 speedup is 1.29 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33023/kubevirt/registry-disk-v1alpha:devel ---> 10092ac302b0 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0e16fcde7122 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> d5a5d9b06f90 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-vagrant-dev0" '' ---> Using cache ---> 4c2752d6cfc7 Successfully built 4c2752d6cfc7 sending incremental file list ./ Dockerfile sent 660 bytes received 34 bytes 1388.00 bytes/sec total size is 918 speedup is 1.32 Sending build context to Docker daemon 33.59 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0c81c3a7ddef Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> b2e31dc7e946 Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 73d45796737a Step 5/8 : USER 1001 ---> Using cache ---> 1a3751300d87 Step 6/8 : COPY subresource-access-test /subresource-access-test ---> bcb2851b5627 Removing intermediate container 761ead3e8e0d Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in bade7a82936d ---> 37d5be5adc44 Removing intermediate container bade7a82936d Step 8/8 : LABEL "kubevirt-functional-tests-vagrant-dev0" '' "subresource-access-test" '' ---> Running in 8452b670b960 ---> 66a0efec7e41 Removing intermediate container 8452b670b960 Successfully built 66a0efec7e41 sending incremental file list created directory /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/cmd/winrmcli ./ Dockerfile sent 773 bytes received 34 bytes 1614.00 bytes/sec total size is 1098 speedup is 1.36 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:27 ---> 9110ae7f579f Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0c81c3a7ddef Step 3/9 : ENV container docker ---> Using cache ---> d0b0dc01cb5d Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> 5ed65400bde3 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> fa084e1bba91 Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 701a1c98b779 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> d78f779f0759 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 7ba98a98fa7f Step 9/9 : LABEL "kubevirt-functional-tests-vagrant-dev0" '' "winrmcli" '' ---> Using cache ---> b84f33dbd1e8 Successfully built b84f33dbd1e8 hack/build-docker.sh push The push refers to a repository [localhost:33023/kubevirt/virt-controller] da0e823de6b4: Preparing 2d2b47e1e58b: Preparing 39bae602f753: Preparing 
39bae602f753: Layer already exists 2d2b47e1e58b: Layer already exists da0e823de6b4: Pushed devel: digest: sha256:f1f214befa6f7cdc880958d4dc6902125477e4ca71b59c0531cf797f4f37ba31 size: 948 The push refers to a repository [localhost:33023/kubevirt/virt-launcher] 3e177177a8e0: Preparing b91463637b2f: Preparing b91463637b2f: Preparing ce9458211bd7: Preparing 0192ddcd3e03: Preparing f26bfed3f04b: Preparing d67865c92519: Preparing 569987a574d7: Preparing 9170be750ee2: Preparing 91a2c9242f65: Preparing 530cc55618cd: Preparing 34fa414dfdf6: Preparing a1359dc556dd: Preparing 490c7c373332: Preparing 4b440db36f72: Preparing 39bae602f753: Preparing d67865c92519: Waiting 569987a574d7: Waiting 530cc55618cd: Waiting 9170be750ee2: Waiting 91a2c9242f65: Waiting 4b440db36f72: Waiting a1359dc556dd: Waiting 39bae602f753: Waiting 490c7c373332: Waiting b91463637b2f: Layer already exists 3e177177a8e0: Layer already exists f26bfed3f04b: Pushed ce9458211bd7: Pushed 0192ddcd3e03: Pushed 569987a574d7: Layer already exists 9170be750ee2: Layer already exists 91a2c9242f65: Layer already exists 530cc55618cd: Layer already exists 490c7c373332: Layer already exists a1359dc556dd: Layer already exists 34fa414dfdf6: Layer already exists 4b440db36f72: Layer already exists 39bae602f753: Layer already exists d67865c92519: Pushed devel: digest: sha256:b0ab7cebdff5bbc735da4a3816011e96dc246ff9c0c97cbfc5733c4a05a09853 size: 3652 The push refers to a repository [localhost:33023/kubevirt/virt-handler] f7684ce8e942: Preparing 39bae602f753: Preparing 39bae602f753: Layer already exists f7684ce8e942: Pushed devel: digest: sha256:a524dab23df376a215379077f4b0f92c847b1b5169c48c3c6750afc8102d6362 size: 740 The push refers to a repository [localhost:33023/kubevirt/virt-api] 5f6fa3170852: Preparing fb0518caa616: Preparing 3b1bf9c72a92: Preparing 39bae602f753: Preparing fb0518caa616: Layer already exists 39bae602f753: Layer already exists 3b1bf9c72a92: Layer already exists 5f6fa3170852: Pushed devel: digest: sha256:d1a3274cc2fc7b6fc01ed9a46f07d0b2455570d2e8a342dab8fe916d45b681e0 size: 1159 The push refers to a repository [localhost:33023/kubevirt/iscsi-demo-target-tgtd] 2927410cd43a: Preparing b121fc13ece8: Preparing 18dd75eb79d2: Preparing 716441edb530: Preparing 39bae602f753: Preparing 2927410cd43a: Layer already exists 18dd75eb79d2: Layer already exists b121fc13ece8: Layer already exists 716441edb530: Layer already exists 39bae602f753: Layer already exists devel: digest: sha256:fa8ebe2d5799c7977a55d4772e271f600e96e99c1062466a1ab7319c345b5b3f size: 1368 The push refers to a repository [localhost:33023/kubevirt/vm-killer] de7d92f6c129: Preparing 39bae602f753: Preparing 39bae602f753: Layer already exists de7d92f6c129: Layer already exists devel: digest: sha256:ebd0d04c286534acfc48d5b53e2db622993e3a417512e678f2449befb44ba6c5 size: 740 The push refers to a repository [localhost:33023/kubevirt/registry-disk-v1alpha] cf42eba6bfe3: Preparing a87a1c350b94: Preparing 6709b2da72b8: Preparing 6709b2da72b8: Layer already exists a87a1c350b94: Layer already exists cf42eba6bfe3: Layer already exists devel: digest: sha256:429b4382b1fbe4c314757a50abf5bc0b83d14c32bcb39a7053fe68cf07112df9 size: 948 The push refers to a repository [localhost:33023/kubevirt/cirros-registry-disk-demo] 256ac6227078: Preparing cf42eba6bfe3: Preparing a87a1c350b94: Preparing 6709b2da72b8: Preparing cf42eba6bfe3: Layer already exists 6709b2da72b8: Layer already exists a87a1c350b94: Layer already exists 256ac6227078: Layer already exists devel: digest: 
sha256:ba3b73fba9f880ad6243cfd9fd55fdc1fb03283dc27592728730584ea08f73ad size: 1160 The push refers to a repository [localhost:33023/kubevirt/fedora-cloud-registry-disk-demo] 935d63a8d20c: Preparing cf42eba6bfe3: Preparing a87a1c350b94: Preparing 6709b2da72b8: Preparing 6709b2da72b8: Layer already exists cf42eba6bfe3: Layer already exists 935d63a8d20c: Layer already exists a87a1c350b94: Layer already exists devel: digest: sha256:7b914c9dbb19232758721df3194f872b30c921665aa8720f0119dad2a8db6957 size: 1161 The push refers to a repository [localhost:33023/kubevirt/alpine-registry-disk-demo] 08c2525b9dcd: Preparing cf42eba6bfe3: Preparing a87a1c350b94: Preparing 6709b2da72b8: Preparing a87a1c350b94: Layer already exists 6709b2da72b8: Layer already exists cf42eba6bfe3: Layer already exists 08c2525b9dcd: Layer already exists devel: digest: sha256:babd744cde2fb81ba28b184584bf77e91d76c253bfaef537239e041dcd5117dc size: 1160 The push refers to a repository [localhost:33023/kubevirt/subresource-access-test] 2583b8dbd30f: Preparing cddcea6287d1: Preparing 39bae602f753: Preparing 39bae602f753: Layer already exists cddcea6287d1: Layer already exists 2583b8dbd30f: Pushed devel: digest: sha256:c7cacafaef5a153e9de5a094cc924e44346531a027b1a5ff225b9ef9a243c931 size: 948 The push refers to a repository [localhost:33023/kubevirt/winrmcli] e247ea19f4a7: Preparing b9a86fe2f8d2: Preparing c8eb97d247eb: Preparing 39bae602f753: Preparing 39bae602f753: Layer already exists c8eb97d247eb: Layer already exists e247ea19f4a7: Layer already exists b9a86fe2f8d2: Layer already exists devel: digest: sha256:f9de3014339c8fdb1feb491cbe90fd2412ce88006798fe384352e45488117833 size: 1165 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt' 2018/04/10 06:54:23 Waiting for host: 192.168.66.101:22 2018/04/10 06:54:23 Connected to tcp://192.168.66.101:22 Trying to pull repository registry:5000/kubevirt/virt-controller ... devel: Pulling from registry:5000/kubevirt/virt-controller 2176639d844b: Pulling fs layer 15a3c1110beb: Pulling fs layer f4c05428e58f: Pulling fs layer 15a3c1110beb: Verifying Checksum 15a3c1110beb: Download complete f4c05428e58f: Verifying Checksum f4c05428e58f: Download complete 2176639d844b: Verifying Checksum 2176639d844b: Download complete 2176639d844b: Pull complete 15a3c1110beb: Pull complete f4c05428e58f: Pull complete Digest: sha256:f1f214befa6f7cdc880958d4dc6902125477e4ca71b59c0531cf797f4f37ba31 Trying to pull repository registry:5000/kubevirt/virt-launcher ... 
devel: Pulling from registry:5000/kubevirt/virt-launcher 2176639d844b: Already exists d7240bccd145: Pulling fs layer f2ef945504a7: Pulling fs layer a4b9e9eb807b: Pulling fs layer a1e80189bea5: Pulling fs layer 6cc174edcebf: Pulling fs layer 1475fa3276a7: Pulling fs layer cdad54148243: Pulling fs layer 42cab4ea66c5: Pulling fs layer 34276d2358e1: Pulling fs layer 1f3f3efdf95c: Pulling fs layer 76efcdf33787: Pulling fs layer 6c2239e8e566: Pulling fs layer 0b0c9d022f56: Pulling fs layer 4e38029166c7: Pulling fs layer a1e80189bea5: Waiting 6cc174edcebf: Waiting 1475fa3276a7: Waiting cdad54148243: Waiting 42cab4ea66c5: Waiting 34276d2358e1: Waiting 1f3f3efdf95c: Waiting 76efcdf33787: Waiting 6c2239e8e566: Waiting 0b0c9d022f56: Waiting 4e38029166c7: Waiting a4b9e9eb807b: Verifying Checksum a4b9e9eb807b: Download complete f2ef945504a7: Verifying Checksum f2ef945504a7: Download complete a1e80189bea5: Verifying Checksum a1e80189bea5: Download complete 6cc174edcebf: Verifying Checksum 6cc174edcebf: Download complete cdad54148243: Verifying Checksum cdad54148243: Download complete 42cab4ea66c5: Verifying Checksum 42cab4ea66c5: Download complete 34276d2358e1: Verifying Checksum 34276d2358e1: Download complete 1f3f3efdf95c: Verifying Checksum 1f3f3efdf95c: Download complete 1475fa3276a7: Verifying Checksum 1475fa3276a7: Download complete 6c2239e8e566: Verifying Checksum 6c2239e8e566: Download complete 76efcdf33787: Verifying Checksum 76efcdf33787: Download complete 0b0c9d022f56: Verifying Checksum 0b0c9d022f56: Download complete 4e38029166c7: Verifying Checksum 4e38029166c7: Download complete d7240bccd145: Verifying Checksum d7240bccd145: Download complete d7240bccd145: Pull complete f2ef945504a7: Pull complete a4b9e9eb807b: Pull complete a1e80189bea5: Pull complete 6cc174edcebf: Pull complete 1475fa3276a7: Pull complete cdad54148243: Pull complete 42cab4ea66c5: Pull complete 34276d2358e1: Pull complete 1f3f3efdf95c: Pull complete 76efcdf33787: Pull complete 6c2239e8e566: Pull complete 0b0c9d022f56: Pull complete 4e38029166c7: Pull complete Digest: sha256:b0ab7cebdff5bbc735da4a3816011e96dc246ff9c0c97cbfc5733c4a05a09853 Trying to pull repository registry:5000/kubevirt/virt-handler ... devel: Pulling from registry:5000/kubevirt/virt-handler 2176639d844b: Already exists a84b858e3e4b: Pulling fs layer a84b858e3e4b: Download complete a84b858e3e4b: Pull complete Digest: sha256:a524dab23df376a215379077f4b0f92c847b1b5169c48c3c6750afc8102d6362 Trying to pull repository registry:5000/kubevirt/virt-api ... devel: Pulling from registry:5000/kubevirt/virt-api 2176639d844b: Already exists ef1a3b124206: Pulling fs layer a2a214c1d089: Pulling fs layer 17cc6a3e0037: Pulling fs layer ef1a3b124206: Verifying Checksum ef1a3b124206: Download complete a2a214c1d089: Verifying Checksum a2a214c1d089: Download complete 17cc6a3e0037: Verifying Checksum 17cc6a3e0037: Download complete ef1a3b124206: Pull complete a2a214c1d089: Pull complete 17cc6a3e0037: Pull complete Digest: sha256:d1a3274cc2fc7b6fc01ed9a46f07d0b2455570d2e8a342dab8fe916d45b681e0 Trying to pull repository registry:5000/kubevirt/iscsi-demo-target-tgtd ... 
devel: Pulling from registry:5000/kubevirt/iscsi-demo-target-tgtd 2176639d844b: Already exists c81a49dbe8f3: Pulling fs layer 064679d2dcf5: Pulling fs layer bcaf437d3487: Pulling fs layer de7654dd0183: Pulling fs layer de7654dd0183: Waiting 064679d2dcf5: Verifying Checksum 064679d2dcf5: Download complete de7654dd0183: Verifying Checksum de7654dd0183: Download complete bcaf437d3487: Verifying Checksum bcaf437d3487: Download complete c81a49dbe8f3: Verifying Checksum c81a49dbe8f3: Download complete c81a49dbe8f3: Pull complete 064679d2dcf5: Pull complete bcaf437d3487: Pull complete de7654dd0183: Pull complete Digest: sha256:fa8ebe2d5799c7977a55d4772e271f600e96e99c1062466a1ab7319c345b5b3f Trying to pull repository registry:5000/kubevirt/vm-killer ... devel: Pulling from registry:5000/kubevirt/vm-killer 2176639d844b: Already exists 297a75761f45: Pulling fs layer 297a75761f45: Verifying Checksum 297a75761f45: Download complete 297a75761f45: Pull complete Digest: sha256:ebd0d04c286534acfc48d5b53e2db622993e3a417512e678f2449befb44ba6c5 Trying to pull repository registry:5000/kubevirt/registry-disk-v1alpha ... devel: Pulling from registry:5000/kubevirt/registry-disk-v1alpha 2115d46e7396: Pulling fs layer 82da67e25ceb: Pulling fs layer ddbdb5d1f9b9: Pulling fs layer ddbdb5d1f9b9: Verifying Checksum ddbdb5d1f9b9: Download complete 82da67e25ceb: Verifying Checksum 82da67e25ceb: Download complete 2115d46e7396: Download complete 2115d46e7396: Pull complete 82da67e25ceb: Pull complete ddbdb5d1f9b9: Pull complete Digest: sha256:429b4382b1fbe4c314757a50abf5bc0b83d14c32bcb39a7053fe68cf07112df9 Trying to pull repository registry:5000/kubevirt/cirros-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/cirros-registry-disk-demo 2115d46e7396: Already exists 82da67e25ceb: Already exists ddbdb5d1f9b9: Already exists 969a8f289e12: Pulling fs layer 969a8f289e12: Verifying Checksum 969a8f289e12: Download complete 969a8f289e12: Pull complete Digest: sha256:ba3b73fba9f880ad6243cfd9fd55fdc1fb03283dc27592728730584ea08f73ad Trying to pull repository registry:5000/kubevirt/fedora-cloud-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/fedora-cloud-registry-disk-demo 2115d46e7396: Already exists 82da67e25ceb: Already exists ddbdb5d1f9b9: Already exists f8db44c4ac4e: Pulling fs layer f8db44c4ac4e: Verifying Checksum f8db44c4ac4e: Download complete f8db44c4ac4e: Pull complete Digest: sha256:7b914c9dbb19232758721df3194f872b30c921665aa8720f0119dad2a8db6957 Trying to pull repository registry:5000/kubevirt/alpine-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/alpine-registry-disk-demo 2115d46e7396: Already exists 82da67e25ceb: Already exists ddbdb5d1f9b9: Already exists 36177419f209: Pulling fs layer 36177419f209: Verifying Checksum 36177419f209: Download complete 36177419f209: Pull complete Digest: sha256:babd744cde2fb81ba28b184584bf77e91d76c253bfaef537239e041dcd5117dc Trying to pull repository registry:5000/kubevirt/subresource-access-test ... devel: Pulling from registry:5000/kubevirt/subresource-access-test 2176639d844b: Already exists fdc994a640c8: Pulling fs layer 2e195d49b277: Pulling fs layer fdc994a640c8: Download complete fdc994a640c8: Pull complete 2e195d49b277: Verifying Checksum 2e195d49b277: Download complete 2e195d49b277: Pull complete Digest: sha256:c7cacafaef5a153e9de5a094cc924e44346531a027b1a5ff225b9ef9a243c931 Trying to pull repository registry:5000/kubevirt/winrmcli ... 
devel: Pulling from registry:5000/kubevirt/winrmcli 2176639d844b: Already exists 206ce0e712a7: Pulling fs layer b7c152dd4760: Pulling fs layer 7e51cf11cbfa: Pulling fs layer 7e51cf11cbfa: Verifying Checksum 7e51cf11cbfa: Download complete b7c152dd4760: Verifying Checksum b7c152dd4760: Download complete 206ce0e712a7: Verifying Checksum 206ce0e712a7: Download complete 206ce0e712a7: Pull complete b7c152dd4760: Pull complete 7e51cf11cbfa: Pull complete Digest: sha256:f9de3014339c8fdb1feb491cbe90fd2412ce88006798fe384352e45488117833 2018/04/10 06:56:31 Waiting for host: 192.168.66.101:22 2018/04/10 06:56:31 Connected to tcp://192.168.66.101:22 2018/04/10 06:56:37 Waiting for host: 192.168.66.102:22 2018/04/10 06:56:37 Connected to tcp://192.168.66.102:22 Trying to pull repository registry:5000/kubevirt/virt-controller ... devel: Pulling from registry:5000/kubevirt/virt-controller 2176639d844b: Pulling fs layer 15a3c1110beb: Pulling fs layer f4c05428e58f: Pulling fs layer 15a3c1110beb: Verifying Checksum 15a3c1110beb: Download complete f4c05428e58f: Verifying Checksum f4c05428e58f: Download complete 2176639d844b: Verifying Checksum 2176639d844b: Download complete 2176639d844b: Pull complete 15a3c1110beb: Pull complete f4c05428e58f: Pull complete Digest: sha256:f1f214befa6f7cdc880958d4dc6902125477e4ca71b59c0531cf797f4f37ba31 Trying to pull repository registry:5000/kubevirt/virt-launcher ... devel: Pulling from registry:5000/kubevirt/virt-launcher 2176639d844b: Already exists d7240bccd145: Pulling fs layer f2ef945504a7: Pulling fs layer a4b9e9eb807b: Pulling fs layer a1e80189bea5: Pulling fs layer 6cc174edcebf: Pulling fs layer 1475fa3276a7: Pulling fs layer cdad54148243: Pulling fs layer 42cab4ea66c5: Pulling fs layer 34276d2358e1: Pulling fs layer 1f3f3efdf95c: Pulling fs layer 76efcdf33787: Pulling fs layer 6c2239e8e566: Pulling fs layer 0b0c9d022f56: Pulling fs layer 4e38029166c7: Pulling fs layer a1e80189bea5: Waiting 6cc174edcebf: Waiting 1475fa3276a7: Waiting cdad54148243: Waiting 42cab4ea66c5: Waiting 34276d2358e1: Waiting 1f3f3efdf95c: Waiting 76efcdf33787: Waiting 6c2239e8e566: Waiting 0b0c9d022f56: Waiting 4e38029166c7: Waiting f2ef945504a7: Verifying Checksum f2ef945504a7: Download complete a1e80189bea5: Download complete a4b9e9eb807b: Download complete 6cc174edcebf: Verifying Checksum 6cc174edcebf: Download complete cdad54148243: Download complete 42cab4ea66c5: Verifying Checksum 42cab4ea66c5: Download complete 34276d2358e1: Verifying Checksum 34276d2358e1: Download complete 1f3f3efdf95c: Verifying Checksum 1f3f3efdf95c: Download complete 76efcdf33787: Verifying Checksum 76efcdf33787: Download complete 6c2239e8e566: Verifying Checksum 6c2239e8e566: Download complete 0b0c9d022f56: Verifying Checksum 0b0c9d022f56: Download complete 4e38029166c7: Verifying Checksum 4e38029166c7: Download complete 1475fa3276a7: Verifying Checksum 1475fa3276a7: Download complete d7240bccd145: Verifying Checksum d7240bccd145: Download complete d7240bccd145: Pull complete f2ef945504a7: Pull complete a4b9e9eb807b: Pull complete a1e80189bea5: Pull complete 6cc174edcebf: Pull complete 1475fa3276a7: Pull complete cdad54148243: Pull complete 42cab4ea66c5: Pull complete 34276d2358e1: Pull complete 1f3f3efdf95c: Pull complete 76efcdf33787: Pull complete 6c2239e8e566: Pull complete 0b0c9d022f56: Pull complete 4e38029166c7: Pull complete Digest: sha256:b0ab7cebdff5bbc735da4a3816011e96dc246ff9c0c97cbfc5733c4a05a09853 Trying to pull repository registry:5000/kubevirt/virt-handler ... 
devel: Pulling from registry:5000/kubevirt/virt-handler 2176639d844b: Already exists a84b858e3e4b: Pulling fs layer a84b858e3e4b: Verifying Checksum a84b858e3e4b: Download complete a84b858e3e4b: Pull complete Digest: sha256:a524dab23df376a215379077f4b0f92c847b1b5169c48c3c6750afc8102d6362 Trying to pull repository registry:5000/kubevirt/virt-api ... devel: Pulling from registry:5000/kubevirt/virt-api 2176639d844b: Already exists ef1a3b124206: Pulling fs layer a2a214c1d089: Pulling fs layer 17cc6a3e0037: Pulling fs layer a2a214c1d089: Verifying Checksum a2a214c1d089: Download complete ef1a3b124206: Verifying Checksum ef1a3b124206: Download complete 17cc6a3e0037: Verifying Checksum 17cc6a3e0037: Download complete ef1a3b124206: Pull complete a2a214c1d089: Pull complete 17cc6a3e0037: Pull complete Digest: sha256:d1a3274cc2fc7b6fc01ed9a46f07d0b2455570d2e8a342dab8fe916d45b681e0 Trying to pull repository registry:5000/kubevirt/iscsi-demo-target-tgtd ... devel: Pulling from registry:5000/kubevirt/iscsi-demo-target-tgtd 2176639d844b: Already exists c81a49dbe8f3: Pulling fs layer 064679d2dcf5: Pulling fs layer bcaf437d3487: Pulling fs layer de7654dd0183: Pulling fs layer de7654dd0183: Waiting 064679d2dcf5: Verifying Checksum 064679d2dcf5: Download complete de7654dd0183: Verifying Checksum de7654dd0183: Download complete bcaf437d3487: Verifying Checksum bcaf437d3487: Download complete c81a49dbe8f3: Verifying Checksum c81a49dbe8f3: Download complete c81a49dbe8f3: Pull complete 064679d2dcf5: Pull complete bcaf437d3487: Pull complete de7654dd0183: Pull complete Digest: sha256:fa8ebe2d5799c7977a55d4772e271f600e96e99c1062466a1ab7319c345b5b3f Trying to pull repository registry:5000/kubevirt/vm-killer ... devel: Pulling from registry:5000/kubevirt/vm-killer 2176639d844b: Already exists 297a75761f45: Pulling fs layer 297a75761f45: Download complete 297a75761f45: Pull complete Digest: sha256:ebd0d04c286534acfc48d5b53e2db622993e3a417512e678f2449befb44ba6c5 Trying to pull repository registry:5000/kubevirt/registry-disk-v1alpha ... devel: Pulling from registry:5000/kubevirt/registry-disk-v1alpha 2115d46e7396: Pulling fs layer 82da67e25ceb: Pulling fs layer ddbdb5d1f9b9: Pulling fs layer ddbdb5d1f9b9: Verifying Checksum ddbdb5d1f9b9: Download complete 82da67e25ceb: Verifying Checksum 82da67e25ceb: Download complete 2115d46e7396: Verifying Checksum 2115d46e7396: Download complete 2115d46e7396: Pull complete 82da67e25ceb: Pull complete ddbdb5d1f9b9: Pull complete Digest: sha256:429b4382b1fbe4c314757a50abf5bc0b83d14c32bcb39a7053fe68cf07112df9 Trying to pull repository registry:5000/kubevirt/cirros-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/cirros-registry-disk-demo 2115d46e7396: Already exists 82da67e25ceb: Already exists ddbdb5d1f9b9: Already exists 969a8f289e12: Pulling fs layer 969a8f289e12: Verifying Checksum 969a8f289e12: Download complete 969a8f289e12: Pull complete Digest: sha256:ba3b73fba9f880ad6243cfd9fd55fdc1fb03283dc27592728730584ea08f73ad Trying to pull repository registry:5000/kubevirt/fedora-cloud-registry-disk-demo ... devel: Pulling from registry:5000/kubevirt/fedora-cloud-registry-disk-demo 2115d46e7396: Already exists 82da67e25ceb: Already exists ddbdb5d1f9b9: Already exists f8db44c4ac4e: Pulling fs layer f8db44c4ac4e: Verifying Checksum f8db44c4ac4e: Download complete f8db44c4ac4e: Pull complete Digest: sha256:7b914c9dbb19232758721df3194f872b30c921665aa8720f0119dad2a8db6957 Trying to pull repository registry:5000/kubevirt/alpine-registry-disk-demo ... 
devel: Pulling from registry:5000/kubevirt/alpine-registry-disk-demo 2115d46e7396: Already exists 82da67e25ceb: Already exists ddbdb5d1f9b9: Already exists 36177419f209: Pulling fs layer 36177419f209: Download complete 36177419f209: Pull complete Digest: sha256:babd744cde2fb81ba28b184584bf77e91d76c253bfaef537239e041dcd5117dc Trying to pull repository registry:5000/kubevirt/subresource-access-test ... devel: Pulling from registry:5000/kubevirt/subresource-access-test 2176639d844b: Already exists fdc994a640c8: Pulling fs layer 2e195d49b277: Pulling fs layer fdc994a640c8: Verifying Checksum fdc994a640c8: Download complete 2e195d49b277: Verifying Checksum 2e195d49b277: Download complete fdc994a640c8: Pull complete 2e195d49b277: Pull complete Digest: sha256:c7cacafaef5a153e9de5a094cc924e44346531a027b1a5ff225b9ef9a243c931 Trying to pull repository registry:5000/kubevirt/winrmcli ... devel: Pulling from registry:5000/kubevirt/winrmcli 2176639d844b: Already exists 206ce0e712a7: Pulling fs layer b7c152dd4760: Pulling fs layer 7e51cf11cbfa: Pulling fs layer 7e51cf11cbfa: Verifying Checksum 7e51cf11cbfa: Download complete b7c152dd4760: Verifying Checksum b7c152dd4760: Download complete 206ce0e712a7: Verifying Checksum 206ce0e712a7: Download complete 206ce0e712a7: Pull complete b7c152dd4760: Pull complete 7e51cf11cbfa: Pull complete Digest: sha256:f9de3014339c8fdb1feb491cbe90fd2412ce88006798fe384352e45488117833 2018/04/10 06:58:33 Waiting for host: 192.168.66.102:22 2018/04/10 06:58:33 Connected to tcp://192.168.66.102:22 Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/client-python ++ PROVIDER=k8s-1.9.3 ++ provider_prefix=kubevirt-functional-tests-vagrant-dev0 ++ job_prefix=kubevirt-functional-tests-vagrant-dev0 + source cluster/k8s-1.9.3/provider.sh ++ set -e ++ image=k8s-1.9.3@sha256:f1506a8aebfb5b5fbf37cd8c9f060bc1f05db683fca15eb11f9fe9e9a58ec9e5 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/cli@sha256:b0023d1863338ef04fa0b8a8ee5956ae08616200d89ffd2e230668ea3deeaff4' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ PROVIDER=k8s-1.9.3 ++ source hack/config-default.sh source hack/config-k8s-1.9.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher 
cmd/virt-handler cmd/virt-api images/iscsi-demo-target-tgtd images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ kubeconfig=cluster/vagrant/.kubeconfig +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.9.3.sh ++ source hack/config-provider-k8s-1.9.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubectl +++ docker_prefix=localhost:33023/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default 
delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete pods -l kubevirt.io No resources found + _kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ cluster/k8s-1.9.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + 
KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig + cluster/k8s-1.9.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig ++ cluster/k8s-1.9.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/vendor ++ 
++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/cmd
++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/tests
++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/apidocs
++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/manifests
++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/client-python
++ PROVIDER=k8s-1.9.3
++ provider_prefix=kubevirt-functional-tests-vagrant-dev0
++ job_prefix=kubevirt-functional-tests-vagrant-dev0
+ source cluster/k8s-1.9.3/provider.sh
++ set -e
++ image=k8s-1.9.3@sha256:f1506a8aebfb5b5fbf37cd8c9f060bc1f05db683fca15eb11f9fe9e9a58ec9e5
++ source cluster/ephemeral-provider-common.sh
+++ set -e
+++ _cli='docker run --privileged --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/cli@sha256:b0023d1863338ef04fa0b8a8ee5956ae08616200d89ffd2e230668ea3deeaff4'
+ source hack/config.sh
++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace
++ PROVIDER=k8s-1.9.3
++ source hack/config-default.sh source hack/config-k8s-1.9.3.sh
+++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test'
+++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/iscsi-demo-target-tgtd images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli'
+++ docker_prefix=kubevirt
+++ docker_tag=latest
+++ master_ip=192.168.200.2
+++ network_provider=flannel
+++ kubeconfig=cluster/vagrant/.kubeconfig
+++ namespace=kube-system
++ test -f hack/config-provider-k8s-1.9.3.sh
++ source hack/config-provider-k8s-1.9.3.sh
+++ master_ip=127.0.0.1
+++ docker_tag=devel
+++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubeconfig
+++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.9.3/.kubectl
+++ docker_prefix=localhost:33023/kubevirt
+++ manifest_docker_prefix=registry:5000/kubevirt
++ test -f hack/config-local.sh
++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace
+ echo 'Deploying ...'
Deploying ...
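The cleanup pass at the top of this section and the configuration dump just above both follow simple patterns. Teardown first: everything KubeVirt deploys carries a kubevirt.io label, so the clean-up step can sweep each namespace with label selectors instead of naming individual objects, and then checks whether the OfflineVirtualMachine CRD is really gone. A minimal sketch of that loop, assuming the _kubectl wrapper, namespace list and (abridged) resource list implied by the trace rather than the actual contents of the cleanup script:

# Sketch only: shape inferred from the trace above, not copied from cluster/clean.sh.
_kubectl() {
    export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
    cluster/k8s-1.9.3/.kubectl "$@"
}

namespaces=(default kube-system)
resources=(apiservices deployment rs services secrets pv pvc ds
           customresourcedefinitions pods clusterrolebinding rolebinding
           roles clusterroles serviceaccounts)

for ns in "${namespaces[@]}"; do
    for res in "${resources[@]}"; do
        # A label selector is enough: every KubeVirt object is labelled kubevirt.io.
        _kubectl -n "$ns" delete "$res" -l kubevirt.io
    done
    # Mirror the "'[' 0 -gt 0 ']'" test in the trace: count the lines returned
    # when asking for the OfflineVirtualMachine CRD and complain if it lingers.
    crd_lines=$(_kubectl -n "$ns" get crd offlinevirtualmachines.kubevirt.io 2>/dev/null | wc -l)
    if [ "$crd_lines" -gt 0 ]; then
        echo "offlinevirtualmachines CRD still present in $ns"
    fi
done
sleep 2
echo Done

The configuration variables printed above come from layered sourcing: defaults first, then provider-specific files, then an optional local override, with later files winning. Roughly, and purely as an illustration of the ordering seen in the trace:

# Illustrative layering only; the real logic lives in hack/config.sh and hack/config-*.sh.
PROVIDER=${PROVIDER:-k8s-1.9.3}

source hack/config-default.sh                                # baseline (docker_tag=latest, ...)
source "hack/config-${PROVIDER}.sh"                          # per-Kubernetes-version settings
test -f "hack/config-provider-${PROVIDER}.sh" && \
    source "hack/config-provider-${PROVIDER}.sh"             # provider overrides (docker_tag=devel, local kubeconfig, ...)
test -f hack/config-local.sh && source hack/config-local.sh  # optional developer overrides, applied last

export binaries docker_images docker_prefix docker_tag master_ip network_provider kubeconfig namespace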
+ [[ -z vagrant-dev ]]
+ [[ vagrant-dev =~ .*-dev ]]
+ _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/manifests/dev -R
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/manifests/dev -R
customresourcedefinition "offlinevirtualmachines.kubevirt.io" created
serviceaccount "kubevirt-apiserver" created
clusterrolebinding "kubevirt-apiserver" created
clusterrolebinding "kubevirt-apiserver-auth-delegator" created
rolebinding "kubevirt-apiserver" created
role "kubevirt-apiserver" created
clusterrole "kubevirt-apiserver" created
clusterrole "kubevirt-controller" created
serviceaccount "kubevirt-controller" created
serviceaccount "kubevirt-privileged" created
clusterrolebinding "kubevirt-controller" created
clusterrolebinding "kubevirt-controller-cluster-admin" created
clusterrolebinding "kubevirt-privileged-cluster-admin" created
customresourcedefinition "virtualmachinereplicasets.kubevirt.io" created
service "virt-api" created
deployment "virt-api" created
service "virt-controller" created
deployment "virt-controller" created
daemonset "virt-handler" created
customresourcedefinition "virtualmachines.kubevirt.io" created
customresourcedefinition "virtualmachinepresets.kubevirt.io" created
+ _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R
+ export KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.9.3/.kubeconfig
+ cluster/k8s-1.9.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R
persistentvolumeclaim "disk-alpine" created
persistentvolume "iscsi-disk-alpine" created
daemonset "iscsi-demo-target-tgtd" created
serviceaccount "kubevirt-testing" created
clusterrolebinding "kubevirt-testing-cluster-admin" created
+ '[' k8s-1.9.3 = vagrant-openshift ']'
+ '[' k8s-1.9.3 = os-3.9.0-alpha.4 ']'
+ echo Done
Done
++ kubectl get pods -n kube-system --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n kube-system --no-headers
+ '[' -n 'iscsi-demo-target-tgtd-5jcbk 0/1 ContainerCreating 0 1s
iscsi-demo-target-tgtd-dtknb 0/1 ContainerCreating 0 1s
virt-api-775dfb9789-pzgfk 0/1 ContainerCreating 0 2s
virt-controller-5f7c946cc4-d4vtk 0/1 ContainerCreating 0 2s
virt-controller-5f7c946cc4-dww5c 0/1 ContainerCreating 0 2s
virt-handler-m8d8d 0/1 ContainerCreating 0 2s
virt-handler-plfhr 0/1 ContainerCreating 0 2s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
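The polling output that follows is produced by two wait loops: one that repeats until no pod in kube-system reports a phase other than Running, and one that repeats until every container except virt-controller reports ready (one virt-controller replica acts as a standby and may legitimately stay not-ready). A condensed sketch, assuming a plain while-loop structure around the same kubectl and awk invocations visible in the trace:

# Loop 1: wait for every pod in kube-system to reach the Running phase.
while [ -n "$(kubectl get pods -n kube-system --no-headers | grep -v Running)" ]; do
    echo 'Waiting for kubevirt pods to enter the Running state ...'
    kubectl get pods -n kube-system --no-headers | grep -v Running
    sleep 10
done

# Loop 2: wait for all containers to report ready=true, ignoring virt-controller
# (its standby replica is allowed to stay not-ready).
readiness() {
    kubectl get pods -n kube-system \
        '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' \
        --no-headers | awk '!/virt-controller/ && /false/'
}
while [ -n "$(readiness)" ]; do
    echo 'Waiting for KubeVirt containers to become ready ...'
    readiness
    sleep 10
done

# Final gate, as in the "'[' 1 -lt 1 ']'" check: at least one virt-controller must be ready.
ready=$(kubectl get pods -n kube-system \
    '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' \
    --no-headers | awk '/virt-controller/ && /true/' | wc -l)
if [ "$ready" -lt 1 ]; then
    echo 'No ready virt-controller found' >&2
    exit 1
fi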
+ kubectl get pods -n kube-system --no-headers
+ grep -v Running
+ cluster/kubectl.sh get pods -n kube-system --no-headers
iscsi-demo-target-tgtd-5jcbk 0/1 ContainerCreating 0 2s
iscsi-demo-target-tgtd-dtknb 0/1 ContainerCreating 0 2s
virt-api-775dfb9789-pzgfk 0/1 ContainerCreating 0 3s
virt-controller-5f7c946cc4-d4vtk 0/1 ContainerCreating 0 3s
virt-controller-5f7c946cc4-dww5c 0/1 ContainerCreating 0 3s
virt-handler-m8d8d 0/1 ContainerCreating 0 3s
virt-handler-plfhr 0/1 ContainerCreating 0 3s
+ sleep 10
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n '' ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
+ '[' -n 'false iscsi-demo-target-tgtd-5jcbk
false iscsi-demo-target-tgtd-dtknb
false virt-api-775dfb9789-pzgfk' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ awk '!/virt-controller/ && /false/'
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-5jcbk
false iscsi-demo-target-tgtd-dtknb
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-5jcbk
false iscsi-demo-target-tgtd-dtknb' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ awk '!/virt-controller/ && /false/'
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-5jcbk
false iscsi-demo-target-tgtd-dtknb
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-5jcbk
false iscsi-demo-target-tgtd-dtknb' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-5jcbk
false iscsi-demo-target-tgtd-dtknb
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-5jcbk
false iscsi-demo-target-tgtd-dtknb' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-5jcbk
false iscsi-demo-target-tgtd-dtknb
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
+ '[' -n 'false iscsi-demo-target-tgtd-5jcbk
false iscsi-demo-target-tgtd-dtknb' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-5jcbk
false iscsi-demo-target-tgtd-dtknb
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-5jcbk
false iscsi-demo-target-tgtd-dtknb' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-5jcbk
false iscsi-demo-target-tgtd-dtknb
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-5jcbk' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ awk '!/virt-controller/ && /false/'
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-5jcbk
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-5jcbk' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-5jcbk
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n 'false iscsi-demo-target-tgtd-5jcbk' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ awk '!/virt-controller/ && /false/'
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
false iscsi-demo-target-tgtd-5jcbk
+ sleep 10
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ awk '!/virt-controller/ && /false/'
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
+ '[' -n '' ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready,metadata:metadata.name' --no-headers
++ wc -l
++ awk '/virt-controller/ && /true/'
+ '[' 1 -lt 1 ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
etcd-node01                        1/1       Running   0          10m
iscsi-demo-target-tgtd-5jcbk       1/1       Running   1          2m
iscsi-demo-target-tgtd-dtknb       1/1       Running   1          2m
kube-apiserver-node01              1/1       Running   0          9m
kube-controller-manager-node01     1/1       Running   0          10m
kube-dns-6f4fd4bdf-j2ckq           3/3       Running   0          10m
kube-flannel-ds-blpdk              1/1       Running   0          10m
kube-flannel-ds-wmmz7              1/1       Running   1          10m
kube-proxy-5678h                   1/1       Running   0          10m
kube-proxy-5bgq6                   1/1       Running   0          10m
kube-scheduler-node01              1/1       Running   0          10m
virt-api-775dfb9789-pzgfk          1/1       Running   0          2m
virt-controller-5f7c946cc4-d4vtk   1/1       Running   0          2m
virt-controller-5f7c946cc4-dww5c   0/1       Running   0          2m
virt-handler-m8d8d                 1/1       Running   0          2m
virt-handler-plfhr                 1/1       Running   0          2m
+ kubectl version
+ cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/junit.xml'
+ [[ -d /home/nfs/images/windows2016 ]]
+ FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/junit.xml'
+ make functest
hack/dockerized "hack/build-func-tests.sh"
sha256:c9cd67dd05efdb07ffa24a86eeeca6419b8df7c6db7187fd750b9f83be5ac0b4
go version go1.9.2 linux/amd64
rsync: read error: Connection reset by peer (104)
rsync error: error in rsync protocol data stream (code 12) at io.c(764) [sender=3.0.9]
Waiting for rsyncd to be ready
rsync: read error: Connection reset by peer (104)
rsync error: error in rsync protocol data stream (code 12) at io.c(764) [sender=3.0.9]
Waiting for rsyncd to be ready
rsync: read error: Connection reset by peer (104)
rsync error: error in rsync protocol data stream (code 12) at io.c(764) [sender=3.0.9]
Waiting for rsyncd to be ready
rsync: read error: Connection reset by peer (104)
rsync error: error in rsync protocol data stream (code 12) at io.c(764) [sender=3.0.9]
Waiting for rsyncd to be ready
rsync: read error: Connection reset by peer (104)
rsync error: error in rsync protocol data stream (code 12) at io.c(764) [sender=3.0.9]
Waiting for rsyncd to be ready
rsync: read error: Connection reset by peer (104)
rsync error: error in rsync protocol data stream (code 12) at io.c(764) [sender=3.0.9]
Waiting for rsyncd to be ready
rsync: read error: Connection reset by peer (104)
rsync error: error in rsync protocol data stream (code 12) at io.c(764) [sender=3.0.9]
Waiting for rsyncd to be ready
skipping directory .
go version go1.9.2 linux/amd64
Compiling tests...
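The rsync errors above are expected start-up noise: hack/dockerized syncs the source tree into a build container and simply retries until the container's rsync daemon is listening, then builds the functional-test binary inside it. The knob that matters for the next step is FUNC_TEST_ARGS, which is passed through to the ginkgo-based test binary. A hypothetical focused re-run, assuming the standard --ginkgo.focus flag (only --ginkgo.noColor and --junit-output are used in this job):

# Hypothetical: re-run only the Windows specs once their PVC/NFS prerequisites exist.
FUNC_TEST_ARGS='--ginkgo.noColor --ginkgo.focus=Windows --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/junit.xml'
export FUNC_TEST_ARGS
make functest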
compiled tests.test
0890e086d8f09e54633df5d679a494146b35915694129762e8860969ab4d926b
0890e086d8f09e54633df5d679a494146b35915694129762e8860969ab4d926b
hack/functests.sh
Running Suite: Tests Suite
==========================
Random Seed: 1523343727
Will run 6 of 78 specs
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.016 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:58
  should success to start a vm [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:132

  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1054
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:58
  should success to stop a running vm [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:138

  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1054
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.006 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:58
  with winrm connection [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:149
    should have correct UUID
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:191

  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1054
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.006 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:58
  with winrm connection [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:149
    should have pod IP
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:207

  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1054
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:58
  with kubectl command [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:225
    should success to start a vm
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:241

  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1054
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Windows VM
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:58
  with kubectl command [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:225
    should success to stop a vm
    /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:249

  Skip Windows tests that requires PVC disk-windows
  /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1054
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
Waiting for namespace kubevirt-test-default to be removed, this can take a while ...
Waiting for namespace kubevirt-test-alternative to be removed, this can take a while ...

Ran 0 of 78 Specs in 9.091 seconds
SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 78 Skipped
PASS | FOCUSED
make: *** [functest] Error 197
+ make cluster-down
./cluster/down.sh
307ac91103f1
a0fb387f86c3
f9fad5935d5c
90f7a53d1ddb
307ac91103f1
a0fb387f86c3
f9fad5935d5c
90f7a53d1ddb
kubevirt-functional-tests-vagrant-dev0-node01
kubevirt-functional-tests-vagrant-dev0-node02
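A final note on the exit status: ginkgo reports SUCCESS with zero failures, but the "PASS | FOCUSED" line means the suite contained programmatic focus, for which ginkgo conventionally exits with code 197; make surfaces that as "[functest] Error 197" even though nothing failed, and cluster teardown still runs afterwards. A quick, assumption-laden way to sanity-check the generated report (element names assume the usual JUnit XML schema; the path is the one given to --junit-output above):

# Count recorded, failed and skipped specs in the junit report.
junit=/var/lib/swarm/workspace/kubevirt-functional-tests-vagrant-dev/junit.xml
printf 'specs:   %s\n' "$(grep -o '<testcase' "$junit" | wc -l)"
printf 'failed:  %s\n' "$(grep -o '<failure'  "$junit" | wc -l)"
printf 'skipped: %s\n' "$(grep -o '<skipped'  "$junit" | wc -l)"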