+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ [[ k8s-1.10.3-release =~ openshift-.* ]]
+ [[ k8s-1.10.3-release =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/07/20 08:46:24 Waiting for host: 192.168.66.101:22
2018/07/20 08:46:27 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/20 08:46:39 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
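Note on the trap above: SIGSTOP (like SIGKILL) can never be caught, so bash rejects that entry while still installing the EXIT, SIGINT and SIGTERM handlers; those handlers are what guarantee the `make cluster-down` at the end of this log. A minimal sketch of the intended cleanup hook, assuming the same make target:

    # Tear the ephemeral cluster down on normal exit or interruption.
    # SIGSTOP/SIGKILL are omitted because they cannot be trapped.
    trap 'make cluster-down' EXIT SIGINT SIGTERM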
[apiclient] All control plane components are healthy after 27.505798 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:cd7d33b43242904e759cbde0e7177c054132ddc980148a2b0caeed344a2e98ab

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/20 08:47:21 Waiting for host: 192.168.66.102:22
2018/07/20 08:47:24 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/20 08:47:36 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
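The "Waiting for host" lines above are the provider's SSH readiness probe: it retries the node's port 22 until the VM answers, sleeping 5 seconds between attempts. The actual probe lives in the Go-based gocli, but it reduces to a loop like this (a sketch using bash's /dev/tcp; host and port are this run's values):

    host=192.168.66.102 port=22
    # Retry until the TCP connect succeeds, mirroring the log's 5s backoff.
    until timeout 1 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; do
        echo "Waiting for host: $host:$port. Sleeping 5s"
        sleep 5
    done
    echo "Connected to tcp://$host:$port"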
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 32s v1.10.3
node02 NotReady <none> 9s v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
++ grep NotReady
+ '[' -n 'node02 NotReady <none> 9s v1.10.3' ']'
+ echo 'Waiting for all nodes to become ready ...'
Waiting for all nodes to become ready ...
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01 Ready master 33s v1.10.3
node02 NotReady <none> 10s v1.10.3
+ kubectl_rc=0
+ sleep 10
+ '[' 0 -ne 0 ']'
++ kubectl get nodes --no-headers
++ grep NotReady
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    43s       v1.10.3
node02    Ready     <none>    20s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
Untagged: localhost:33233/kubevirt/virt-controller:devel
Untagged: localhost:33233/kubevirt/virt-controller@sha256:c7ce3076980e5a8fa616e5bdb2ca80d7bd40341818cbcf893ddbac9465c8d5ee
Deleted: sha256:cbcf256e0595d5efea893341e2e349f7533da664678962f8d94e1285644e84fc
Deleted: sha256:68c7af21d470081538a1c191ef54f5feaf08e8acaa6b999a99fe0fb3a6ca8416
Deleted: sha256:fc4c52d0984622ad3ae80da28f76b844038ba5fa1c89f42d6e02dd0f085cd9ce
Deleted: sha256:9b01c8ebe53193e3fd7e42d1f00ae77acfb56dc63fb817ff966a50974a5bae37
Untagged: localhost:33233/kubevirt/virt-launcher:devel
Untagged: localhost:33233/kubevirt/virt-launcher@sha256:78b7c2b7b7f2701883b7d0009c0ac186f4363cb7aaad9ccec87f2ae335951f8e
Deleted: sha256:06a8c943327726d92a1901cf1e155823fbbdec9d9aeb63d04e59265478e32e47
Deleted: sha256:0e2bccb42ff16ddb9013750b24b5e89b18f15bad3b0dea3b1ee4bd22b3996da5
Deleted: sha256:6438f2286c5ca4361d389ccdaf0ebe9cb56096caa41091e7e810cfcb1650bd19
Deleted: sha256:a6e9cc319472e4c453c677bc892b710845b1b54dbecbe61a99f1d4ff5218294f
Deleted: sha256:7f76fa67f8acbae3969201823c741c9fa478af03b0e2598c038dd0bd7fd8d58f
Deleted: sha256:df6f1f0db2844edf805a417247e10c8b9dd2277341d0e4b9bd7ec38a17b861f5
Deleted: sha256:0a37e03de2b2e9ec23da18c67dc48a2e023a5f4dba288f9a35603c742074c48d
Deleted: sha256:3a7dfb13204d39dae14ddf3a8e36080f468336065471d5aa1de09bf5715d7538
Deleted: sha256:acee5a63adea6627b6ad33f68bcdd7cd99d3de4e6fff17631a2afd877e34ee2b
Deleted: sha256:23be90eaecf1bf46562ae07e17ba12b93d6e546a80770358db31a7344b3b68ba
Deleted: sha256:c24c5eb71f34cfe890e26211b87562727cf43409b4f5046a37911cf292f51d3f
Deleted: sha256:c4b1c47dcb0d59f39d75d9208716a24504158bf2bd9818762e940b49367a8881
Untagged: localhost:33233/kubevirt/virt-handler:devel
Untagged: localhost:33233/kubevirt/virt-handler@sha256:ad6625a1f762471e4c68fdea6f79e84619126e0dec49f5a1f14275849e0dd8c0
Deleted: sha256:6c25a7429355fa0a349babdbdf42cb4a5f3ebc30dc1da7ebb798e56b2d646279
Deleted: sha256:a0532bc8a5799776fe0e958ee6d8ab8e94d137edb1248ec0429126cf3934e9d8
Deleted: sha256:41d654c3ac172dc7ed030d4ac71c66b9b37069339d52b245bcb5c7ca47e84496
Deleted: sha256:87cb6a30d27ea32d88c9d060fed45aef666f98e1bd026246601a23cdbf144e2b
Untagged: localhost:33233/kubevirt/virt-api:devel
Untagged: localhost:33233/kubevirt/virt-api@sha256:cd6f93103ed7dbfcf92c6a60c04736e8b807c3d41f51ba04f5737d03695d6a6b
Deleted: sha256:d3b21e8c32aa6f83baf53e3e365709c707bea6b4f2925c0574ed5e1cb4ebf724
Deleted: sha256:2acbd55ca5331753dbe056d1bf289e5913056eee420cb46c05cf34c902701651
Deleted: sha256:dcfc0acca59bf0c073fdab69a83875161fd0ae0da1b6c849b36d399c8f227b23
Deleted: sha256:b97f5b7b216c35215472137f54f7065e3d10572eef50c18207bc4b01e1150f7a
Untagged: localhost:33233/kubevirt/subresource-access-test:devel
Untagged: localhost:33233/kubevirt/subresource-access-test@sha256:40784aba948f3c944b8bdccfd6c048f8c66e670fa708d37437c5822776f5d176
Deleted: sha256:1f92cd4c3c752d5f5e7c17027668c489535daf9f2f29a63a3bb0254a6f146eee
Deleted: sha256:bf826e3553af747333431708136ba8b89aed5be717b3c6b2e88b60b4d127385c
Deleted: sha256:389dde52f062463db823166ed35db5c9f410d7a405cb44a8fdb4f14502f195c2
Deleted: sha256:df2ff950742f00e90d41a43177b29096817662bba08ea093e91bcba00d2b183e
sha256:f9c79b1576e92cbd1766105e07c6f8f86d5dc58b8221d91f0c6f34fa7ab6e384
go version go1.10 linux/amd64
Waiting for rsyncd to be ready..
go version go1.10 linux/amd64
make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt'
hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh
sha256:f9c79b1576e92cbd1766105e07c6f8f86d5dc58b8221d91f0c6f34fa7ab6e384
go version go1.10 linux/amd64
go version go1.10 linux/amd64
find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory
Compiling tests...
compiled tests.test
hack/build-docker.sh build
Sending build context to Docker daemon 38.81 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 370e26f0b41b
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller
 ---> Using cache
 ---> 7479bf812c6d
Step 4/8 : WORKDIR /home/virt-controller
 ---> Using cache
 ---> 9c0164f25023
Step 5/8 : USER 1001
 ---> Using cache
 ---> 9819598bc369
Step 6/8 : COPY virt-controller /usr/bin/virt-controller
 ---> 00b9cb7795c9
Removing intermediate container c5bc58b4a1e4
Step 7/8 : ENTRYPOINT /usr/bin/virt-controller
 ---> Running in 976370b9f3d6
 ---> 9f6896088c9a
Removing intermediate container 976370b9f3d6
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-controller" ''
 ---> Running in bba3b0c02ca7
 ---> d7af25d74dec
Removing intermediate container bba3b0c02ca7
Successfully built d7af25d74dec
Sending build context to Docker daemon 41.02 MB
Step 1/10 : FROM kubevirt/libvirt:4.2.0
 ---> 5f0bfe81a3e0
Step 2/10 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> ca87e40ca247
Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107
 ---> Using cache
 ---> 51e40e038f39
Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher
 ---> 6a56a43b7ca9
Removing intermediate container 2774b7736dd0
Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt
 ---> f83fa6cc0e27
Removing intermediate container 2f79d5808153
Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64
 ---> Running in 8ddec18001af
 ---> c06bd445dd12
Removing intermediate container 8ddec18001af
Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher
 ---> Running in 91547da8f229
 ---> 01509320676a
Removing intermediate container 91547da8f229
Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector /usr/share/kubevirt/virt-launcher/
 ---> 8168876ababd
Removing intermediate container 73c13f104e98
Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh
 ---> Running in fdf7e5d8cdb1
 ---> b39db977eac0
Removing intermediate container fdf7e5d8cdb1
Step 10/10 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-launcher" ''
 ---> Running in b6504289e2e6
 ---> ffeb629d1adc
Removing intermediate container b6504289e2e6
Successfully built ffeb629d1adc
Sending build context to Docker daemon 40.1 MB
Step 1/5 : FROM fedora:28
 ---> cc510acfcd70
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 370e26f0b41b
Step 3/5 : COPY virt-handler /usr/bin/virt-handler
 ---> fcce2c15c337
Removing intermediate container 396b9076b0a9
Step 4/5 : ENTRYPOINT /usr/bin/virt-handler
 ---> Running in 5a3b67cfb04c
 ---> 72a8821991ae
Removing intermediate container 5a3b67cfb04c
Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-handler" ''
 ---> Running in f8d1219c6a54
 ---> 01cae4ee9600
Removing intermediate container f8d1219c6a54
Successfully built 01cae4ee9600
Sending build context to Docker daemon 37.02 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 370e26f0b41b
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api
 ---> Using cache
 ---> c8571b4c6d66
Step 4/8 : WORKDIR /home/virt-api
 ---> Using cache
 ---> 263d9976440b
Step 5/8 : USER 1001
 ---> Using cache
 ---> 1c190bbe81e5
Step 6/8 : COPY virt-api /usr/bin/virt-api
 ---> 9fe02b4a7c88
Removing intermediate container 07f7242d4f16
Step 7/8 : ENTRYPOINT /usr/bin/virt-api
 ---> Running in d14f58a2197b
 ---> 3fcb3fe43673
Removing intermediate container d14f58a2197b
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-api" ''
 ---> Running in fd373d92f856
 ---> 47d0f8502554
Removing intermediate container fd373d92f856
Successfully built 47d0f8502554
Sending build context to Docker daemon 4.096 kB
Step 1/7 : FROM fedora:28
 ---> cc510acfcd70
Step 2/7 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 370e26f0b41b
Step 3/7 : ENV container docker
 ---> Using cache
 ---> 128a79497287
Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img
 ---> Using cache
 ---> 684fb14c1c02
Step 5/7 : ADD entrypoint.sh /
 ---> Using cache
 ---> 4a6e6f8e4872
Step 6/7 : CMD /entrypoint.sh
 ---> Using cache
 ---> d83635922d32
Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.10.3-release1" ''
 ---> Running in 7fa6bb2d8989
 ---> 9cc90474030a
Removing intermediate container 7fa6bb2d8989
Successfully built 9cc90474030a
Sending build context to Docker daemon 2.56 kB
Step 1/5 : FROM fedora:28
 ---> cc510acfcd70
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 370e26f0b41b
Step 3/5 : ENV container docker
 ---> Using cache
 ---> 128a79497287
Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all
 ---> Using cache
 ---> 2ab67c3b9ef6
Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "vm-killer" ''
 ---> Using cache
 ---> cdbae8f12703
Successfully built cdbae8f12703
Sending build context to Docker daemon 5.12 kB
Step 1/7 : FROM debian:sid
 ---> 4817bb6590f8
Step 2/7 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> b8b166db2544
Step 3/7 : ENV container docker
 ---> Using cache
 ---> 8b120f56086f
Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> 61851ac93c11
Step 5/7 : ADD entry-point.sh /
 ---> Using cache
 ---> ada85930060d
Step 6/7 : CMD /entry-point.sh
 ---> Using cache
 ---> 6f2ffb0e7aed
Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "registry-disk-v1alpha" ''
 ---> Using cache
 ---> efa644296ee3
Successfully built efa644296ee3
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33810/kubevirt/registry-disk-v1alpha:devel
 ---> efa644296ee3
Step 2/4 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> 5e1a1abaa85f
Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img
 ---> Using cache
 ---> bbcefbaba7fd
Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release1" ''
 ---> Using cache
 ---> 3ea40bdda93d
Successfully built 3ea40bdda93d
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33810/kubevirt/registry-disk-v1alpha:devel
 ---> efa644296ee3
Step 2/4 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 330dfb504750
Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2
 ---> Using cache
 ---> b812afee4476
Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release1" ''
 ---> Using cache
 ---> d551f51eae30
Successfully built d551f51eae30
Sending build context to Docker daemon 2.56 kB
Step 1/4 : FROM localhost:33810/kubevirt/registry-disk-v1alpha:devel
 ---> efa644296ee3
Step 2/4 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 330dfb504750
Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso
 ---> Using cache
 ---> 34778c9e51ef
Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release1" ''
 ---> Using cache
 ---> 21d43b8e87f3
Successfully built 21d43b8e87f3
Sending build context to Docker daemon 34.04 MB
Step 1/8 : FROM fedora:28
 ---> cc510acfcd70
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 370e26f0b41b
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl
 ---> Using cache
 ---> 9cf5a7be481a
Step 4/8 : WORKDIR /home/virtctl
 ---> Using cache
 ---> be8cbc200c86
Step 5/8 : USER 1001
 ---> Using cache
 ---> 14caf1cecdf2
Step 6/8 : COPY subresource-access-test /subresource-access-test
 ---> 5c2eb0405e2b
Removing intermediate container b7c5ed473604
Step 7/8 : ENTRYPOINT /subresource-access-test
 ---> Running in b0d48f646c52
 ---> d9feb6d4b8e4
Removing intermediate container b0d48f646c52
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "subresource-access-test" ''
 ---> Running in 068c8e84553b
 ---> 02f40ddfd47d
Removing intermediate container 068c8e84553b
Successfully built 02f40ddfd47d
Sending build context to Docker daemon 3.072 kB
Step 1/9 : FROM fedora:28
 ---> cc510acfcd70
Step 2/9 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 370e26f0b41b
Step 3/9 : ENV container docker
 ---> Using cache
 ---> 128a79497287
Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all
 ---> Using cache
 ---> e7945100b121
Step 5/9 : ENV GIMME_GO_VERSION 1.9.2
 ---> Using cache
 ---> beac5a421d9f
Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh
 ---> Using cache
 ---> f15eb6c97dea
Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin"
 ---> Using cache
 ---> c9c13dbe3b72
Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli
 ---> Using cache
 ---> a603cdcbe0ed
Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "winrmcli" ''
 ---> Using cache
 ---> 6c489fc279ca
Successfully built 6c489fc279ca
Sending build context to Docker daemon 35.17 MB
Step 1/5 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> 6af39ea33818
Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar
 ---> 809488d6ec81
Removing intermediate container 6da2a6fb613e
Step 4/5 : ENTRYPOINT /example-hook-sidecar
 ---> Running in c0775cd64cf1
 ---> 7f62f367be08
Removing intermediate container c0775cd64cf1
Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-k8s-1.10.3-release1" ''
 ---> Running in cdf2098bb7e9
 ---> 8b12c19d546d
Removing intermediate container cdf2098bb7e9
Successfully built 8b12c19d546d
hack/build-docker.sh push
The push refers to a repository [localhost:33810/kubevirt/virt-controller]
9d620dcb5e5f: Preparing
c4271fcd57fb: Preparing
891e1e4ef82a: Preparing
c4271fcd57fb: Pushed
9d620dcb5e5f: Pushed
891e1e4ef82a: Pushed
devel: digest: sha256:f521559ca29a0d8e42298d4d08891132724a2a3dcfa73bd40d3b57b3361f6fe5 size: 949
The push refers to a repository [localhost:33810/kubevirt/virt-launcher]
f5b36d523ddb: Preparing
f093808b9e17: Preparing
bc825a4b1861: Preparing
9f56bb02dfad: Preparing
1a3ef439a6ba: Preparing
faacd7b80896: Preparing
f5b36d523ddb: Waiting
f093808b9e17: Waiting
bc825a4b1861: Waiting
9f56bb02dfad: Waiting
1a3ef439a6ba: Waiting
da38cf808aa5: Preparing
b83399358a92: Preparing
186d8b3e4fd8: Preparing
fa6154170bf5: Preparing
faacd7b80896: Waiting
da38cf808aa5: Waiting
b83399358a92: Waiting
186d8b3e4fd8: Waiting
5eefb9960a36: Preparing
891e1e4ef82a: Preparing
fa6154170bf5: Waiting
5eefb9960a36: Waiting
891e1e4ef82a: Waiting
f5b36d523ddb: Pushed
f093808b9e17: Pushed
9f56bb02dfad: Pushed
bc825a4b1861: Pushed
da38cf808aa5: Pushed
b83399358a92: Pushed
186d8b3e4fd8: Pushed
fa6154170bf5: Pushed
1a3ef439a6ba: Pushed
faacd7b80896: Pushed
891e1e4ef82a: Pushed
5eefb9960a36: Pushed
devel: digest: sha256:b1fdff919d88f459568eb16f49057bf2281dbe9574baaa952ccb446244ed48b0 size: 2828
The push refers to a repository [localhost:33810/kubevirt/virt-handler]
98fc14c41ffc: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-launcher
98fc14c41ffc: Pushed
devel: digest: sha256:e047f64f3adab30207addad9714c765426906328e3c4b4adced983eed33c0222 size: 741
The push refers to a repository [localhost:33810/kubevirt/virt-api]
c2d30b3d1f67: Preparing
497591814b24: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-handler
497591814b24: Pushed
c2d30b3d1f67: Pushed
devel: digest: sha256:430af04ccc7b0a860549e4314a4e8c14651355fb7f6aa59347e48d4d11b954c1 size: 948
The push refers to a repository [localhost:33810/kubevirt/disks-images-provider]
9ec886f5e1a8: Preparing
0f50e0cc50a7: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/virt-api
9ec886f5e1a8: Retrying in 5 seconds
9ec886f5e1a8: Retrying in 4 seconds
9ec886f5e1a8: Retrying in 3 seconds
9ec886f5e1a8: Retrying in 2 seconds
9ec886f5e1a8: Retrying in 1 second
0f50e0cc50a7: Pushed
9ec886f5e1a8: Pushed
devel: digest: sha256:797e5a6f24fe38a22666b772ed881597fb99d31007b3dc911dfb4b04392e36c9 size: 948
The push refers to a repository [localhost:33810/kubevirt/vm-killer]
cd6cc14a931e: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/disks-images-provider
cd6cc14a931e: Pushed
devel: digest: sha256:77879dd120842043cb44a199d87b0f5f5e4b04b8a676d33d035af35bd9b46e57 size: 740
The push refers to a repository [localhost:33810/kubevirt/registry-disk-v1alpha]
376d512574a4: Preparing
7971c2f81ae9: Preparing
e7752b410e4c: Preparing
376d512574a4: Pushed
7971c2f81ae9: Pushed
e7752b410e4c: Pushed
devel: digest: sha256:ef40e1d7c64d8fbf1d851c949124decd700b718b6dfcd8f8a84abbd0e9b4a619 size: 948
The push refers to a repository [localhost:33810/kubevirt/cirros-registry-disk-demo]
b32252f12c76: Preparing
376d512574a4: Preparing
7971c2f81ae9: Preparing
e7752b410e4c: Preparing
e7752b410e4c: Mounted from kubevirt/registry-disk-v1alpha
7971c2f81ae9: Mounted from kubevirt/registry-disk-v1alpha
376d512574a4: Mounted from kubevirt/registry-disk-v1alpha
b32252f12c76: Pushed
devel: digest: sha256:cc97dea83d7287cfcc53d9430c359e4ffdb7b30c3dfb01d3486a2739ece5462a size: 1160
The push refers to a repository [localhost:33810/kubevirt/fedora-cloud-registry-disk-demo]
071623453e45: Preparing
376d512574a4: Preparing
7971c2f81ae9: Preparing
e7752b410e4c: Preparing
7971c2f81ae9: Mounted from kubevirt/cirros-registry-disk-demo
376d512574a4: Mounted from kubevirt/cirros-registry-disk-demo
e7752b410e4c: Mounted from kubevirt/cirros-registry-disk-demo
071623453e45: Pushed
devel: digest: sha256:54ab8edd6fba1c0e7fe04cca651614af5440e9fc3afc65aceca8b603897e8a1c size: 1161
The push refers to a repository [localhost:33810/kubevirt/alpine-registry-disk-demo]
574ec1e21826: Preparing
376d512574a4: Preparing
7971c2f81ae9: Preparing
e7752b410e4c: Preparing
7971c2f81ae9: Waiting
e7752b410e4c: Waiting
376d512574a4: Waiting
376d512574a4: Mounted from kubevirt/fedora-cloud-registry-disk-demo
e7752b410e4c: Mounted from kubevirt/fedora-cloud-registry-disk-demo
7971c2f81ae9: Mounted from kubevirt/fedora-cloud-registry-disk-demo
574ec1e21826: Pushed
devel: digest: sha256:83a7c71b87be6f066bf59b606f40f1f2c707845e707499ab6f0111cd4b3a3ab3 size: 1160
The push refers to a repository [localhost:33810/kubevirt/subresource-access-test]
03131bd6b472: Preparing
18baac342a78: Preparing
891e1e4ef82a: Preparing
891e1e4ef82a: Mounted from kubevirt/vm-killer
18baac342a78: Pushed
03131bd6b472: Pushed
devel: digest: sha256:8f7d9d2847d525b07a92765498698eeb1a34cd1e46f9b1157f7cd2093a2ae72e size: 948
The push refers to a repository [localhost:33810/kubevirt/winrmcli]
8de2e8bac8c0: Preparing
27a9c4f4a72d: Preparing
6f4d8b7f38e8: Preparing
891e1e4ef82a: Preparing
27a9c4f4a72d: Waiting
6f4d8b7f38e8: Waiting
891e1e4ef82a: Waiting
8de2e8bac8c0: Pushed
891e1e4ef82a: Mounted from kubevirt/subresource-access-test
6f4d8b7f38e8: Pushed
27a9c4f4a72d: Pushed
devel: digest: sha256:eafa3054c256623860ccd435f7b8911f350d39bbeb05e00a32dc42e017ab6d31 size: 1165
The push refers to a repository [localhost:33810/kubevirt/example-hook-sidecar]
7c84e0a6d1f3: Preparing
39bae602f753: Preparing
7c84e0a6d1f3: Pushed
39bae602f753: Pushed
devel: digest: sha256:a2063850dc39ec50a4f7fc37852d4a54fa22a542e131546364e993f171ccc376 size: 740
make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt'
Done
./cluster/clean.sh
+ source hack/common.sh
++++ dirname 'hack/common.sh[0]'
+++ cd hack/../
+++ pwd
++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt
++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out
++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor
++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd
++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests
++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs
++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests
++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests
++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python
++ KUBEVIRT_PROVIDER=k8s-1.10.3
++ KUBEVIRT_NUM_NODES=2
++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']'
++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release1
++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release1
+++ kubevirt_version
+++ '[' -n '' ']'
+++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']'
++++ git describe --always --tags
+++ echo v0.7.0-87-g5309756
++ KUBEVIRT_VERSION=v0.7.0-87-g5309756
+ source cluster/k8s-1.10.3/provider.sh
++ set -e
++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357
++ source cluster/ephemeral-provider-common.sh
+++ set -e
+++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a'
+ source hack/config.sh
++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace
++ KUBEVIRT_PROVIDER=k8s-1.10.3
++ KUBEVIRT_PROVIDER=k8s-1.10.3
++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh
+++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar'
+++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar'
+++ docker_prefix=kubevirt
+++ docker_tag=latest
+++ master_ip=192.168.200.2
+++ network_provider=flannel
+++ namespace=kube-system
++ test -f hack/config-provider-k8s-1.10.3.sh
++ source hack/config-provider-k8s-1.10.3.sh
+++ master_ip=127.0.0.1
+++ docker_tag=devel
+++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig
+++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl
+++ docker_prefix=localhost:33810/kubevirt
+++ manifest_docker_prefix=registry:5000/kubevirt
++ test -f hack/config-local.sh
++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace
+ echo 'Cleaning up ...'
Cleaning up ...
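The traces that follow are cluster/clean.sh sweeping every KubeVirt-labelled resource out of the default and kube-system namespaces, one resource type at a time. The core of it condenses to a nested loop like this (a sketch assuming the _kubectl wrapper visible in the trace; the trace below additionally hits apiservices twice per namespace):

    # Delete all KubeVirt-managed resources, per namespace and per type.
    namespaces=(default kube-system)
    types='apiservices deployment rs services validatingwebhookconfiguration
           secrets pv pvc ds customresourcedefinitions pods
           clusterrolebinding rolebinding roles clusterroles serviceaccounts'
    for ns in "${namespaces[@]}"; do
        for t in $types; do
            _kubectl -n "$ns" delete "$t" -l kubevirt.io
        done
    done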
+ read p
+ cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers
+ grep foregroundDeleteVirtualMachine
error: the server doesn't have a resource type "vmis"
+ _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0
No resources found
+ _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0
No resources found
+ _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0
No resources found
+ namespaces=(default ${namespace})
+ for i in '${namespaces[@]}'
+ _kubectl -n default delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n default delete deployment -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete deployment -l kubevirt.io
No resources found
+ _kubectl -n default delete rs -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete rs -l kubevirt.io
No resources found
+ _kubectl -n default delete services -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete services -l kubevirt.io
No resources found
+ _kubectl -n default delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io
No resources found
+ _kubectl -n default delete secrets -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete secrets -l kubevirt.io
No resources found
+ _kubectl -n default delete pv -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete pv -l kubevirt.io
No resources found
+ _kubectl -n default delete pvc -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete pvc -l kubevirt.io
No resources found
+ _kubectl -n default delete ds -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete ds -l kubevirt.io
No resources found
+ _kubectl -n default delete customresourcedefinitions -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io
No resources found
+ _kubectl -n default delete pods -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete pods -l kubevirt.io
No resources found
+ _kubectl -n default delete clusterrolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io
No resources found
+ _kubectl -n default delete rolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete rolebinding -l kubevirt.io
No resources found
+ _kubectl -n default delete roles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete roles -l kubevirt.io
No resources found
+ _kubectl -n default delete clusterroles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete clusterroles -l kubevirt.io
No resources found
+ _kubectl -n default delete serviceaccounts -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n default delete serviceaccounts -l kubevirt.io
No resources found
++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io
++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
++ cluster/k8s-1.10.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io
++ wc -l
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found
+ '[' 0 -gt 0 ']'
+ for i in '${namespaces[@]}'
+ _kubectl -n kube-system delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete deployment -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete deployment -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete rs -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete rs -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete services -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete services -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete apiservices -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete secrets -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete secrets -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pv -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete pv -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pvc -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete pvc -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete ds -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete ds -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete pods -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete pods -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete rolebinding -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete roles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete roles -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete clusterroles -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io
No resources found
+ _kubectl -n kube-system delete serviceaccounts -l kubevirt.io
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io
No resources found
++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io
++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
++ cluster/k8s-1.10.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io
++ wc -l
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found
+ '[' 0 -gt 0 ']'
+ sleep 2
+ echo Done
Done
./cluster/deploy.sh
+ source hack/common.sh
++++ dirname 'hack/common.sh[0]'
+++ cd hack/../
+++ pwd
++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt
++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out
++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor
++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd
++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests
++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs
++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests
++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests
++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python
++ KUBEVIRT_PROVIDER=k8s-1.10.3
++ KUBEVIRT_NUM_NODES=2
++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']'
++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release1
++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release1
+++ kubevirt_version
+++ '[' -n '' ']'
+++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']'
++++ git describe --always --tags
+++ echo v0.7.0-87-g5309756
++ KUBEVIRT_VERSION=v0.7.0-87-g5309756
+ source cluster/k8s-1.10.3/provider.sh
++ set -e
++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357
++ source cluster/ephemeral-provider-common.sh
+++ set -e
+++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a'
+ source hack/config.sh
++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace
++ KUBEVIRT_PROVIDER=k8s-1.10.3
++ KUBEVIRT_PROVIDER=k8s-1.10.3
++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh
+++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar'
+++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar'
+++ docker_prefix=kubevirt
+++ docker_tag=latest
+++ master_ip=192.168.200.2
+++ network_provider=flannel
+++ namespace=kube-system
++ test -f hack/config-provider-k8s-1.10.3.sh
++ source hack/config-provider-k8s-1.10.3.sh
+++ master_ip=127.0.0.1
+++ docker_tag=devel
+++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig
+++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl
+++ docker_prefix=localhost:33810/kubevirt
+++ manifest_docker_prefix=registry:5000/kubevirt
++ test -f hack/config-local.sh
++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace
+ echo 'Deploying ...'
Deploying ...
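The deploy step traced next iterates over the generated release manifests, skips anything demo-related, creates the rest, and then applies the testing fixtures recursively. Roughly (a sketch of the logic visible in the trace below, using the same _kubectl wrapper and MANIFESTS_OUT_DIR variable):

    # Apply every release manifest except the demo content, then the testing fixtures.
    for manifest in "${MANIFESTS_OUT_DIR}"/release/*; do
        [[ $manifest =~ .*demo.* ]] && continue
        _kubectl create -f "$manifest"
    done
    _kubectl create -f "${MANIFESTS_OUT_DIR}/testing" -R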
+ [[ -z k8s-1.10.3-release ]]
+ [[ k8s-1.10.3-release =~ .*-dev ]]
+ [[ k8s-1.10.3-release =~ .*-release ]]
+ for manifest in '${MANIFESTS_OUT_DIR}/release/*'
+ [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]]
+ continue
+ for manifest in '${MANIFESTS_OUT_DIR}/release/*'
+ [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]]
+ _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml
clusterrole.rbac.authorization.k8s.io "kubevirt.io:admin" created
clusterrole.rbac.authorization.k8s.io "kubevirt.io:edit" created
clusterrole.rbac.authorization.k8s.io "kubevirt.io:view" created
serviceaccount "kubevirt-apiserver" created
clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created
clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver-auth-delegator" created
rolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created
role.rbac.authorization.k8s.io "kubevirt-apiserver" created
clusterrole.rbac.authorization.k8s.io "kubevirt-apiserver" created
clusterrole.rbac.authorization.k8s.io "kubevirt-controller" created
serviceaccount "kubevirt-controller" created
serviceaccount "kubevirt-privileged" created
clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller" created
clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller-cluster-admin" created
clusterrolebinding.rbac.authorization.k8s.io "kubevirt-privileged-cluster-admin" created
clusterrole.rbac.authorization.k8s.io "kubevirt.io:default" created
clusterrolebinding.rbac.authorization.k8s.io "kubevirt.io:default" created
service "virt-api" created
deployment.extensions "virt-api" created
deployment.extensions "virt-controller" created
daemonset.extensions "virt-handler" created
customresourcedefinition.apiextensions.k8s.io "virtualmachineinstances.kubevirt.io" created
customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancereplicasets.kubevirt.io" created
customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancepresets.kubevirt.io" created
customresourcedefinition.apiextensions.k8s.io "virtualmachines.kubevirt.io" created
+ _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R
+ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig
+ cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R
persistentvolumeclaim "disk-alpine" created
persistentvolume "host-path-disk-alpine" created
persistentvolumeclaim "disk-custom" created
persistentvolume "host-path-disk-custom" created
daemonset.extensions "disks-images-provider" created
serviceaccount "kubevirt-testing" created
clusterrolebinding.rbac.authorization.k8s.io "kubevirt-testing-cluster-admin" created
+ [[ k8s-1.10.3 =~ os-* ]]
+ echo Done
Done
+ namespaces=(kube-system default)
+ [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]]
+ timeout=300
+ sample=30
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n 'virt-api-7d79764579-9scnj 0/1 ContainerCreating 0 3s
virt-api-7d79764579-l9xdz 0/1 ContainerCreating 0 3s
virt-controller-7d57d96b65-2mbp2 0/1 ContainerCreating 0 3s
virt-controller-7d57d96b65-444ln 0/1 ContainerCreating 0 3s
virt-handler-95gpm 0/1 ContainerCreating 0 3s
virt-handler-xz27w 0/1 ContainerCreating 0 3s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ grep -v Running
+ kubectl get pods -n kube-system --no-headers
+ cluster/kubectl.sh get pods -n kube-system --no-headers
disks-images-provider-hj5rp 0/1 ContainerCreating 0 2s
disks-images-provider-nlzqm 0/1 ContainerCreating 0 2s
virt-api-7d79764579-9scnj 0/1 ContainerCreating 0 4s
virt-api-7d79764579-l9xdz 0/1 ContainerCreating 0 4s
virt-controller-7d57d96b65-2mbp2 0/1 ContainerCreating 0 4s
virt-controller-7d57d96b65-444ln 0/1 ContainerCreating 0 4s
virt-handler-95gpm 0/1 ContainerCreating 0 4s
virt-handler-xz27w 0/1 ContainerCreating 0 4s
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n kube-system --no-headers
+ '[' -n 'disks-images-provider-nlzqm 0/1 ContainerCreating 0 39s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ cluster/kubectl.sh get pods -n kube-system --no-headers
+ grep -v Running
+ true
+ sleep 30
+ current_time=60
+ '[' 60 -gt 300 ']'
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n '' ']'
+ current_time=0
++ grep false
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                               READY     STATUS    RESTARTS   AGE
disks-images-provider-hj5rp        1/1       Running   0          1m
disks-images-provider-nlzqm        1/1       Running   0          1m
etcd-node01                        1/1       Running   0          11m
kube-apiserver-node01              1/1       Running   0          11m
kube-controller-manager-node01     1/1       Running   0          11m
kube-dns-86f4d74b45-kqdxd          3/3       Running   0          12m
kube-flannel-ds-c2d8m              1/1       Running   0          12m
kube-flannel-ds-n48vd              1/1       Running   0          12m
kube-proxy-d88s6                   1/1       Running   0          12m
kube-proxy-ntl8t                   1/1       Running   0          12m
kube-scheduler-node01              1/1       Running   0          11m
virt-api-7d79764579-9scnj          1/1       Running   0          1m
virt-api-7d79764579-l9xdz          1/1       Running   1          1m
virt-controller-7d57d96b65-2mbp2   1/1       Running   0          1m
virt-controller-7d57d96b65-444ln   1/1       Running   0          1m
virt-handler-95gpm                 1/1       Running   0          1m
virt-handler-xz27w                 1/1       Running   0          1m
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n default --no-headers
++ grep -v Running
++ cluster/kubectl.sh get pods -n default --no-headers
No resources found.
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n default
+ cluster/kubectl.sh get pods -n default
No resources found.
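The readiness gate traced above polls each namespace twice: first until no pod reports a non-Running status, then until no container reports ready=false, giving up if either takes longer than the timeout. A sketch of the first loop (variable names taken from the trace; the real script lives in the CI harness):

    # Wait up to $timeout seconds, sampling every $sample seconds,
    # until no pod in the namespace is in a non-Running state.
    timeout=300 sample=30 current_time=0
    while [ -n "$(kubectl get pods -n kube-system --no-headers | grep -v Running)" ]; do
        echo 'Waiting for kubevirt pods to enter the Running state ...'
        kubectl get pods -n kube-system --no-headers | grep -v Running
        sleep "$sample"
        current_time=$((current_time + sample))
        [ "$current_time" -gt "$timeout" ] && exit 1
    done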
+ kubectl version
+ cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml'
+ [[ -d /home/nfs/images/windows2016 ]]
+ FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml'
+ make functest
hack/dockerized "hack/build-func-tests.sh"
sha256:f9c79b1576e92cbd1766105e07c6f8f86d5dc58b8221d91f0c6f34fa7ab6e384
go version go1.10 linux/amd64
go version go1.10 linux/amd64
Compiling tests...
compiled tests.test
hack/functests.sh
Running Suite: Tests Suite
==========================
Random Seed: 1532077498
Will run 143 of 143 specs

Panic [33.343 seconds]
[BeforeSuite] BeforeSuite
/root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:46

  Test Panicked
  Timeout: request did not complete within allowed duration
  /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/panic.go:505

  Full Stack Trace
  /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/panic.go:505 +0x229
  kubevirt.io/kubevirt/tests.PanicOnError(0x141b580, 0xc4201225a0)
      /root/go/src/kubevirt.io/kubevirt/tests/utils.go:757 +0x4a
  kubevirt.io/kubevirt/tests.createNamespaces()
      /root/go/src/kubevirt.io/kubevirt/tests/utils.go:750 +0x174
  kubevirt.io/kubevirt/tests.BeforeTestSuitSetup()
      /root/go/src/kubevirt.io/kubevirt/tests/utils.go:331 +0xf2
  kubevirt.io/kubevirt/tests_test.glob..func10()
      /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:47 +0x20
  kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc4204144e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
      /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:43 +0xaa
  testing.tRunner(0xc42035b3b0, 0x1397f30)
      /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:777 +0xd0
  created by testing.(*T).Run
      /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:824 +0x2e0
------------------------------

Panic [31.753 seconds]
[AfterSuite] AfterSuite
/root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:50

  Test Panicked
  Timeout: request did not complete within allowed duration
  /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/panic.go:505

  Full Stack Trace
  /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/panic.go:505 +0x229
  kubevirt.io/kubevirt/tests.PanicOnError(0x141b580, 0xc420122630)
      /root/go/src/kubevirt.io/kubevirt/tests/utils.go:757 +0x4a
  kubevirt.io/kubevirt/tests.createNamespaces()
      /root/go/src/kubevirt.io/kubevirt/tests/utils.go:750 +0x174
  kubevirt.io/kubevirt/tests.AfterTestSuitCleanup()
      /root/go/src/kubevirt.io/kubevirt/tests/utils.go:310 +0x22
  kubevirt.io/kubevirt/tests_test.glob..func11()
      /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:51 +0x20
  kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc420414600, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
      /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:43 +0xaa
  testing.tRunner(0xc42035b3b0, 0x1397f30)
      /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:777 +0xd0
  created by testing.(*T).Run
      /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:824 +0x2e0
------------------------------

Ran 143 of 0 Specs in 65.097 seconds
FAIL! -- 0 Passed | 143 Failed | 0 Pending | 0 Skipped
--- FAIL: TestTests (65.10s)
FAIL
make: *** [functest] Error 1
+ make cluster-down
./cluster/down.sh
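Both panics share one root cause: tests.createNamespaces (tests/utils.go:750) could not create the test namespaces before the client-side timeout, so BeforeSuite panicked, all 143 specs were marked failed without executing, and AfterSuite hit the same timeout during cleanup. When triaging a failure like this, a reasonable first step (a suggested follow-up, not part of this run's output) is to check whether the aggregated API layer or the control plane stopped answering:

    # Hypothetical triage commands; not executed in this job.
    cluster/kubectl.sh get apiservices                  # any kubevirt.io apiservice unavailable?
    cluster/kubectl.sh get pods -n kube-system          # did virt-api or kube-apiserver restart?
    cluster/kubectl.sh get events --all-namespaces      # warnings around the failure window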