+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev
+ [[ k8s-1.10.3-dev =~ openshift-.* ]]
+ [[ k8s-1.10.3-dev =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
2018/06/28 20:48:04 Waiting for host: 192.168.66.101:22
2018/06/28 20:48:07 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/06/28 20:48:19 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 25.003398 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:6848e6131c4c42b9a2773d736f2eb9bd8da90c88153013d20ef5b6fd474d4001

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/06/28 20:48:59 Waiting for host: 192.168.66.102:22
2018/06/28 20:49:02 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/06/28 20:49:14 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5454 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
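The repeated "Waiting for host ... Sleeping 5s" lines above are the provisioner polling each node's SSH port until it answers before kubeadm join is attempted. A minimal shell sketch of that retry loop, for orientation only (the real check lives in the gocli provisioning tool, not in these scripts; the host and port are just the values from this run):

    host=192.168.66.102 port=22
    # retry until the TCP port accepts a connection, sleeping 5s between attempts
    until timeout 1 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null; do
        echo "Problem with dial: no route to host. Sleeping 5s"
        sleep 5
    done
    echo "Connected to tcp://${host}:${port}"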
+ set +e + kubectl get nodes --no-headers + cluster/kubectl.sh get nodes --no-headers node01 Ready master 36s v1.10.3 node02 Ready 11s v1.10.3 + kubectl_rc=0 + '[' 0 -ne 0 ']' ++ kubectl get nodes --no-headers ++ cluster/kubectl.sh get nodes --no-headers ++ grep NotReady + '[' -n '' ']' + set -e + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 36s v1.10.3 node02 Ready 11s v1.10.3 + make cluster-sync ./cluster/build.sh Building ... Untagged: localhost:32988/kubevirt/virt-controller:devel Untagged: localhost:32988/kubevirt/virt-controller@sha256:72e4fdeeecad189464be52859d85d1e680a1e8285857f732e992109783efba9f Deleted: sha256:27e8d2a1eacd348fdd525638cbff650e36e1b68cf7bd7f8c89378f5a673f9f68 Deleted: sha256:68bbad0dcba786184769364dace64c3bc226e9d671dc280d6b6048707de24799 Deleted: sha256:15715936c053d7672c6d932e7c67cd718fa5e02ee8884a9832dd91f293668b45 Deleted: sha256:d575ef9a4fba12bf8e26bd62a2875048b765f7f05d71a1534661c257c12c7282 Untagged: localhost:32988/kubevirt/virt-launcher:devel Untagged: localhost:32988/kubevirt/virt-launcher@sha256:cf889f3bfb106bc720f54da4ca75adbe5ddeee1a0a20756a0b6c9ccd0920dc42 Deleted: sha256:477515e160d4fc45f26fe3beb699a4b8119d6f476f4c993e1ef02fa6efb089f7 Deleted: sha256:21b02bcbd74142204dbb0b96367c213c07cf35254f69c34efd853015d3f30cd3 Deleted: sha256:8c14962ceca17f44beaef6054c9af05bb4f64d57b76bb78dd2059250163f8a6e Deleted: sha256:40725e5b6cf2d459d6eb2726501eedf24598d83ec96feea7873fdfa50592a18e Deleted: sha256:785334ac32d6557b0235ab327845d2ab787e6dc98c428d758dcb805fed260623 Deleted: sha256:9d5c7042c1606bcdf75ca6987e6cf4553d0c1652177095c21c5d6f8b6c9aa378 Deleted: sha256:6e76939d460c57550dd7e3d4013b81ca3770117ef077cd021cc80fd0cc723965 Deleted: sha256:6f092755508aac51100c9bb37adbc059d88cb0c1d9b9ee5cca00ae231265ba2a Deleted: sha256:be137b397ea03536ada42c171624239cb5fcdd2dfc78d100c19a5cd01b6a61ba Deleted: sha256:2756ee25a06dd02cef89221818b6f606a4f45f8f1566a1417a38ceed13974673 Deleted: sha256:9cc3088fed6bee22c994b5c848805a3bd77d19bbd7be960457bbf7adc82707a3 Deleted: sha256:de01c7b46d762f227642270152fa3484797e131d85ab0e35e9ae86f8c26b1688 Untagged: localhost:32988/kubevirt/virt-handler:devel Untagged: localhost:32988/kubevirt/virt-handler@sha256:879520270e226214f4c7efeedf1566d7db1b07e25a8ceaadfc37ca9083cc9a03 Deleted: sha256:4cfa2d95b26b0ecdcb8c4d954d208753e75575c5e7f62b12c1630bff4d47c9c6 Deleted: sha256:a62e2efae2953d6b46e8c836c7c1c882fcd13af660bc6589d591f4a6a655d112 Deleted: sha256:01f408f25b80847b49a3497cd5daf59369b311a0887eaf74919fe0ce411a9dab Deleted: sha256:9d9f0d6c008908cb64f4c6ec9685fb93ab6ed48178ec2fba9a9bac4d4cc1d2a9 Untagged: localhost:32988/kubevirt/virt-api:devel Untagged: localhost:32988/kubevirt/virt-api@sha256:d55929da7e47f31f5d52af43f2932ecb55be8cb3147a0ff1aa2c4317852a088c Deleted: sha256:df19d88ff93a691e179ea969fa95af38a6547d56e24e6b23b5a10cb4deeea5c6 Deleted: sha256:db5a55b3c397353682a6b1337e25c986930e045004f4bda2214735c96d09f5e3 Deleted: sha256:293f3a2a2e85c496c318b6200bb1ec149a9e4ad2c1cfcb28387a5983d6f93e22 Deleted: sha256:8dd7d69880fc65d0319f32cf680270ae3052a38b722f6069546132c0071498ea Untagged: localhost:32988/kubevirt/subresource-access-test:devel Untagged: localhost:32988/kubevirt/subresource-access-test@sha256:cabed580e9b1de07f3b029b08b7e96d3e7dbf0a7b7fef039283d7ea7dc74a080 Deleted: sha256:c2ac53a6305deb459d14fec83e282a88e74ce891808255b72ec58da416abadb7 Deleted: sha256:628596bca0b821b0f78af9f0ca32c3e60083724a562beb3b61bad775f60e33ba Deleted: 
sha256:84d14a5c2f69830d4848222e5bc26580e3b752336ff11a3872100f821c300dd3 Deleted: sha256:1e9d087bbfb537fbdaeb7cc401fef66671895a251238255762af595f88b9cc8c sha256:6eacca7072242103a52e09bde728cf8c3c4134c37779287f25cbf1a1b93180b2 go version go1.10 linux/amd64 go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh sha256:6eacca7072242103a52e09bde728cf8c3c4134c37779287f25cbf1a1b93180b2 go version go1.10 linux/amd64 go version go1.10 linux/amd64 find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory Compiling tests... compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 37.27 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> a96d7b80d8b6 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 5c7d576d7c73 Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> 83ec280c04c4 Step 5/8 : USER 1001 ---> Using cache ---> 92b648073fa2 Step 6/8 : COPY virt-controller /usr/bin/virt-controller ---> 1dba71556064 Removing intermediate container 0fca867c59dc Step 7/8 : ENTRYPOINT /usr/bin/virt-controller ---> Running in 96687d1a6aab ---> f0298782be28 Removing intermediate container 96687d1a6aab Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "virt-controller" '' ---> Running in 4804d9cc0cb2 ---> bd3abbb9d362 Removing intermediate container 4804d9cc0cb2 Successfully built bd3abbb9d362 Sending build context to Docker daemon 39.09 MB Step 1/10 : FROM kubevirt/libvirt:3.7.0 ---> 60c80c8f7523 Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0b7dc10e33a1 Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> c3422738d80a Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher ---> ce279c01bb93 Removing intermediate container 2be7424e03a2 Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> 49bb23359f01 Removing intermediate container 6e5600ec16d9 Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64 ---> Running in cd8db55b153b  ---> 664eaa87c4a7 Removing intermediate container cd8db55b153b Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher ---> Running in a25dcfb28431  ---> e9b91a77c8d4 Removing intermediate container a25dcfb28431 Step 8/10 : COPY entrypoint.sh libvirtd.sh sh.sh sock-connector /usr/share/kubevirt/virt-launcher/ ---> bf5442903e07 Removing intermediate container a8ff9b1ff569 Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh ---> Running in f6c8a1c509bc ---> dbae78818db4 Removing intermediate container f6c8a1c509bc Step 10/10 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "virt-launcher" '' ---> Running in 8b1ef902a63a ---> f77f311b8c42 Removing intermediate container 8b1ef902a63a Successfully built f77f311b8c42 Sending build context to Docker daemon 40.64 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> a96d7b80d8b6 Step 3/5 : COPY virt-handler /usr/bin/virt-handler ---> 0517095fc66f Removing intermediate container 8af16ec16c6c Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in 1114d93d9754 ---> 
2b6abb8d40f8
Removing intermediate container 1114d93d9754
Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "virt-handler" ''
 ---> Running in b571cdb4146e
 ---> b28e43a51b18
Removing intermediate container b571cdb4146e
Successfully built b28e43a51b18
Sending build context to Docker daemon 38.09 MB
Step 1/8 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/8 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> a96d7b80d8b6
Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api
 ---> Using cache
 ---> 1ee495c45665
Step 4/8 : WORKDIR /home/virt-api
 ---> Using cache
 ---> d5d529a63aa5
Step 5/8 : USER 1001
 ---> Using cache
 ---> b8cd6b01e5a1
Step 6/8 : COPY virt-api /usr/bin/virt-api
 ---> f08c176b2364
Removing intermediate container 5f54e628ce0a
Step 7/8 : ENTRYPOINT /usr/bin/virt-api
 ---> Running in 72f37f3ddc79
 ---> 802c62bd1120
Removing intermediate container 72f37f3ddc79
Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "virt-api" ''
 ---> Running in c8597576c3fc
 ---> a8773ca8f6bb
Removing intermediate container c8597576c3fc
Successfully built a8773ca8f6bb
Sending build context to Docker daemon 4.608 kB
Step 1/7 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/7 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> a96d7b80d8b6
Step 3/7 : ENV container docker
 ---> Using cache
 ---> cc783cf25db1
Step 4/7 : RUN mkdir -p /images/custom /images/alpine /images/datavolume1 /images/datavolume2 /images/datavolume3 && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img
 ---> Running in 5b40f9812b57
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0 37.0M    0 14160    0     0  14160      0  0:45:39 --:--:--  0:45:39 16837
  9 37.0M    9 3771k    0     0  3771k      0  0:00:10  0:00:01  0:00:09 2089k
 34 37.0M   34 12.7M    0     0  6535k      0  0:00:05  0:00:02  0:00:03 4659k
 73 37.0M   73 27.0M    0     0  9245k      0  0:00:04  0:00:03  0:00:01 7289k
100 37.0M  100 37.0M    0     0  9472k      0  0:00:04  0:00:04 --:--:--  8813k
 ---> ea34e549da54
Removing intermediate container 5b40f9812b57
Step 5/7 : ADD entrypoint.sh /
 ---> 21052dc47a25
Removing intermediate container 4ae3a3d0e7df
Step 6/7 : CMD /entrypoint.sh
 ---> Running in a5338bbd8f73
 ---> 22d8de1795d6
Removing intermediate container a5338bbd8f73
Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.10.3-dev0" ''
 ---> Running in 3bc2e875b424
 ---> 96199799d1ef
Removing intermediate container 3bc2e875b424
Successfully built 96199799d1ef
Sending build context to Docker daemon 2.56 kB
Step 1/5 : FROM fedora:27
 ---> 9110ae7f579f
Step 2/5 : MAINTAINER "The KubeVirt Project"
 ---> Using cache
 ---> a96d7b80d8b6
Step 3/5 : ENV container docker
 ---> Using cache
 ---> cc783cf25db1
Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all
 ---> Using cache
 ---> f43092ff797b
Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "vm-killer" ''
 ---> Using cache
 ---> f3f2864bd2f9
Successfully built f3f2864bd2f9
Sending build context to Docker daemon 5.12 kB
Step 1/7 : FROM debian:sid
 ---> bcec0ae8107e
Step 2/7 : MAINTAINER "David Vossel" \
 ---> Using cache
 ---> eb2ecba9d79d
Step 3/7 : ENV container docker
 ---> Using cache
 ---> 7c8d23462894
Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> 1121e08529fa
Step 5/7 : ADD entry-point.sh /
 ---> Using cache
 ---> 1e9b22eccc69
Step 6/7 :
CMD /entry-point.sh ---> Using cache ---> 918eb49e60d7 Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "registry-disk-v1alpha" '' ---> Using cache ---> 0855e1107edc Successfully built 0855e1107edc Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33191/kubevirt/registry-disk-v1alpha:devel ---> 0855e1107edc Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> 89899bef7e40 Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> 2c19b8793f59 Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-dev0" '' ---> Using cache ---> aa516c92a40a Successfully built aa516c92a40a Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33191/kubevirt/registry-disk-v1alpha:devel ---> 0855e1107edc Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 9b4d2baaf87d Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> 2574a5fc4606 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-dev0" '' ---> Using cache ---> f2a38ad9585d Successfully built f2a38ad9585d Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33191/kubevirt/registry-disk-v1alpha:devel ---> 0855e1107edc Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 9b4d2baaf87d Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> f3816bbb2d22 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-dev0" '' ---> Using cache ---> e441caf51660 Successfully built e441caf51660 Sending build context to Docker daemon 34.91 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> a96d7b80d8b6 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> a93c2ef4d06c Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> b3278975ff14 Step 5/8 : USER 1001 ---> Using cache ---> 7b9c3f06521e Step 6/8 : COPY subresource-access-test /subresource-access-test ---> c450e2e3a7a3 Removing intermediate container 8c64e0ca5ccc Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in e662ce923f68 ---> 26196910c864 Removing intermediate container e662ce923f68 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "subresource-access-test" '' ---> Running in 6cc2178f764e ---> 58c29936c945 Removing intermediate container 6cc2178f764e Successfully built 58c29936c945 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:27 ---> 9110ae7f579f Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> a96d7b80d8b6 Step 3/9 : ENV container docker ---> Using cache ---> cc783cf25db1 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> 1f969d60dcdb Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> ec50d6cdb417 Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 481568cf019c Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 8d12f44cea40 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 5f29a8914a5a 
Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.3-dev0" '' "winrmcli" '' ---> Using cache ---> 801961a71eba Successfully built 801961a71eba Sending build context to Docker daemon 4.096 kB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> a96d7b80d8b6 Step 3/8 : ENV container docker ---> Using cache ---> cc783cf25db1 Step 4/8 : RUN dnf install -y nginx && dnf -y clean all ---> Running in 6ce2b5226e21 Fedora 27 - x86_64 - Updates 1.4 MB/s | 24 MB 00:17 Fedora 27 - x86_64 741 kB/s | 58 MB 01:20 Last metadata expiration check: 0:00:24 ago on Thu Jun 28 20:54:31 2018. Dependencies resolved. ================================================================================ Package Arch Version Repository Size ================================================================================ Installing: nginx x86_64 1:1.12.1-1.fc27 fedora 537 k Upgrading: openssl-libs x86_64 1:1.1.0h-3.fc27 updates 1.3 M Installing dependencies: gc x86_64 7.6.0-7.fc27 fedora 110 k gperftools-libs x86_64 2.6.1-5.fc27 updates 288 k guile x86_64 5:2.0.14-3.fc27 fedora 3.5 M libatomic_ops x86_64 7.4.6-3.fc27 fedora 33 k libstdc++ x86_64 7.3.1-5.fc27 updates 482 k libtool-ltdl x86_64 2.4.6-20.fc27 fedora 55 k libunwind x86_64 1.2.1-3.fc27 updates 67 k make x86_64 1:4.2.1-4.fc27 fedora 494 k nginx-filesystem noarch 1:1.12.1-1.fc27 fedora 20 k nginx-mimetypes noarch 2.1.48-2.fc27 fedora 26 k openssl x86_64 1:1.1.0h-3.fc27 updates 575 k Transaction Summary ================================================================================ Install 12 Packages Upgrade 1 Package Total download size: 7.4 M Downloading Packages: (1/13): nginx-mimetypes-2.1.48-2.fc27.noarch.rp 33 kB/s | 26 kB 00:00 (2/13): nginx-filesystem-1.12.1-1.fc27.noarch.r 24 kB/s | 20 kB 00:00 (3/13): nginx-1.12.1-1.fc27.x86_64.rpm 392 kB/s | 537 kB 00:01 (4/13): make-4.2.1-4.fc27.x86_64.rpm 677 kB/s | 494 kB 00:00 (5/13): gc-7.6.0-7.fc27.x86_64.rpm 541 kB/s | 110 kB 00:00 (6/13): openssl-1.1.0h-3.fc27.x86_64.rpm 642 kB/s | 575 kB 00:00 (7/13): libatomic_ops-7.4.6-3.fc27.x86_64.rpm 103 kB/s | 33 kB 00:00 (8/13): libtool-ltdl-2.4.6-20.fc27.x86_64.rpm 163 kB/s | 55 kB 00:00 (9/13): gperftools-libs-2.6.1-5.fc27.x86_64.rpm 735 kB/s | 288 kB 00:00 (10/13): libunwind-1.2.1-3.fc27.x86_64.rpm 735 kB/s | 67 kB 00:00 (11/13): guile-2.0.14-3.fc27.x86_64.rpm 3.2 MB/s | 3.5 MB 00:01 (12/13): libstdc++-7.3.1-5.fc27.x86_64.rpm 543 kB/s | 482 kB 00:00 (13/13): openssl-libs-1.1.0h-3.fc27.x86_64.rpm 1.6 MB/s | 1.3 MB 00:00 -------------------------------------------------------------------------------- Total 1.8 MB/s | 7.4 MB 00:04 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. 
Running transaction Preparing : 1/1 Upgrading : openssl-libs-1:1.1.0h-3.fc27.x86_64 1/14 Running scriptlet: openssl-libs-1:1.1.0h-3.fc27.x86_64 1/14 Installing : libstdc++-7.3.1-5.fc27.x86_64 2/14 Running scriptlet: libstdc++-7.3.1-5.fc27.x86_64 2/14 Running scriptlet: nginx-filesystem-1:1.12.1-1.fc27.noarch 3/14 Installing : nginx-filesystem-1:1.12.1-1.fc27.noarch 3/14 Installing : nginx-mimetypes-2.1.48-2.fc27.noarch 4/14 Installing : libunwind-1.2.1-3.fc27.x86_64 5/14 Running scriptlet: libunwind-1.2.1-3.fc27.x86_64 5/14 Installing : gperftools-libs-2.6.1-5.fc27.x86_64 6/14 Running scriptlet: gperftools-libs-2.6.1-5.fc27.x86_64 6/14 Installing : libtool-ltdl-2.4.6-20.fc27.x86_64 7/14 Running scriptlet: libtool-ltdl-2.4.6-20.fc27.x86_64 7/14 Installing : libatomic_ops-7.4.6-3.fc27.x86_64 8/14 Running scriptlet: libatomic_ops-7.4.6-3.fc27.x86_64 8/14 Installing : gc-7.6.0-7.fc27.x86_64 9/14 Running scriptlet: gc-7.6.0-7.fc27.x86_64 9/14 Installing : guile-5:2.0.14-3.fc27.x86_64 10/14 Running scriptlet: guile-5:2.0.14-3.fc27.x86_64 10/14 Installing : make-1:4.2.1-4.fc27.x86_64 11/14 Running scriptlet: make-1:4.2.1-4.fc27.x86_64 11/14 Installing : openssl-1:1.1.0h-3.fc27.x86_64 12/14 Installing : nginx-1:1.12.1-1.fc27.x86_64 13/14 Running scriptlet: nginx-1:1.12.1-1.fc27.x86_64 13/14 Cleanup : openssl-libs-1:1.1.0g-1.fc27.x86_64 14/14 Running scriptlet: openssl-libs-1:1.1.0g-1.fc27.x86_64 14/14 Running scriptlet: guile-5:2.0.14-3.fc27.x86_64 14/14 Running scriptlet: openssl-libs-1:1.1.0g-1.fc27.x86_64 14/14Failed to connect to bus: No such file or directory  Verifying : nginx-1:1.12.1-1.fc27.x86_64 1/14 Verifying : nginx-filesystem-1:1.12.1-1.fc27.noarch 2/14 Verifying : nginx-mimetypes-2.1.48-2.fc27.noarch 3/14 Verifying : openssl-1:1.1.0h-3.fc27.x86_64 4/14 Verifying : make-1:4.2.1-4.fc27.x86_64 5/14 Verifying : gc-7.6.0-7.fc27.x86_64 6/14 Verifying : guile-5:2.0.14-3.fc27.x86_64 7/14 Verifying : libatomic_ops-7.4.6-3.fc27.x86_64 8/14 Verifying : libtool-ltdl-2.4.6-20.fc27.x86_64 9/14 Verifying : gperftools-libs-2.6.1-5.fc27.x86_64 10/14 Verifying : libstdc++-7.3.1-5.fc27.x86_64 11/14 Verifying : libunwind-1.2.1-3.fc27.x86_64 12/14 Verifying : openssl-libs-1:1.1.0h-3.fc27.x86_64 13/14 Verifying : openssl-libs-1:1.1.0g-1.fc27.x86_64 14/14 Installed: nginx.x86_64 1:1.12.1-1.fc27 gc.x86_64 7.6.0-7.fc27 gperftools-libs.x86_64 2.6.1-5.fc27 guile.x86_64 5:2.0.14-3.fc27 libatomic_ops.x86_64 7.4.6-3.fc27 libstdc++.x86_64 7.3.1-5.fc27 libtool-ltdl.x86_64 2.4.6-20.fc27 libunwind.x86_64 1.2.1-3.fc27 make.x86_64 1:4.2.1-4.fc27 nginx-filesystem.noarch 1:1.12.1-1.fc27 nginx-mimetypes.noarch 2.1.48-2.fc27 openssl.x86_64 1:1.1.0h-3.fc27 Upgraded: openssl-libs.x86_64 1:1.1.0h-3.fc27 Complete! 
18 files removed ---> 69f8bccd26d0 Removing intermediate container 6ce2b5226e21 Step 5/8 : RUN mkdir -p /usr/share/nginx/html/images && curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /usr/share/nginx/html/images/cirros.img && rm -f /etc/nginx/nginx.conf ---> Running in fd6d1ae8e7f1   % Total % Received % Xferd Average Speed Time Time Time Current  Dload Upload Total Spent  Left Speed  0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0  0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0  29 12.1M 29 3712k 0 0 3712k 0 0:00:03 0:00:01 0:00:02 3060k 100 12.1M 100 12.1M 0 0 12.1M 0 0:00:01 0:00:01 --:--:-- 6992k  ---> ff141dfe1f7f Removing intermediate container fd6d1ae8e7f1 Step 6/8 : ADD nginx.conf /etc/nginx/ ---> d63eee857d75 Removing intermediate container 760d8662aa65 Step 7/8 : EXPOSE 80 ---> Running in 2ee1c7018fff ---> 878ae7cfbc54 Removing intermediate container 2ee1c7018fff Step 8/8 : LABEL "cdi-http-import-server" '' "kubevirt-functional-tests-k8s-1.10.3-dev0" '' ---> Running in 71c1c23d4dd5 ---> db0f4288f32a Removing intermediate container 71c1c23d4dd5 Successfully built db0f4288f32a hack/build-docker.sh push The push refers to a repository [localhost:33191/kubevirt/virt-controller] 09a71a60335e: Preparing 711968c63dc4: Preparing 39bae602f753: Preparing 711968c63dc4: Pushed 09a71a60335e: Pushed 39bae602f753: Pushed devel: digest: sha256:b9331e23a068d604ead5d9b38510f69e2544646ad44af2f70b2da6f3347bf15d size: 948 The push refers to a repository [localhost:33191/kubevirt/virt-launcher] 11ff0b4b5248: Preparing 26f86681e007: Preparing bf5a390d7244: Preparing b6bed20612c8: Preparing d75a40f62f10: Preparing 4ca95d1e0e98: Preparing 530cc55618cd: Preparing 34fa414dfdf6: Preparing a1359dc556dd: Preparing 490c7c373332: Preparing 4b440db36f72: Preparing 39bae602f753: Preparing 34fa414dfdf6: Waiting a1359dc556dd: Waiting 39bae602f753: Waiting 490c7c373332: Waiting 4b440db36f72: Waiting 530cc55618cd: Waiting 4ca95d1e0e98: Waiting b6bed20612c8: Pushed 26f86681e007: Pushed 11ff0b4b5248: Pushed 530cc55618cd: Pushed 34fa414dfdf6: Pushed a1359dc556dd: Pushed 490c7c373332: Pushed 39bae602f753: Mounted from kubevirt/virt-controller bf5a390d7244: Pushed d75a40f62f10: Pushed 4ca95d1e0e98: Pushed 4b440db36f72: Pushed devel: digest: sha256:11efa8e5bc02634df0ec5faa93b12d5f09f3202b1f88a8ceb28749bc22db77df size: 2828 The push refers to a repository [localhost:33191/kubevirt/virt-handler] c55b52f46698: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-launcher c55b52f46698: Pushed devel: digest: sha256:fe341896e19d6a4a8f2e6b350e96b41795015650366b267e88537939145f7abc size: 741 The push refers to a repository [localhost:33191/kubevirt/virt-api] 5293292f71dc: Preparing 53839c3b2a5a: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-handler 53839c3b2a5a: Pushed 5293292f71dc: Pushed devel: digest: sha256:31e5fe50a74526ee1d7bbced2378cb976d839d1ac51d3a98fb95ab5e551c27f6 size: 948 The push refers to a repository [localhost:33191/kubevirt/disks-images-provider] f4c4ffc62e42: Preparing 9f9e4cdcb3eb: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-api f4c4ffc62e42: Pushed 9f9e4cdcb3eb: Pushed devel: digest: sha256:9343dc25e84affdb11397ae55259be856469c1695f15593ce7cdb9012545fb96 size: 948 The push refers to a repository [localhost:33191/kubevirt/vm-killer] 151ffba76ca1: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/disks-images-provider 151ffba76ca1: Pushed devel: digest: 
sha256:6bc806dbd31658839d9d7713fa4995ae4818e315fc3debf10ec063b52bf50bec size: 740 The push refers to a repository [localhost:33191/kubevirt/registry-disk-v1alpha] 780c7b8dc263: Preparing 9e4c3ba110cf: Preparing 6709b2da72b8: Preparing 780c7b8dc263: Pushed 9e4c3ba110cf: Pushed 6709b2da72b8: Pushed devel: digest: sha256:e84dbce988da2290d684274718360fb3700008a1ce230277209d5491947a15ba size: 948 The push refers to a repository [localhost:33191/kubevirt/cirros-registry-disk-demo] 7c9fe64348f5: Preparing 780c7b8dc263: Preparing 9e4c3ba110cf: Preparing 6709b2da72b8: Preparing 6709b2da72b8: Mounted from kubevirt/registry-disk-v1alpha 780c7b8dc263: Mounted from kubevirt/registry-disk-v1alpha 9e4c3ba110cf: Mounted from kubevirt/registry-disk-v1alpha 7c9fe64348f5: Pushed devel: digest: sha256:95a386708b682ddd31d696c6f66aa8dc6a48d0f41edb33eec47911e71427607e size: 1160 The push refers to a repository [localhost:33191/kubevirt/fedora-cloud-registry-disk-demo] 71178104b8ba: Preparing 780c7b8dc263: Preparing 9e4c3ba110cf: Preparing 6709b2da72b8: Preparing 6709b2da72b8: Mounted from kubevirt/cirros-registry-disk-demo 780c7b8dc263: Mounted from kubevirt/cirros-registry-disk-demo 9e4c3ba110cf: Mounted from kubevirt/cirros-registry-disk-demo 71178104b8ba: Pushed devel: digest: sha256:ed0a451bab029393798b21285712b5296b25e49b86e6105f381ca8f7c93c19d5 size: 1161 The push refers to a repository [localhost:33191/kubevirt/alpine-registry-disk-demo] 4ab6a39330d2: Preparing 780c7b8dc263: Preparing 9e4c3ba110cf: Preparing 6709b2da72b8: Preparing 6709b2da72b8: Mounted from kubevirt/fedora-cloud-registry-disk-demo 9e4c3ba110cf: Mounted from kubevirt/fedora-cloud-registry-disk-demo 780c7b8dc263: Mounted from kubevirt/fedora-cloud-registry-disk-demo 4ab6a39330d2: Pushed devel: digest: sha256:4c4cdf99148a77259074ba019be72086ccc60a7d99245a21cf1571483ae26576 size: 1160 The push refers to a repository [localhost:33191/kubevirt/subresource-access-test] 108494756641: Preparing d583c2eb3ac0: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/vm-killer d583c2eb3ac0: Pushed 108494756641: Pushed devel: digest: sha256:94beadfa7228a76c07b4857dd7d3afa0287cf08f16ef8af26c139d2446125735 size: 948 The push refers to a repository [localhost:33191/kubevirt/winrmcli] 3658db2c75ba: Preparing 7a99a4697526: Preparing 8146dcce8c7a: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/subresource-access-test 3658db2c75ba: Pushed 8146dcce8c7a: Pushed 7a99a4697526: Pushed devel: digest: sha256:7d82eea148d5c9f6c473d6e51ec7e10f5c5385208042d29c3e8d5036bd9257b0 size: 1165 The push refers to a repository [localhost:33191/kubevirt/cdi-http-import-server] 4c82655f5104: Preparing 4ac8a08596bb: Preparing 2a6b77090635: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/winrmcli 4c82655f5104: Pushed 4ac8a08596bb: Pushed 2a6b77090635: Pushed devel: digest: sha256:0e9c774829b9834f6714a09e9ec737da5b043873a80ed7a54fbec51e421ace75 size: 1160 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/vendor ++ 
CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-dev ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-dev0 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-dev0 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-alpha.5-17-g3a033fa ++ KUBEVIRT_VERSION=v0.7.0-alpha.5-17-g3a033fa + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli images/cdi-http-import-server' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:33191/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... 
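The hack/config.sh sourcing traced above layers its settings: hack/config-default.sh supplies the baseline values (docker_tag=latest, master_ip=192.168.200.2, namespace=kube-system), and hack/config-provider-k8s-1.10.3.sh then overrides what depends on the ephemeral provider (docker_tag=devel, master_ip=127.0.0.1, docker_prefix=localhost:33191/kubevirt, kubeconfig/kubectl paths). A condensed sketch of that layering, for orientation only:

    # hack/config.sh (condensed sketch of the layering seen in the trace)
    unset binaries docker_images docker_prefix docker_tag master_ip network_provider kubeconfig namespace
    source hack/config-default.sh                                   # baseline values
    source "hack/config-${KUBEVIRT_PROVIDER}.sh"                    # per-Kubernetes-version tweaks
    test -f "hack/config-provider-${KUBEVIRT_PROVIDER}.sh" && \
        source "hack/config-provider-${KUBEVIRT_PROVIDER}.sh"       # ephemeral-provider overrides
    test -f hack/config-local.sh && source hack/config-local.sh     # optional local overrides
    export binaries docker_images docker_prefix docker_tag master_ip network_provider kubeconfig namespace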
+ cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pods -l kubevirt.io No resources found + 
_kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + 
KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ wc -l ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/tests ++ 
APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-dev ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-dev0 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-dev0 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-alpha.5-17-g3a033fa ++ KUBEVIRT_VERSION=v0.7.0-alpha.5-17-g3a033fa + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli images/cdi-http-import-server' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:33191/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... 
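The KUBEVIRT_VERSION used for the deployment is derived on the fly: when no version is preset and the checkout has a .git directory, it comes from git describe, which is why the trace shows v0.7.0-alpha.5-17-g3a033fa. A small sketch of that helper as it behaves in the trace (the final fallback is an assumption, not exercised in this log):

    kubevirt_version() {
        if [ -n "${KUBEVIRT_VERSION}" ]; then
            echo "${KUBEVIRT_VERSION}"                      # explicit version wins
        elif [ -d "${KUBEVIRT_DIR}/.git" ]; then
            git -C "${KUBEVIRT_DIR}" describe --always --tags   # here: v0.7.0-alpha.5-17-g3a033fa
        else
            echo latest                                     # assumed fallback, not shown in this run
        fi
    }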
+ [[ -z k8s-1.10.3-dev ]] + [[ k8s-1.10.3-dev =~ .*-dev ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/dev -R + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/dev -R serviceaccount "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver-auth-delegator" created rolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created role.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller-cluster-admin" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-privileged-cluster-admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:edit" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:view" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:default" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt.io:default" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancereplicasets.kubevirt.io" created service "virt-api" created deployment.extensions "virt-api" created service "virt-controller" created deployment.extensions "virt-controller" created daemonset.extensions "virt-handler" created customresourcedefinition.apiextensions.k8s.io "virtualmachines.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstances.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancepresets.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "host-path-disk-alpine" created persistentvolumeclaim "disk-custom" created persistentvolume "host-path-disk-custom" created persistentvolume "host-path-disk-datavolume1" created persistentvolume "host-path-disk-datavolume2" created persistentvolume "host-path-disk-datavolume3" created service "cdi-http-import-server" created deployment.extensions "cdi-http-import-server" created daemonset.extensions "disks-images-provider" created serviceaccount "kubevirt-testing" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-testing-cluster-admin" created + [[ k8s-1.10.3 =~ os-* ]] + echo Done Done + namespaces=(kube-system default) + [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]] + timeout=300 + sample=30 + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 
'cdi-http-import-server-7dbffbcb77-7f2vv 0/1 ContainerCreating 0 1s
virt-api-7586947775-b5rr6 0/1 ContainerCreating 0 2s
virt-controller-7d57d96b65-kc5ml 0/1 ContainerCreating 0 2s
virt-controller-7d57d96b65-rg4k8 0/1 ContainerCreating 0 2s
virt-handler-2bqgb 0/1 ContainerCreating 0 2s
virt-handler-5gl6w 0/1 ContainerCreating 0 2s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ cluster/kubectl.sh get pods -n kube-system --no-headers
+ grep -v Running
cdi-http-import-server-7dbffbcb77-7f2vv   0/1   ContainerCreating   0   2s
virt-api-7586947775-b5rr6                 0/1   ContainerCreating   0   3s
virt-controller-7d57d96b65-kc5ml          0/1   ContainerCreating   0   3s
virt-controller-7d57d96b65-rg4k8          0/1   ContainerCreating   0   3s
virt-handler-2bqgb                        0/1   ContainerCreating   0   3s
virt-handler-5gl6w                        0/1   ContainerCreating   0   3s
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n 'false
false' ']'
+ echo 'Waiting for KubeVirt containers to become ready ...'
Waiting for KubeVirt containers to become ready ...
+ grep false
+ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
false
false
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
+ '[' -n '' ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
cdi-http-import-server-7dbffbcb77-7f2vv   1/1     Running   0          1m
disks-images-provider-cqws5               1/1     Running   0          1m
disks-images-provider-jc8h9               1/1     Running   0          1m
etcd-node01                               1/1     Running   0          10m
kube-apiserver-node01                     1/1     Running   0          10m
kube-controller-manager-node01            1/1     Running   0          10m
kube-dns-86f4d74b45-4wh4p                 3/3     Running   0          11m
kube-flannel-ds-4mk5q                     1/1     Running   1          11m
kube-flannel-ds-6b7qr                     1/1     Running   0          10m
kube-proxy-bxxhr                          1/1     Running   0          10m
kube-proxy-qdtx9                          1/1     Running   0          11m
kube-scheduler-node01                     1/1     Running   0          10m
virt-api-7586947775-b5rr6                 1/1     Running   0          1m
virt-controller-7d57d96b65-kc5ml          1/1     Running   0          1m
virt-controller-7d57d96b65-rg4k8          1/1     Running   0          1m
virt-handler-2bqgb                        1/1     Running   0          1m
virt-handler-5gl6w                        1/1     Running   0          1m
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n default --no-headers
++ cluster/kubectl.sh get pods -n default --no-headers
++ grep -v Running
No resources found.
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n default
+ cluster/kubectl.sh get pods -n default
No resources found.
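The wait loop traced above boils down to polling each namespace until no pod is outside Running and no container reports ready=false, giving up after the 300s timeout sampled every 30s. A condensed sketch of that loop, assuming the same cluster/kubectl.sh wrapper and the timeout/sample values shown in the trace:

    namespaces=(kube-system default)
    timeout=300; sample=30
    for ns in "${namespaces[@]}"; do
        current_time=0
        # first wait until every pod has reached the Running phase
        while [ -n "$(cluster/kubectl.sh get pods -n "$ns" --no-headers | grep -v Running)" ]; do
            echo "Waiting for kubevirt pods to enter the Running state ..."
            sleep $sample; current_time=$((current_time + sample))
            [ $current_time -gt $timeout ] && exit 1
        done
        current_time=0
        # then wait until no container reports ready=false
        while [ -n "$(cluster/kubectl.sh get pods -n "$ns" '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers | grep false)" ]; do
            echo "Waiting for KubeVirt containers to become ready ..."
            sleep $sample; current_time=$((current_time + sample))
            [ $current_time -gt $timeout ] && exit 1
        done
        cluster/kubectl.sh get pods -n "$ns"
    done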
+ kubectl version + cluster/kubectl.sh version Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} + ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/junit.xml' + [[ -d /home/nfs/images/windows2016 ]] + FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-dev/junit.xml' + make functest hack/dockerized "hack/build-func-tests.sh" sha256:6eacca7072242103a52e09bde728cf8c3c4134c37779287f25cbf1a1b93180b2 go version go1.10 linux/amd64 Waiting for rsyncd to be ready. go version go1.10 linux/amd64 Compiling tests... compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1530219679 Will run 134 of 134 specs • [SLOW TEST:8.505 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:8.510 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given an vm /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:19.470 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi preset /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:5.503 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi replica set /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •• ------------------------------ • Failure [300.148 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should update VirtualMachine once VMIs are up [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:195 Timed out after 300.005s. 
Expected : false to be true /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:201 ------------------------------ • ------------------------------ • Failure [300.069 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should remove owner references on the VirtualMachineInstance if it is orphan deleted [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:217 Timed out after 300.000s. Expected <[]v1.OwnerReference | len:0, cap:0>: nil not to be empty /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:224 ------------------------------ STEP: Starting the VirtualMachineInstance • Failure [300.095 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should recreate VirtualMachineInstance if it gets deleted [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:245 Timed out after 300.000s. Expected success, but got an error: <*errors.StatusError | 0xc42003b050>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "virtualmachineinstances.kubevirt.io \"testvmirvbzz\" not found", Reason: "NotFound", Details: { Name: "testvmirvbzz", Group: "kubevirt.io", Kind: "virtualmachineinstances", UID: "", Causes: nil, RetryAfterSeconds: 0, }, Code: 404, }, } virtualmachineinstances.kubevirt.io "testvmirvbzz" not found /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:150 ------------------------------ STEP: Creating a new VMI STEP: Waiting for the VMI's VirtualMachineInstance to start • Failure [120.066 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should recreate VirtualMachineInstance if the VirtualMachineInstance's pod gets deleted [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:265 Timed out after 120.000s. Expected success, but got an error: <*errors.StatusError | 0xc42084d440>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "virtualmachineinstances.kubevirt.io \"testvmidf965\" not found", Reason: "NotFound", Details: { Name: "testvmidf965", Group: "kubevirt.io", Kind: "virtualmachineinstances", UID: "", Causes: nil, RetryAfterSeconds: 0, }, Code: 404, }, } virtualmachineinstances.kubevirt.io "testvmidf965" not found /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:284 ------------------------------ STEP: Starting the VirtualMachineInstance • Failure [300.076 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should stop VirtualMachineInstance if running set to false [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:325 Timed out after 300.000s. 
Expected success, but got an error: <*errors.StatusError | 0xc4206d75f0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "virtualmachineinstances.kubevirt.io \"testvmix7ch9\" not found", Reason: "NotFound", Details: { Name: "testvmix7ch9", Group: "kubevirt.io", Kind: "virtualmachineinstances", UID: "", Causes: nil, RetryAfterSeconds: 0, }, Code: 404, }, } virtualmachineinstances.kubevirt.io "testvmix7ch9" not found /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:150 ------------------------------ STEP: Doing run: 0 STEP: Starting the VirtualMachineInstance • Failure [300.088 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should start and stop VirtualMachineInstance multiple times [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:333 Timed out after 300.000s. Expected success, but got an error: <*errors.StatusError | 0xc42084cd80>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "virtualmachineinstances.kubevirt.io \"testvmiwzksz\" not found", Reason: "NotFound", Details: { Name: "testvmiwzksz", Group: "kubevirt.io", Kind: "virtualmachineinstances", UID: "", Causes: nil, RetryAfterSeconds: 0, }, Code: 404, }, } virtualmachineinstances.kubevirt.io "testvmiwzksz" not found /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:150 ------------------------------ • Failure [360.074 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should not update the VirtualMachineInstance spec if Running [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:346 Timed out after 360.000s. Expected : false to be true /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:353 ------------------------------ STEP: Creating new VMI, not running STEP: Starting the VirtualMachineInstance • Failure [300.077 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should survive guest shutdown, multiple times [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:387 Timed out after 300.000s. Expected success, but got an error: <*errors.StatusError | 0xc4204f5560>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""}, Status: "Failure", Message: "virtualmachineinstances.kubevirt.io \"testvmiqlsln\" not found", Reason: "NotFound", Details: { Name: "testvmiqlsln", Group: "kubevirt.io", Kind: "virtualmachineinstances", UID: "", Causes: nil, RetryAfterSeconds: 0, }, Code: 404, }, } virtualmachineinstances.kubevirt.io "testvmiqlsln" not found /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:150 ------------------------------ STEP: getting an VMI STEP: Invoking virtctl start STEP: Getting the status of the VMI • Failure [360.132 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435 should start a VirtualMachineInstance once [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:436 Timed out after 360.000s. 
Expected : false to be true /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:453 ------------------------------ STEP: getting an VMI STEP: Invoking virtctl stop STEP: Ensuring VMI is running • Failure [360.060 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 Using virtctl interface /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435 should stop a VirtualMachineInstance once [It] /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:467 Timed out after 360.000s. Expected : false to be true /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:480 ------------------------------ STEP: Starting a VirtualMachineInstance STEP: Waiting the VirtualMachineInstance start • Failure [180.208 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81 should have cloud-init data [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:82 Timed out after 90.006s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027 ------------------------------ STEP: Starting a VirtualMachineInstance STEP: Waiting the VirtualMachineInstance start • Failure [180.237 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81 with injected ssh-key /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:92 should have ssh-key under authorized keys [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:93 Timed out after 90.004s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027 ------------------------------ STEP: Starting a VirtualMachineInstance STEP: Waiting the VirtualMachineInstance start • Failure [180.233 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userData source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:118 should process provided cloud-init data [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:119 Timed out after 90.004s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027 ------------------------------ STEP: Creating a user-data secret STEP: Starting a VirtualMachineInstance STEP: Waiting the VirtualMachineInstance start • Failure [180.274 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 should take user-data from k8s secret [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:162 Timed out after 90.005s. 
Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027 ------------------------------ STEP: Creating a new VirtualMachineInstance • Failure [180.240 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a cirros image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66 should return that we are running cirros [It] /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67 Timed out after 90.004s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027 ------------------------------ STEP: Creating a new VirtualMachineInstance • Failure [180.234 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 with a fedora image /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:76 should return that we are running fedora [It] /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:77 Timed out after 90.004s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027 ------------------------------ STEP: Creating a new VirtualMachineInstance • Failure [180.234 seconds] Console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64 with a serial console /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65 should be able to reconnect to console multiple times [It] /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:86 Timed out after 90.004s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027 ------------------------------ •••••••••••• ------------------------------ • Failure in Spec Setup (BeforeEach) [60.035 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose ClusterIP service [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:68 Should expose a Cluster IP service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:71 Timed out after 30.003s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027 ------------------------------ • Failure in Spec Setup (BeforeEach) [60.030 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose NodePort service [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:98 Should expose a NodePort service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:103 Timed out after 30.004s. 
Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027 ------------------------------ • Failure in Spec Setup (BeforeEach) [60.029 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:140 Expose ClusterIP UDP service [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:147 Should expose a ClusterIP service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:151 Timed out after 30.003s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027 ------------------------------ • Failure in Spec Setup (BeforeEach) [60.025 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:140 Expose NodePort UDP service [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:179 Should expose a NodePort service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:184 Timed out after 30.003s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027 ------------------------------ STEP: Creating a VMRS object with 2 replicas STEP: Start the replica set STEP: Checking the number of ready replicas • Failure in Spec Setup (BeforeEach) [120.015 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM replica set /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:227 Expose ClusterIP service [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:260 Should create a ClusterIP service on VMRS and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:264 Timed out after 120.000s. Expected : 0 to equal : 2 /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:245 ------------------------------ Service cluster-ip-ovm successfully exposed for virtualmachine testvmiwnp9f STEP: Creating an OVM object STEP: Creating the OVM STEP: Exposing a service on the OVM using virtctl STEP: Calling the start command STEP: Getting the status of the OVM • Failure in Spec Setup (BeforeEach) [120.082 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on an Offline VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:292 Expose ClusterIP service [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:336 Connect to ClusterIP services that was set when VM was offline /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:337 Timed out after 120.000s. Expected : false to be true /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:323 ------------------------------ •STEP: Starting a VirtualMachineInstance STEP: Waiting until the VirtualMachineInstance will start ------------------------------ • Failure [180.244 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Timed out after 90.005s. 
Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027 ------------------------------ STEP: Starting a VirtualMachineInstance STEP: Waiting until the VirtualMachineInstance will start • Failure [180.240 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with CDRom PVC [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Timed out after 90.007s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027 ------------------------------ STEP: Starting and stopping the VirtualMachineInstance number of times STEP: Starting a VirtualMachineInstance STEP: Waiting until the VirtualMachineInstance will start • Failure [180.233 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 Timed out after 90.003s. Expected : false to equal : true /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027 ------------------------------ panic: test timed out after 1h30m0s goroutine 9802 [running]: testing.(*M).startAlarm.func1() /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1240 +0xfc created by time.goFunc /gimme/.gimme/versions/go1.10.linux.amd64/src/time/sleep.go:172 +0x44 goroutine 1 [chan receive, 90 minutes]: testing.(*T).Run(0xc4203fde00, 0x124d089, 0x9, 0x12d7348, 0x47fa16) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:825 +0x301 testing.runTests.func1(0xc4203fdd10) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1063 +0x64 testing.tRunner(0xc4203fdd10, 0xc420713df8) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:777 +0xd0 testing.runTests(0xc42031af20, 0x1b16e00, 0x1, 0x1, 0x412009) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1061 +0x2c4 testing.(*M).Run(0xc4202cf380, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:978 +0x171 main.main() _testmain.go:44 +0x151 goroutine 5 [chan receive]: kubevirt.io/kubevirt/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x1b3db80) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/golang/glog/glog.go:879 +0x8b created by kubevirt.io/kubevirt/vendor/github.com/golang/glog.init.0 /root/go/src/kubevirt.io/kubevirt/vendor/github.com/golang/glog/glog.go:410 +0x203 goroutine 6 [syscall, 90 minutes]: os/signal.signal_recv(0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/sigqueue.go:139 +0xa6 os/signal.loop() /gimme/.gimme/versions/go1.10.linux.amd64/src/os/signal/signal_unix.go:22 +0x22 created by os/signal.init.0 /gimme/.gimme/versions/go1.10.linux.amd64/src/os/signal/signal_unix.go:28 +0x41 goroutine 22 [select]: kubevirt.io/kubevirt/tests.(*ObjectEventWatcher).Watch(0xc420aaee20, 0xc42060e120) /root/go/src/kubevirt.io/kubevirt/tests/utils.go:286 +0x57e 
kubevirt.io/kubevirt/tests.(*ObjectEventWatcher).WaitFor(0xc420aaee20, 0x1249137, 0x6, 0x10c6380, 0x134b4a0, 0x0) /root/go/src/kubevirt.io/kubevirt/tests/utils.go:296 +0xba kubevirt.io/kubevirt/tests.waitForVMIStart(0x1353780, 0xc420366d80, 0x5a, 0x0, 0x1089a00, 0xc4202a7480) /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1013 +0x4d2 kubevirt.io/kubevirt/tests.WaitForSuccessfulVMIStartWithTimeout(0x1353780, 0xc420366d80, 0x5a, 0x0, 0x0) /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1037 +0x44 kubevirt.io/kubevirt/tests_test.glob..func7.2(0xc420366900, 0x0, 0x5a, 0x0) /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:66 +0x266 kubevirt.io/kubevirt/tests_test.glob..func7.3.1.2(0x12d7338) /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:92 +0x188 reflect.Value.call(0x1085240, 0xc4202f6f90, 0x13, 0x12475ad, 0x4, 0xc4202d62c0, 0x1, 0x1, 0xc4200c9d60, 0x0, ...) /gimme/.gimme/versions/go1.10.linux.amd64/src/reflect/value.go:447 +0x969 reflect.Value.Call(0x1085240, 0xc4202f6f90, 0x13, 0xc4202d62c0, 0x1, 0x1, 0xc420aaf3b8, 0x20, 0xc420aaf3b0) /gimme/.gimme/versions/go1.10.linux.amd64/src/reflect/value.go:308 +0xa4 kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table.TableEntry.generateIt.func1() /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:40 +0x57 kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc42058d8c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:109 +0x9c kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc42058d8c0, 0x3, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:63 +0x13e kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc4202d64a0, 0x1350780, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:25 +0x7f kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc42083d040, 0x0, 0x1350780, 0xc4200c4f60) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:176 +0x5a6 kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc42083d040, 0x1350780, 0xc4200c4f60) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:127 +0xe3 kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc42025ac80, 0xc42083d040, 0x0) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:198 +0x10d kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc42025ac80, 0x12d8001) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:168 +0x32c kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc42025ac80, 0xb) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:64 +0xdc kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc4200c8370, 0x7fba5c28fc48, 0xc4203fde00, 0x124f44f, 0xb, 0xc42031b020, 0x2, 0x2, 0x136a6a0, 0xc4200c4f60, ...) 
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x27c kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x13514c0, 0xc4203fde00, 0x124f44f, 0xb, 0xc42031af80, 0x2, 0x2, 0x1) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:218 +0x253 kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x13514c0, 0xc4203fde00, 0x124f44f, 0xb, 0xc42040a880, 0x1, 0x1, 0x1) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:206 +0x129 kubevirt.io/kubevirt/tests_test.TestTests(0xc4203fde00) /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:42 +0xaa testing.tRunner(0xc4203fde00, 0x12d7348) /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:777 +0xd0 created by testing.(*T).Run /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:824 +0x2e0 goroutine 23 [chan receive, 90 minutes]: kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).registerForInterrupts(0xc42025ac80) /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:220 +0xc0 created by kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:59 +0x60 goroutine 9 [select, 90 minutes, locked to thread]: runtime.gopark(0x12d90d8, 0x0, 0x1249e51, 0x6, 0x18, 0x1) /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/proc.go:291 +0x11a runtime.selectgo(0xc42047f750, 0xc4200ba180) /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/select.go:392 +0xe50 runtime.ensureSigM.func1() /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/signal_unix.go:549 +0x1f4 runtime.goexit() /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/asm_amd64.s:2361 +0x1 goroutine 40 [IO wait]: internal/poll.runtime_pollWait(0x7fba5c2ddea0, 0x72, 0xc4206b9850) /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/netpoll.go:173 +0x57 internal/poll.(*pollDesc).wait(0xc420277098, 0x72, 0xffffffffffffff00, 0x1352460, 0x1a2e638) /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_poll_runtime.go:85 +0x9b internal/poll.(*pollDesc).waitRead(0xc420277098, 0xc420712000, 0x4000, 0x4000) /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_poll_runtime.go:90 +0x3d internal/poll.(*FD).Read(0xc420277080, 0xc420712000, 0x4000, 0x4000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_unix.go:157 +0x17d net.(*netFD).Read(0xc420277080, 0xc420712000, 0x4000, 0x4000, 0xc4206b9a30, 0x877382, 0x119eba0) /gimme/.gimme/versions/go1.10.linux.amd64/src/net/fd_unix.go:202 +0x4f net.(*conn).Read(0xc4204e6000, 0xc420712000, 0x4000, 0x4000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/net/net.go:176 +0x6a crypto/tls.(*block).readFromUntil(0xc420378090, 0x7fba5c1f9000, 0xc4204e6000, 0x5, 0xc4204e6000, 0x100000001247919) /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:493 +0x96 crypto/tls.(*Conn).readRecord(0xc420534380, 0x12d9217, 0xc4205344a0, 0x10) /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:595 +0xe0 crypto/tls.(*Conn).Read(0xc420534380, 0xc4206a7000, 0x1000, 0x1000, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:1156 +0x100 bufio.(*Reader).Read(0xc4206d4720, 0xc420530118, 0x9, 0x9, 0x2, 0xc4206b9c70, 0x405985) 
/gimme/.gimme/versions/go1.10.linux.amd64/src/bufio/bufio.go:216 +0x238 io.ReadAtLeast(0x134f440, 0xc4206d4720, 0xc420530118, 0x9, 0x9, 0x9, 0xc4201009e0, 0xc4206b9d30, 0x4056b7) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:309 +0x86 io.ReadFull(0x134f440, 0xc4206d4720, 0xc420530118, 0x9, 0x9, 0x0, 0x0, 0x0) /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:327 +0x58 kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.readFrameHeader(0xc420530118, 0x9, 0x9, 0x134f440, 0xc4206d4720, 0x0, 0x0, 0x1, 0xc4206b9df8) /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/frame.go:237 +0x7b kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc4205300e0, 0xc420718c60, 0x0, 0x0, 0x0) /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/frame.go:492 +0xa4 kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc4206b9fb0, 0x12d8210, 0xc4206b37b0) /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:1428 +0x8e kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc4201fa000) /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:1354 +0x76 created by kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*Transport).newClientConn /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:579 +0x651 goroutine 9834 [semacquire]: sync.runtime_notifyListWait(0xc42025a680, 0xc400000000) /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/sema.go:510 +0x10b sync.(*Cond).Wait(0xc42025a670) /gimme/.gimme/versions/go1.10.linux.amd64/src/sync/cond.go:56 +0x80 kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*pipe).Read(0xc42025a668, 0xc420742800, 0x200, 0x200, 0x0, 0x0, 0x0) /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/pipe.go:64 +0x8f kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.transportResponseBody.Read(0xc42025a640, 0xc420742800, 0x200, 0x200, 0x0, 0x0, 0x0) /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:1674 +0xa1 encoding/json.(*Decoder).refill(0xc4201a65a0, 0x2b2034313a6f672e, 0x6b0a66377830) /gimme/.gimme/versions/go1.10.linux.amd64/src/encoding/json/stream.go:159 +0x132 encoding/json.(*Decoder).readValue(0xc4201a65a0, 0x0, 0x0, 0x10d68e0) /gimme/.gimme/versions/go1.10.linux.amd64/src/encoding/json/stream.go:134 +0x23d encoding/json.(*Decoder).Decode(0xc4201a65a0, 0x10f00e0, 0xc4206f3420, 0x746e692f6f676b6e, 0x70732f6c616e7265) /gimme/.gimme/versions/go1.10.linux.amd64/src/encoding/json/stream.go:63 +0x78 kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/util/framer.(*jsonFrameReader).Read(0xc420718cf0, 0xc420399400, 0x400, 0x400, 0xc4209c4f80, 0x40, 0x38) /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/util/framer/framer.go:150 +0x295 kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/runtime/serializer/streaming.(*decoder).Decode(0xc420614e10, 0x0, 0x13575c0, 0xc4209c4f80, 0x3963333830323463, 0x29307830202c3063, 0x2f746f6f722f090a, 0x6b2f6372732f6f67, 0x2e74726976656275) /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/runtime/serializer/streaming/streaming.go:77 +0x95 kubevirt.io/kubevirt/vendor/k8s.io/client-go/rest/watch.(*Decoder).Decode(0xc4206f33e0, 0x12d8798, 0x0, 0x0, 0x0, 0x687469672f726f64, 0x6f2f6d6f632e6275) /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/client-go/rest/watch/decoder.go:49 +0x7c kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc420718d20) 
/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:93 +0x12e
created by kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
	/root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:60 +0xa8

goroutine 9835 [chan receive]:
kubevirt.io/kubevirt/tests.(*ObjectEventWatcher).Watch.func3(0x1357a40, 0xc420718d20, 0x0, 0xc420a9e780, 0xc4206a4540)
	/root/go/src/kubevirt.io/kubevirt/tests/utils.go:273 +0x93
created by kubevirt.io/kubevirt/tests.(*ObjectEventWatcher).Watch
	/root/go/src/kubevirt.io/kubevirt/tests/utils.go:271 +0x4b3
make: *** [functest] Error 2
+ make cluster-down
./cluster/down.sh
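The run does not end with a Ginkgo summary: "panic: test timed out after 1h30m0s" is the Go test binary's global watchdog firing while specs were still polling, and the crash of the compiled suite (tests.test) is what surfaces as "make: *** [functest] Error 2" before the cluster teardown runs. The arithmetic is straightforward: more than two dozen specs above each spend between one and six minutes waiting out a polling assertion before failing (the repeated "Expected : false to be true", plus NotFound errors for virtualmachineinstances that were never created), which by itself consumes most of the 90-minute ceiling long before all 134 specs can run. Below is a hedged sketch of how one might poke at the underlying symptom while the environment is still up, i.e. on a re-run before cluster-down; none of these commands appear in this log, the timeout flag shown is a presumed detail of how hack/functests.sh drives the suite, and the pod name is copied from the listing earlier in this run.

# The suite-wide abort corresponds to the standard Go test timeout flag; the
# built binary would be driven with something along the lines of
#   ./tests.test -test.timeout 90m --ginkgo.noColor --junit-output=...
# (the concrete invocation inside hack/functests.sh is not visible here).

# Were the VMIs the specs were polling for ever created? The 404s above say no.
cluster/kubectl.sh get virtualmachineinstances.kubevirt.io --all-namespaces

# KubeVirt starts one virt-launcher pod per VMI; if none exist, the VMIs never
# made it past the controller.
cluster/kubectl.sh get pods --all-namespaces | grep virt-launcher

# virt-controller is the component that turns VirtualMachine/VMI objects into
# launcher pods, so its log is the first place to look (pod name taken from the
# listing above; it will differ on a fresh run).
cluster/kubectl.sh logs -n kube-system virt-controller-7d57d96b65-kc5ml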