+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release + WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release + [[ k8s-1.10.3-release =~ openshift-.* ]] + [[ k8s-1.10.3-release =~ .*-1.9.3-.* ]] + export KUBEVIRT_PROVIDER=k8s-1.10.3 + KUBEVIRT_PROVIDER=k8s-1.10.3 + export KUBEVIRT_NUM_NODES=2 + KUBEVIRT_NUM_NODES=2 + export NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + NFS_WINDOWS_DIR=/home/nfs/images/windows2016 + export NAMESPACE=kube-system + NAMESPACE=kube-system + trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP + make cluster-down ./cluster/down.sh + make cluster-up ./cluster/up.sh Downloading ....... Downloading ....... 2018/06/28 17:04:35 Waiting for host: 192.168.66.101:22 2018/06/28 17:04:38 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s 2018/06/28 17:04:46 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: connection refused. Sleeping 5s 2018/06/28 17:04:51 Connected to tcp://192.168.66.101:22 + kubeadm init --config /etc/kubernetes/kubeadm.conf [init] Using Kubernetes version: v1.10.3 [init] Using Authorization modes: [Node RBAC] [preflight] Running pre-flight checks. [WARNING FileExisting-crictl]: crictl not found in system path Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version. [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated etcd/ca certificate and key. [certificates] Generated etcd/server certificate and key. [certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1] [certificates] Generated etcd/peer certificate and key. [certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101] [certificates] Generated etcd/healthcheck-client certificate and key. [certificates] Generated apiserver-etcd-client certificate and key. [certificates] Generated sa key and public key. [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf" [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests". 
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 24.007955 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:f296e4b47652fa74d3568e461948d93b2049c9e2e490e174846dafd4d81fe968

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/06/28 17:05:30 Waiting for host: 192.168.66.102:22
2018/06/28 17:05:33 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/06/28 17:05:45 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
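For reference, the cluster bring-up traced above reduces to the following commands (a condensed sketch of what ./cluster/up.sh ran on the two nodes; the token, CA hash and addresses are the throwaway values printed in this log):

    # on node01 (master)
    kubeadm init --config /etc/kubernetes/kubeadm.conf
    kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f \
        https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
    kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 \
        node-role.kubernetes.io/master:NoSchedule-   # allow workloads to schedule on the master

    # on node02 (worker)
    kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 \
        --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true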
Sending file modes: C0755 39588992 kubectl Sending file modes: C0600 5450 admin.conf Cluster "kubernetes" set. Cluster "kubernetes" set. + set +e + kubectl get nodes --no-headers + cluster/kubectl.sh get nodes --no-headers node01 Ready master 41s v1.10.3 node02 Ready 18s v1.10.3 + kubectl_rc=0 + '[' 0 -ne 0 ']' ++ kubectl get nodes --no-headers ++ grep NotReady ++ cluster/kubectl.sh get nodes --no-headers + '[' -n '' ']' + set -e + echo 'Nodes are ready:' Nodes are ready: + kubectl get nodes + cluster/kubectl.sh get nodes NAME STATUS ROLES AGE VERSION node01 Ready master 42s v1.10.3 node02 Ready 19s v1.10.3 + make cluster-sync ./cluster/build.sh Building ... Untagged: localhost:37772/kubevirt/virt-controller:devel Untagged: localhost:37772/kubevirt/virt-controller@sha256:26a92436dfeca964af9a4766a03df18e49ab4b515c35384354d8702a1fd19c86 Deleted: sha256:6daa5cf2c228030a436fbbcebb41936a3c0ad6332ccf6a5c4200332e44d1103e Deleted: sha256:db77f74041cb93678c826345a3f8ca5627f4bb9e15473dcc779b831691ff095e Deleted: sha256:07355c0a2cdf72d00e6555bab4bc08d007cb9cfa29c7062e600dfff95b3749e9 Deleted: sha256:54e91cbd4f594e1f6e59c7fee117d088e55d6e91aa66bfcda7ad6f33ec24ee03 Untagged: localhost:37772/kubevirt/virt-launcher:devel Untagged: localhost:37772/kubevirt/virt-launcher@sha256:82d97df85eb6b0a54ee46382275e65b30e46f15131338ef7b3ea8e459e8c7178 Deleted: sha256:be1688c43c166e2c0866c0491d22ba9e56f76a0e3c97991a3c4fc7095648d12c Deleted: sha256:88ba56adae4cecb47d2d7f3fe19c4fcfd02ab445d32d9a1922a3fa4fd4702e9d Deleted: sha256:da2000945fce0406232aa2918f76eaad6ecd96697c34827e5610a449a22de9a9 Deleted: sha256:06e6231623c5af2066a6232bee7a65bf0872f75e23d2537dc2547baee75b6256 Deleted: sha256:c84f39f4dfb230b2222b116c4d442f8e365aab2510fab11050c6a9540256c7ff Deleted: sha256:b9c9786d5bd7fed61d5a70a6ed80848547df0fa128b910ed8668ce4c77e18606 Deleted: sha256:85f5626a189d66f5ea01a569e68105bc5612371eb3cd7394f446cc29a9d34a0e Deleted: sha256:3e7a05d90f794c78e7576b64986d9a2efb8ca883fd8c7dc38258660f3ffcbb58 Deleted: sha256:924643e67488ed6dd45f3a680148b2a8387dae7d18e84f4b8638e55446e52e80 Deleted: sha256:130dd49fa5d8c67f78f4349051d887ddea459c8f30251abc4c323001b41e58f7 Deleted: sha256:17431a7981559943a5e67e3e74badeeaca871529594563ab2775f389bf5ec402 Deleted: sha256:8599bf67fd25c22eed7a7f0659875527b005c488f82f7c7dfa27281a28f1fa55 Deleted: sha256:526a24146750d669a42b2b5586c4458437e0a5c2398224fb993b25c4245b3315 Deleted: sha256:b57012a77d46291863bc51bfc4297c17edb0ca63f783b4b3cb7efd5ed8285d5c Deleted: sha256:1e3cffbc29868dbaeb205a51be25fef7082627746ac0c49a20f3a38b43a9ba95 Deleted: sha256:0558ae324fb1dbcd4ae764a8737816bfe2b9588ab111342214decdb98640e1ba Deleted: sha256:b3b440dee3a324332f9f9437abd36ebc868dd65598cb7d4ce919b66668a85556 Deleted: sha256:b36290506676563a7a05e42108e1efcea494224b2132cd084718cc6e1919069f Untagged: localhost:37772/kubevirt/virt-handler:devel Untagged: localhost:37772/kubevirt/virt-handler@sha256:04f55716390793b23ea2333dd6c4fe97ec45e4bdfecf76590f09f4e6e7e32112 Deleted: sha256:f7f452a5c7994614d3ba368732c6ab781ca48178c47822681b32851ce64f7f1a Deleted: sha256:8ec9614e6ecdda7b45293400dfcc5f6a68f4913adb0d002fb60f45bbe4400b98 Deleted: sha256:5c6c4eb3723235aa94a7259a79b4b10c5abc79b1c5b49a1cee9afb81684e903f Deleted: sha256:a0900c123bdc82c6a309e3c5441dc3b8bd0c8b5a842108e53752393413dc69d0 Untagged: localhost:37772/kubevirt/virt-api:devel Untagged: localhost:37772/kubevirt/virt-api@sha256:fd2b471edd94c03c0bd11a7607b4d31d20b905d03da97bf2e1faed5a8f24f538 Deleted: sha256:6d23cafcd1c7bae00ecd90615ea6dcf5adf55d0dda3d7f300bfc172e07f66093 
Deleted: sha256:61713bee49a3f52404313feed930b6aa6ac990a31edacc93880be69b824e55a0 Deleted: sha256:d9ec71f7d7f13f92eed88badaf5aa9b7184b61583200cdbb5cb5c4d54adc28db Deleted: sha256:5e3ff5130ddf0c462db70ebb41fb651a77b118d4cb098cd684f996a5886b8e18 Untagged: localhost:37772/kubevirt/subresource-access-test:devel Untagged: localhost:37772/kubevirt/subresource-access-test@sha256:aec2eac1da7884ef35cba60ea2914c9278a15c4d7ea956771c5b528054311eeb Deleted: sha256:a647df7ec09d056d00d1c9ba253328b788745b7a87c0e7e87fe5482d8feedc11 Deleted: sha256:e6b25ddf09475361c97171f810684f3d677fa7c092213912d56bdec446aab3f0 Deleted: sha256:3c80638d8de61a41b63407d8db15ea2f1e2b9b8c303bb55fead60bc8c79699ad Deleted: sha256:4a5f4bdbb2b5daf8a35b5340a0417864d019e27d4e1f0457e71123e7a98e24ec sha256:6eacca7072242103a52e09bde728cf8c3c4134c37779287f25cbf1a1b93180b2 go version go1.10 linux/amd64 go version go1.10 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && KUBEVIRT_VERSION= ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh sha256:6eacca7072242103a52e09bde728cf8c3c4134c37779287f25cbf1a1b93180b2 go version go1.10 linux/amd64 go version go1.10 linux/amd64 find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory Compiling tests... compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 37.27 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> a96d7b80d8b6 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 5c7d576d7c73 Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> 83ec280c04c4 Step 5/8 : USER 1001 ---> Using cache ---> 92b648073fa2 Step 6/8 : COPY virt-controller /usr/bin/virt-controller ---> f337eb8c49e4 Removing intermediate container 63fc8d8719bd Step 7/8 : ENTRYPOINT /usr/bin/virt-controller ---> Running in aecaea17dd02 ---> f81628941462 Removing intermediate container aecaea17dd02 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-controller" '' ---> Running in f63522756264 ---> d528d4b2b0e3 Removing intermediate container f63522756264 Successfully built d528d4b2b0e3 Sending build context to Docker daemon 39.09 MB Step 1/10 : FROM kubevirt/libvirt:3.7.0 ---> 60c80c8f7523 Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0b7dc10e33a1 Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> c3422738d80a Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher ---> 422cc22f804b Removing intermediate container 0b891a18b721 Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> 7f10f183e6a5 Removing intermediate container 5aa6e0da8815 Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64 ---> Running in 4ac4f3131492  ---> 1b10909c79d5 Removing intermediate container 4ac4f3131492 Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher ---> Running in 5a598430dec4  ---> be0d60d83c57 Removing intermediate container 5a598430dec4 Step 8/10 : COPY entrypoint.sh libvirtd.sh sh.sh sock-connector /usr/share/kubevirt/virt-launcher/ ---> 4568747e6cca Removing intermediate container 33c8813d3fe3 Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh ---> Running in 2e3545e39745 ---> c5c82cb3228b 
Removing intermediate container 2e3545e39745 Step 10/10 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-launcher" '' ---> Running in b943b0fc5c90 ---> 65fe77c457fb Removing intermediate container b943b0fc5c90 Successfully built 65fe77c457fb Sending build context to Docker daemon 40.64 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> a96d7b80d8b6 Step 3/5 : COPY virt-handler /usr/bin/virt-handler ---> edf17bc1a216 Removing intermediate container a3c234f5d581 Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in 4c71df870d29 ---> cfc3f7ec3019 Removing intermediate container 4c71df870d29 Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-handler" '' ---> Running in f0156cb65b0a ---> 4d4fc2b059f6 Removing intermediate container f0156cb65b0a Successfully built 4d4fc2b059f6 Sending build context to Docker daemon 38.09 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> a96d7b80d8b6 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 1ee495c45665 Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> d5d529a63aa5 Step 5/8 : USER 1001 ---> Using cache ---> b8cd6b01e5a1 Step 6/8 : COPY virt-api /usr/bin/virt-api ---> fc1c584b6031 Removing intermediate container 46add34c090e Step 7/8 : ENTRYPOINT /usr/bin/virt-api ---> Running in 8afb5174a360 ---> d7589ffd2ca3 Removing intermediate container 8afb5174a360 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "virt-api" '' ---> Running in 0a6fac198d99 ---> 115499ec9bbc Removing intermediate container 0a6fac198d99 Successfully built 115499ec9bbc Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:27 ---> 9110ae7f579f Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> a96d7b80d8b6 Step 3/7 : ENV container docker ---> Using cache ---> cc783cf25db1 Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> 57c4214fa0a2 Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> 7678d66ff4f8 Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> 3b51d5324eb3 Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.10.3-release1" '' ---> Using cache ---> fd878c110601 Successfully built fd878c110601 Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> a96d7b80d8b6 Step 3/5 : ENV container docker ---> Using cache ---> cc783cf25db1 Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> f43092ff797b Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "vm-killer" '' ---> Using cache ---> 1a20b5bd01ab Successfully built 1a20b5bd01ab Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> bcec0ae8107e Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> eb2ecba9d79d Step 3/7 : ENV container docker ---> Using cache ---> 7c8d23462894 Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> 1121e08529fa Step 5/7 : ADD entry-point.sh / ---> Using cache ---> 1e9b22eccc69 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> 918eb49e60d7 Step 7/7 : 
LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "registry-disk-v1alpha" '' ---> Using cache ---> b877bc559273 Successfully built b877bc559273 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33159/kubevirt/registry-disk-v1alpha:devel ---> b877bc559273 Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> 59d68ae7ad78 Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> 212cf9b33d74 Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release1" '' ---> Using cache ---> 453aac2b2006 Successfully built 453aac2b2006 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33159/kubevirt/registry-disk-v1alpha:devel ---> b877bc559273 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> db33e8b9e01c Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> 8a3ba18a9a31 Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release1" '' ---> Using cache ---> 4299ed7458a2 Successfully built 4299ed7458a2 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33159/kubevirt/registry-disk-v1alpha:devel ---> b877bc559273 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> db33e8b9e01c Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> 81d66eaf6b5b Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release1" '' ---> Using cache ---> d0dcc2bba929 Successfully built d0dcc2bba929 Sending build context to Docker daemon 34.91 MB Step 1/8 : FROM fedora:27 ---> 9110ae7f579f Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> a96d7b80d8b6 Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> a93c2ef4d06c Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> b3278975ff14 Step 5/8 : USER 1001 ---> Using cache ---> 7b9c3f06521e Step 6/8 : COPY subresource-access-test /subresource-access-test ---> df0884aee10e Removing intermediate container c98fc6a5a6b8 Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in 8ed893b45851 ---> 03c51142384e Removing intermediate container 8ed893b45851 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release1" '' "subresource-access-test" '' ---> Running in 8415fb9fb69f ---> 8208c1524254 Removing intermediate container 8415fb9fb69f Successfully built 8208c1524254 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:27 ---> 9110ae7f579f Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> a96d7b80d8b6 Step 3/9 : ENV container docker ---> Using cache ---> cc783cf25db1 Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> 1f969d60dcdb Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> ec50d6cdb417 Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 481568cf019c Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 8d12f44cea40 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 5f29a8914a5a Step 9/9 : LABEL 
"kubevirt-functional-tests-k8s-1.10.3-release1" '' "winrmcli" '' ---> Using cache ---> 59ebf0b8b3ca Successfully built 59ebf0b8b3ca hack/build-docker.sh push The push refers to a repository [localhost:33159/kubevirt/virt-controller] e7b02c5de854: Preparing 711968c63dc4: Preparing 39bae602f753: Preparing 39bae602f753: Waiting 711968c63dc4: Pushed e7b02c5de854: Pushed 39bae602f753: Pushed devel: digest: sha256:187dbb547c946d88f3a718d87742bdfd7599cd3fe9cd6e28d543d11440bfe965 size: 948 The push refers to a repository [localhost:33159/kubevirt/virt-launcher] d38a498de0f4: Preparing 762725699365: Preparing ff711f354eae: Preparing f4bfcb9dfd47: Preparing 63f3fab08e6d: Preparing 4ca95d1e0e98: Preparing 530cc55618cd: Preparing 34fa414dfdf6: Preparing a1359dc556dd: Preparing 490c7c373332: Preparing 4b440db36f72: Preparing 39bae602f753: Preparing 4ca95d1e0e98: Waiting 530cc55618cd: Waiting 34fa414dfdf6: Waiting a1359dc556dd: Waiting 490c7c373332: Waiting 4b440db36f72: Waiting 39bae602f753: Waiting d38a498de0f4: Pushed f4bfcb9dfd47: Pushed 762725699365: Pushed 530cc55618cd: Pushed ff711f354eae: Pushed 34fa414dfdf6: Pushed 490c7c373332: Pushed a1359dc556dd: Pushed 39bae602f753: Mounted from kubevirt/virt-controller 63f3fab08e6d: Pushed 4ca95d1e0e98: Pushed 4b440db36f72: Pushed devel: digest: sha256:e2dbca11464b1f7d2dfea4d5efaec72bd5345cb9cabb87937970ed295d4b12c4 size: 2828 The push refers to a repository [localhost:33159/kubevirt/virt-handler] 4ccef5e6ebba: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-launcher 4ccef5e6ebba: Pushed devel: digest: sha256:f455eeb8e39782f6bbf7b201cd54fdeeefc1478443580561e3d5465fae0fa507 size: 741 The push refers to a repository [localhost:33159/kubevirt/virt-api] 0c8a64d7e058: Preparing 53839c3b2a5a: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-handler 53839c3b2a5a: Pushed 0c8a64d7e058: Pushed devel: digest: sha256:935a8f414936500a88dd09ebedf40726111e17e010d6ade341e1a310f7e294ce size: 948 The push refers to a repository [localhost:33159/kubevirt/disks-images-provider] 89bdae0e278d: Preparing 1d8234f31b69: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/virt-api 89bdae0e278d: Pushed 1d8234f31b69: Pushed devel: digest: sha256:dfb743da4fe72e1dec7b0a4a6eb7335773b5b2fc9481954ad5b0aac85dc178ef size: 948 The push refers to a repository [localhost:33159/kubevirt/vm-killer] 151ffba76ca1: Preparing 39bae602f753: Preparing 39bae602f753: Mounted from kubevirt/disks-images-provider 151ffba76ca1: Pushed devel: digest: sha256:3dad538a15f9b4b672bbfa6078a83c59b5e9c58faaf3619c25262e715776d884 size: 740 The push refers to a repository [localhost:33159/kubevirt/registry-disk-v1alpha] 780c7b8dc263: Preparing 9e4c3ba110cf: Preparing 6709b2da72b8: Preparing 780c7b8dc263: Pushed 9e4c3ba110cf: Pushed 6709b2da72b8: Pushed devel: digest: sha256:f497ce77aabb0e1f6384ee0fa0819cc50dc8d3e4b0f22d73431100da8df3d39c size: 948 The push refers to a repository [localhost:33159/kubevirt/cirros-registry-disk-demo] b3cf2dc180be: Preparing 780c7b8dc263: Preparing 9e4c3ba110cf: Preparing 6709b2da72b8: Preparing 780c7b8dc263: Waiting 9e4c3ba110cf: Waiting 6709b2da72b8: Waiting 9e4c3ba110cf: Mounted from kubevirt/registry-disk-v1alpha 780c7b8dc263: Mounted from kubevirt/registry-disk-v1alpha 6709b2da72b8: Mounted from kubevirt/registry-disk-v1alpha b3cf2dc180be: Pushed devel: digest: sha256:bd1ef74fa265586e74b1d5ad22c3df9aef31bfe5a7bcc76e717388ee5826b1f7 size: 1160 The push refers to a repository 
[localhost:33159/kubevirt/fedora-cloud-registry-disk-demo] 8a30c016c91d: Preparing 780c7b8dc263: Preparing 9e4c3ba110cf: Preparing 6709b2da72b8: Preparing 6709b2da72b8: Mounted from kubevirt/cirros-registry-disk-demo 9e4c3ba110cf: Mounted from kubevirt/cirros-registry-disk-demo 780c7b8dc263: Mounted from kubevirt/cirros-registry-disk-demo 8a30c016c91d: Pushed devel: digest: sha256:6f4a0c6dfc6e031c2d2a28a6e7f39f9048e9e934e13112ceadb0fc418b801b21 size: 1161 The push refers to a repository [localhost:33159/kubevirt/alpine-registry-disk-demo] c34a6a0daf84: Preparing 780c7b8dc263: Preparing 9e4c3ba110cf: Preparing 6709b2da72b8: Preparing 780c7b8dc263: Mounted from kubevirt/fedora-cloud-registry-disk-demo 6709b2da72b8: Mounted from kubevirt/fedora-cloud-registry-disk-demo 9e4c3ba110cf: Mounted from kubevirt/fedora-cloud-registry-disk-demo c34a6a0daf84: Pushed devel: digest: sha256:bb929353c6fdd940b7f350201b2c63308cc352853f655686a3c9c644a2a00e12 size: 1160 The push refers to a repository [localhost:33159/kubevirt/subresource-access-test] cad5fe9f82db: Preparing d583c2eb3ac0: Preparing 39bae602f753: Preparing 39bae602f753: Waiting 39bae602f753: Mounted from kubevirt/vm-killer d583c2eb3ac0: Pushed cad5fe9f82db: Pushed devel: digest: sha256:cdd6005072e42ac584a085262ab1e58839bf24fb6294cf56cb79aa031f96d1b5 size: 948 The push refers to a repository [localhost:33159/kubevirt/winrmcli] 3658db2c75ba: Preparing 7a99a4697526: Preparing 8146dcce8c7a: Preparing 39bae602f753: Preparing 8146dcce8c7a: Waiting 39bae602f753: Waiting 3658db2c75ba: Pushed 39bae602f753: Mounted from kubevirt/subresource-access-test 8146dcce8c7a: Pushed 7a99a4697526: Pushed devel: digest: sha256:1d97641faef622e03917c2717efbfb9528d04fe98d27661da41e10e99ac78390 size: 1165 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release1 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git 
describe --always --tags +++ echo v0.7.0-alpha.5-16-g22bf403 ++ KUBEVIRT_VERSION=v0.7.0-alpha.5-16-g22bf403 + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:33159/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Cleaning up ...' Cleaning up ... 
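A note on the values echoed above: hack/config.sh sources its config files in layers, so the provider file overrides the defaults, which is why docker_tag ends up as devel and docker_prefix as localhost:33159/kubevirt. Roughly, as a sketch of the layering recorded in the trace rather than the verbatim script:

    # hack/config.sh, in effect
    source hack/config-default.sh               # docker_prefix=kubevirt, docker_tag=latest, master_ip=192.168.200.2
    source hack/config-k8s-1.10.3.sh
    source hack/config-provider-k8s-1.10.3.sh   # overrides: master_ip=127.0.0.1, docker_tag=devel,
                                                #            docker_prefix=localhost:33159/kubevirt
    test -f hack/config-local.sh && source hack/config-local.sh   # absent in this run
    export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace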
+ cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + read p + grep foregroundDeleteVirtualMachine error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pods -l kubevirt.io No resources found + 
_kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ wc -l Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + 
KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ 
APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release1 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release1 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-alpha.5-16-g22bf403 ++ KUBEVIRT_VERSION=v0.7.0-alpha.5-16-g22bf403 + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:33159/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace + echo 'Deploying ...' Deploying ... 
+ [[ -z k8s-1.10.3-release ]] + [[ k8s-1.10.3-release =~ .*-dev ]] + [[ k8s-1.10.3-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole.rbac.authorization.k8s.io "kubevirt.io:admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:edit" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:view" created serviceaccount "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver-auth-delegator" created rolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created role.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller-cluster-admin" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-privileged-cluster-admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:default" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt.io:default" created service "virt-api" created deployment.extensions "virt-api" created deployment.extensions "virt-controller" created daemonset.extensions "virt-handler" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstances.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancereplicasets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancepresets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachines.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "host-path-disk-alpine" created persistentvolumeclaim "disk-custom" created persistentvolume "host-path-disk-custom" created daemonset.extensions "disks-images-provider" created serviceaccount "kubevirt-testing" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-testing-cluster-admin" created + [[ k8s-1.10.3 =~ os-* ]] + echo Done Done + namespaces=(kube-system default) + [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]] + 
timeout=300
+ sample=30
+ for i in '${namespaces[@]}'
+ current_time=0
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n 'virt-api-7d79764579-bjngd 0/1 ContainerCreating 0 6s
virt-api-7d79764579-nlhqp 0/1 ContainerCreating 0 6s
virt-controller-7d57d96b65-n8vks 0/1 ContainerCreating 0 6s
virt-controller-7d57d96b65-r82ln 0/1 ContainerCreating 0 6s
virt-handler-9xmpn 0/1 ContainerCreating 0 6s
virt-handler-gsdr4 0/1 ContainerCreating 0 6s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ cluster/kubectl.sh get pods -n kube-system --no-headers
+ grep -v Running
disks-images-provider-hzz92        0/1  ContainerCreating  0  2s
disks-images-provider-rhn8n        0/1  ContainerCreating  0  2s
virt-api-7d79764579-bjngd          0/1  ContainerCreating  0  7s
virt-api-7d79764579-nlhqp          0/1  ContainerCreating  0  7s
virt-controller-7d57d96b65-n8vks   0/1  ContainerCreating  0  7s
virt-controller-7d57d96b65-r82ln   0/1  ContainerCreating  0  7s
virt-handler-9xmpn                 0/1  ContainerCreating  0  7s
virt-handler-gsdr4                 0/1  ContainerCreating  0  7s
+ sleep 30
+ current_time=30
+ '[' 30 -gt 300 ']'
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n 'disks-images-provider-hzz92 0/1 ContainerCreating 0 37s' ']'
+ echo 'Waiting for kubevirt pods to enter the Running state ...'
Waiting for kubevirt pods to enter the Running state ...
+ kubectl get pods -n kube-system --no-headers
+ cluster/kubectl.sh get pods -n kube-system --no-headers
+ grep -v Running
+ true
+ sleep 30
+ current_time=60
+ '[' 60 -gt 300 ']'
++ kubectl get pods -n kube-system --no-headers
++ cluster/kubectl.sh get pods -n kube-system --no-headers
++ grep -v Running
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n kube-system
+ cluster/kubectl.sh get pods -n kube-system
NAME                               READY  STATUS   RESTARTS  AGE
disks-images-provider-hzz92        1/1    Running  0         1m
disks-images-provider-rhn8n        1/1    Running  0         1m
etcd-node01                        1/1    Running  0         8m
kube-apiserver-node01              1/1    Running  0         8m
kube-controller-manager-node01     1/1    Running  0         8m
kube-dns-86f4d74b45-9wr5f          3/3    Running  0         9m
kube-flannel-ds-f5s2f              1/1    Running  0         9m
kube-flannel-ds-sgxnb              1/1    Running  0         9m
kube-proxy-qzpqm                   1/1    Running  0         9m
kube-proxy-zbkpb                   1/1    Running  0         9m
kube-scheduler-node01              1/1    Running  0         9m
virt-api-7d79764579-bjngd          1/1    Running  0         1m
virt-api-7d79764579-nlhqp          1/1    Running  0         1m
virt-controller-7d57d96b65-n8vks   1/1    Running  0         1m
virt-controller-7d57d96b65-r82ln   1/1    Running  0         1m
virt-handler-9xmpn                 1/1    Running  0         1m
virt-handler-gsdr4                 1/1    Running  0         1m
+ for i in '${namespaces[@]}'
+ current_time=0
++ grep -v Running
++ kubectl get pods -n default --no-headers
++ cluster/kubectl.sh get pods -n default --no-headers
No resources found.
+ '[' -n '' ']'
+ current_time=0
++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
++ grep false
++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers
+ '[' -n '' ']'
+ kubectl get pods -n default
+ cluster/kubectl.sh get pods -n default
No resources found.
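The polling that produces the output above is a plain timeout loop; reconstructed from the trace (the wrapper indirection through cluster/kubectl.sh is elided, so treat this as a sketch rather than the verbatim script):

    timeout=300; sample=30
    for ns in kube-system default; do
        # phase 1: wait until no pod reports anything other than Running
        current_time=0
        while [ -n "$(kubectl get pods -n "$ns" --no-headers | grep -v Running)" ]; do
            echo 'Waiting for kubevirt pods to enter the Running state ...'
            kubectl get pods -n "$ns" --no-headers | grep -v Running || true
            sleep $sample
            current_time=$((current_time + sample))
            [ "$current_time" -gt "$timeout" ] && exit 1
        done
        # phase 2: wait until every container reports ready=true
        current_time=0
        while [ -n "$(kubectl get pods -n "$ns" '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers | grep false)" ]; do
            sleep $sample
            current_time=$((current_time + sample))
            [ "$current_time" -gt "$timeout" ] && exit 1
        done
        kubectl get pods -n "$ns"
    done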
+ kubectl version
+ cluster/kubectl.sh version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
+ ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml'
+ [[ -d /home/nfs/images/windows2016 ]]
+ FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml'
+ make functest
hack/dockerized "hack/build-func-tests.sh"
sha256:6eacca7072242103a52e09bde728cf8c3c4134c37779287f25cbf1a1b93180b2
go version go1.10 linux/amd64
go version go1.10 linux/amd64
Compiling tests...
compiled tests.test
hack/functests.sh
Running Suite: Tests Suite
==========================
Random Seed: 1530206232
Will run 134 of 134 specs

•STEP: Starting a VirtualMachineInstance
STEP: Waiting until the VirtualMachineInstance will start
------------------------------
• Failure [180.236 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with Disk PVC [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

        Timed out after 90.005s.
        Expected
          : false
        to equal
          : true

        /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027
------------------------------
STEP: Starting a VirtualMachineInstance
STEP: Waiting until the VirtualMachineInstance will start
• Failure [180.238 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with CDRom PVC [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

        Timed out after 90.004s.
        Expected
          : false
        to equal
          : true

        /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027
------------------------------
STEP: Starting and stopping the VirtualMachineInstance number of times
STEP: Starting a VirtualMachineInstance
STEP: Waiting until the VirtualMachineInstance will start
• Failure [180.235 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started and stopped multiple times
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with Disk PVC [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

        Timed out after 90.005s.
        Expected
          : false
        to equal
          : true

        /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027
------------------------------
STEP: Starting and stopping the VirtualMachineInstance number of times
STEP: Starting a VirtualMachineInstance
STEP: Waiting until the VirtualMachineInstance will start
• Failure [180.240 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    with Alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71
      should be successfully started and stopped multiple times
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        with CDRom PVC [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

        Timed out after 90.004s.
        Expected
          : false
        to equal
          : true

        /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027
------------------------------
STEP: Starting a VirtualMachineInstance
STEP: Waiting until the VirtualMachineInstance will start
• Failure [180.238 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With an emptyDisk defined
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:113
      should create a writeable emptyDisk with the right capacity [It]
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:115

      Timed out after 90.004s.
      Expected
        : false
      to equal
        : true

      /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027
------------------------------
STEP: Starting a VirtualMachineInstance
STEP: Waiting until the VirtualMachineInstance will start
• Failure [180.246 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With an emptyDisk defined and a specified serial number
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:163
      should create a writeable emptyDisk with the specified serial number [It]
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:165

      Timed out after 90.004s.
      Expected
        : false
      to equal
        : true

      /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027
------------------------------
STEP: Starting a VirtualMachineInstance
STEP: Waiting until the VirtualMachineInstance will start
• Failure [180.238 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With ephemeral alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205
      should be successfully started [It]
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:207

      Timed out after 90.004s.
      Expected
        : false
      to equal
        : true

      /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027
------------------------------
STEP: Starting the VirtualMachineInstance
STEP: Starting a VirtualMachineInstance
STEP: Waiting until the VirtualMachineInstance will start
• Failure [180.236 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
  Starting a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
    With ephemeral alpine PVC
    /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205
      should not persist data [It]
      /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:218

      Timed out after 90.005s.
Expected
    <bool>: false
to equal
    <bool>: true
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027
------------------------------
STEP: Starting and stopping the VirtualMachineInstance number of times
STEP: Starting a VirtualMachineInstance
STEP: Waiting until the VirtualMachineInstance will start
• Failure [240.277 seconds]
Storage
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46
Starting a VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70
With VirtualMachineInstance with two PVCs
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:266
should start vmi multiple times [It]
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:278
Timed out after 120.004s.
Expected
    <bool>: false
to equal
    <bool>: true
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
should succeed to start a vmi [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1328
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
should succeed to stop a running vmi [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1328
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.003 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
with winrm connection [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150
should have correct UUID
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1328
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.005 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
with winrm connection [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150
should have pod IP
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1328
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
with kubectl command [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
should succeed to start a vmi
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1328
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.004 seconds]
Windows VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57
with kubectl command [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226
should succeed to stop a vmi
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250
Skip Windows tests that requires PVC disk-windows
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1328
------------------------------
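The six skipped Windows specs above all hit the same guard at tests/utils.go:1328: the suite skips them when a PersistentVolumeClaim named disk-windows is not available in the cluster. A rough sketch of that kind of existence check, assuming the context-free client-go method signatures of this era; the client wiring and namespace handling are not taken from the suite:

// pvc_check_sketch.go -- illustrative only; assumes pre-context (Kubernetes
// 1.10-era) client-go signatures.
package tests_sketch

import (
    "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// windowsPVCPresent reports whether the "disk-windows" claim exists in the
// given namespace; a NotFound answer is the "skip the Windows specs" case.
func windowsPVCPresent(client kubernetes.Interface, namespace string) (bool, error) {
    _, err := client.CoreV1().PersistentVolumeClaims(namespace).Get("disk-windows", metav1.GetOptions{})
    if errors.IsNotFound(err) {
        return false, nil
    }
    if err != nil {
        return false, err
    }
    return true, nil
}
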
• Failure in Spec Setup (BeforeEach) [60.027 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:38
VirtualMachineInstance with slirp interface
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:47
should start the virtial machine with slirp interface [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:57
Timed out after 30.005s.
Expected
    <bool>: false
to equal
    <bool>: true
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027
------------------------------
• Failure in Spec Setup (BeforeEach) [60.026 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:38
VirtualMachineInstance with slirp interface
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:47
should return "Hello World" when connecting to localhost on port 80 [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:69
Timed out after 30.004s.
Expected
    <bool>: false
to equal
    <bool>: true
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027
------------------------------
• Failure in Spec Setup (BeforeEach) [60.028 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:38
VirtualMachineInstance with slirp interface
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:47
should reject the connecting to localhost and port different than 80 [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:82
Timed out after 30.005s.
Expected
    <bool>: false
to equal
    <bool>: true
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027
------------------------------
• Failure in Spec Setup (BeforeEach) [60.029 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:38
VirtualMachineInstance with slirp interface
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:47
should be able to communicate with the outside world [BeforeEach]
/root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:93
Timed out after 30.005s.
Expected
    <bool>: false
to equal
    <bool>: true
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027
------------------------------
•••••••••••
------------------------------
• [SLOW TEST:6.610 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
With default kubevirt service accounts
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
should verify permissions are correct for view, edit, and admin
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
given a vmi
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:5.807 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
With default kubevirt service accounts
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
should verify permissions are correct for view, edit, and admin
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
given an vm
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:5.694 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
With default kubevirt service accounts
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
should verify permissions are correct for view, edit, and admin
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
given a vmi preset
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:5.723 seconds]
User Access
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33
With default kubevirt service accounts
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41
should verify permissions are correct for view, edit, and admin
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
given a vmi replica set
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
••
------------------------------
• Failure [300.121 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
should update VirtualMachine once VMIs are up [It]
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:195
Timed out after 300.000s.
Expected
    <bool>: false
to be true
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:201
------------------------------
•
------------------------------
• Failure [300.059 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
should remove owner references on the VirtualMachineInstance if it is orphan deleted [It]
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:217
Timed out after 300.000s.
Expected
    <[]v1.OwnerReference | len:0, cap:0>: nil
not to be empty
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:224
------------------------------
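The vm_test.go:224 failure above is a different matcher from the timeouts: the spec apparently checks that the VirtualMachineInstance carries an owner reference at all, and gomega's emptiness matcher reports the nil []v1.OwnerReference it found instead. A small sketch of that assertion shape, with the real fetch replaced by a placeholder value:

// ownerref_sketch_test.go -- illustrative only; the real spec reads the
// OwnerReferences of the VirtualMachineInstance it created.
package tests_sketch

import (
    "testing"

    . "github.com/onsi/gomega"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func TestOwnerReferencePresent(t *testing.T) {
    g := NewGomegaWithT(t)
    var refs []metav1.OwnerReference // placeholder; nil until a controller sets an owner
    // With a nil slice this fails exactly like the log:
    // "Expected <[]v1.OwnerReference | len:0, cap:0>: nil not to be empty".
    g.Expect(refs).ToNot(BeEmpty())
}
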
STEP: Starting the VirtualMachineInstance
• Failure [300.070 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
should recreate VirtualMachineInstance if it gets deleted [It]
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:245
Timed out after 300.000s.
Expected success, but got an error:
    <*errors.StatusError | 0xc4209b7b00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
            Status: "Failure",
            Message: "virtualmachineinstances.kubevirt.io \"testvmiclcg7\" not found",
            Reason: "NotFound",
            Details: {
                Name: "testvmiclcg7",
                Group: "kubevirt.io",
                Kind: "virtualmachineinstances",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    virtualmachineinstances.kubevirt.io "testvmiclcg7" not found
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:150
------------------------------
STEP: Creating a new VMI
STEP: Waiting for the VMI's VirtualMachineInstance to start
• Failure [120.060 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
should recreate VirtualMachineInstance if the VirtualMachineInstance's pod gets deleted [It]
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:265
Timed out after 120.000s.
Expected success, but got an error:
    <*errors.StatusError | 0xc4205401b0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
            Status: "Failure",
            Message: "virtualmachineinstances.kubevirt.io \"testvmi59p7f\" not found",
            Reason: "NotFound",
            Details: {
                Name: "testvmi59p7f",
                Group: "kubevirt.io",
                Kind: "virtualmachineinstances",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    virtualmachineinstances.kubevirt.io "testvmi59p7f" not found
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:284
------------------------------
STEP: Starting the VirtualMachineInstance
• Failure [300.082 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
should stop VirtualMachineInstance if running set to false [It]
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:325
Timed out after 300.000s.
Expected success, but got an error:
    <*errors.StatusError | 0xc4200dcd80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
            Status: "Failure",
            Message: "virtualmachineinstances.kubevirt.io \"testvmi9kbl2\" not found",
            Reason: "NotFound",
            Details: {
                Name: "testvmi9kbl2",
                Group: "kubevirt.io",
                Kind: "virtualmachineinstances",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    virtualmachineinstances.kubevirt.io "testvmi9kbl2" not found
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:150
------------------------------
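The 404 dumps above, and the two that follow, all have the same shape: the spec polls for a VirtualMachineInstance that was never (re)created, and the API server answers with a StatusError whose Reason is NotFound and whose Code is 404. That is the error apimachinery builds with NewNotFound, and IsNotFound is the usual way to branch on it. A hedged sketch that reconstructs one of the errors from this log:

// notfound_sketch.go -- illustrative only; rebuilds the shape of the errors
// dumped above using the apimachinery helpers.
package tests_sketch

import (
    "fmt"

    "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/apimachinery/pkg/runtime/schema"
)

func notFoundDemo() {
    gr := schema.GroupResource{Group: "kubevirt.io", Resource: "virtualmachineinstances"}
    err := errors.NewNotFound(gr, "testvmiclcg7")
    // Prints the same message seen in the log, followed by "true".
    fmt.Println(err, errors.IsNotFound(err))
}
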
STEP: Doing run: 0
STEP: Starting the VirtualMachineInstance
• Failure [300.072 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
should start and stop VirtualMachineInstance multiple times [It]
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:333
Timed out after 300.000s.
Expected success, but got an error:
    <*errors.StatusError | 0xc42031f5f0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
            Status: "Failure",
            Message: "virtualmachineinstances.kubevirt.io \"testvmi445bq\" not found",
            Reason: "NotFound",
            Details: {
                Name: "testvmi445bq",
                Group: "kubevirt.io",
                Kind: "virtualmachineinstances",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    virtualmachineinstances.kubevirt.io "testvmi445bq" not found
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:150
------------------------------
• Failure [360.066 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
should not update the VirtualMachineInstance spec if Running [It]
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:346
Timed out after 360.000s.
Expected
    <bool>: false
to be true
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:353
------------------------------
STEP: Creating new VMI, not running
STEP: Starting the VirtualMachineInstance
• Failure [300.077 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
should survive guest shutdown, multiple times [It]
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:387
Timed out after 300.000s.
Expected success, but got an error:
    <*errors.StatusError | 0xc42003a6c0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
            Status: "Failure",
            Message: "virtualmachineinstances.kubevirt.io \"testvmic29f2\" not found",
            Reason: "NotFound",
            Details: {
                Name: "testvmic29f2",
                Group: "kubevirt.io",
                Kind: "virtualmachineinstances",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    virtualmachineinstances.kubevirt.io "testvmic29f2" not found
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:150
------------------------------
STEP: getting an VMI
STEP: Invoking virtctl start
STEP: Getting the status of the VMI
• Failure [360.057 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
Using virtctl interface
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435
should start a VirtualMachineInstance once [It]
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:436
Timed out after 360.000s.
Expected
    <bool>: false
to be true
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:453
------------------------------
STEP: getting an VMI
STEP: Invoking virtctl stop
STEP: Ensuring VMI is running
• Failure [360.075 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
A valid VirtualMachine given
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
Using virtctl interface
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435
should stop a VirtualMachineInstance once [It]
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:467
Timed out after 360.000s.
Expected
    <bool>: false
to be true
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:480
------------------------------
STEP: Starting the VirtualMachineInstance
• Failure [360.082 seconds]
RegistryDisk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41
Starting and stopping the same VirtualMachineInstance
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90
with ephemeral registry disk
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91
should success multiple times [It]
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92
Timed out after 180.004s.
Expected
    <bool>: false
to equal
    <bool>: true
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027
------------------------------
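What follows is not another spec failure: the test binary hits its overall -timeout (go test's default is 10m, so the 1h30m limit was presumably raised by the suite's invocation) and panics, dumping every goroutine. The interesting stack is goroutine 6, which is still inside the same waitForVMIStart polling loop and unwinds to the suite entry point at tests_suite_test.go:42. A sketch of what that entry point presumably looks like for a Ginkgo v1 suite named "Tests Suite"; the JUnit reporter and its output path are assumptions based on the --junit-output flag above:

// suite_sketch_test.go -- illustrative only; mirrors the functions named in the
// panic stack (TestTests -> RunSpecsWithDefaultAndCustomReporters).
package tests_sketch

import (
    "testing"

    "github.com/onsi/ginkgo"
    "github.com/onsi/ginkgo/reporters"
    "github.com/onsi/gomega"
)

func TestTests(t *testing.T) {
    // Route gomega failures into ginkgo, then run the "Tests Suite" specs.
    gomega.RegisterFailHandler(ginkgo.Fail)
    junitReporter := reporters.NewJUnitReporter("junit.xml") // output path assumed
    ginkgo.RunSpecsWithDefaultAndCustomReporters(t, "Tests Suite", []ginkgo.Reporter{junitReporter})
}
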
panic: test timed out after 1h30m0s

goroutine 10311 [running]:
testing.(*M).startAlarm.func1()
    /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1240 +0xfc
created by time.goFunc
    /gimme/.gimme/versions/go1.10.linux.amd64/src/time/sleep.go:172 +0x44

goroutine 1 [chan receive, 90 minutes]:
testing.(*T).Run(0xc4203c4d20, 0x124d089, 0x9, 0x12d7348, 0x47fa16)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:825 +0x301
testing.runTests.func1(0xc4203c4c30)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1063 +0x64
testing.tRunner(0xc4203c4c30, 0xc420663df8)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:777 +0xd0
testing.runTests(0xc420844360, 0x1b16e00, 0x1, 0x1, 0x41221d)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:1061 +0x2c4
testing.(*M).Run(0xc420722100, 0x0)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:978 +0x171
main.main()
    _testmain.go:44 +0x151

goroutine 20 [chan receive]:
kubevirt.io/kubevirt/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x1b3db80)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/golang/glog/glog.go:879 +0x8b
created by kubevirt.io/kubevirt/vendor/github.com/golang/glog.init.0
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/golang/glog/glog.go:410 +0x203

goroutine 21 [syscall, 90 minutes]:
os/signal.signal_recv(0x0)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/sigqueue.go:139 +0xa6
os/signal.loop()
    /gimme/.gimme/versions/go1.10.linux.amd64/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.0
    /gimme/.gimme/versions/go1.10.linux.amd64/src/os/signal/signal_unix.go:28 +0x41

goroutine 6 [sleep]:
time.Sleep(0xb0de3b6)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/time.go:102 +0x166
kubevirt.io/kubevirt/vendor/k8s.io/client-go/util/flowcontrol.realClock.Sleep(0xb0de3b6)
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/client-go/util/flowcontrol/throttle.go:66 +0x2b
kubevirt.io/kubevirt/vendor/k8s.io/client-go/util/flowcontrol.(*tokenBucketRateLimiter).Accept(0xc420694220)
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/client-go/util/flowcontrol/throttle.go:91 +0xbd
kubevirt.io/kubevirt/vendor/k8s.io/client-go/rest.(*Request).tryThrottle(0xc42071c600)
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/client-go/rest/request.go:478 +0x1fd
kubevirt.io/kubevirt/vendor/k8s.io/client-go/rest.(*Request).Do(0xc42071c600, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/go/src/kubevirt.io/kubevirt/vendor/k8s.io/client-go/rest/request.go:733 +0x62
kubevirt.io/kubevirt/pkg/kubecli.(*vmis).Get(0xc4209a1b60, 0xc4208198d0, 0xc, 0xc420ac5ec0, 0xc4209a1b60, 0x8, 0x7ff8d9df2000)
    /root/go/src/kubevirt.io/kubevirt/pkg/kubecli/vmi.go:317 +0x125
kubevirt.io/kubevirt/tests.waitForVMIStart.func1(0x0)
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1017 +0xc2
reflect.Value.call(0x1074e00, 0xc420727680, 0x13, 0x12475ad, 0x4, 0xc4207d2f28, 0x0, 0x0, 0x1074e00, 0x1074e00, ...)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/reflect/value.go:447 +0x969
reflect.Value.Call(0x1074e00, 0xc420727680, 0x13, 0xc4207d2f28, 0x0, 0x0, 0x44b21b, 0xc420996e18, 0xc4207d2f60)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/reflect/value.go:308 +0xa4
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/asyncassertion.(*AsyncAssertion).pollActual(0xc420788a40, 0x0, 0x0, 0x0, 0x0)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:71 +0x9f
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/asyncassertion.(*AsyncAssertion).match(0xc420788a40, 0x135c820, 0xc420965500, 0x412801, 0x0, 0x0, 0x0, 0xc420965500)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:141 +0x305
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/asyncassertion.(*AsyncAssertion).Should(0xc420788a40, 0x135c820, 0xc420965500, 0x0, 0x0, 0x0, 0xc420788a40)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:48 +0x62
kubevirt.io/kubevirt/tests.waitForVMIStart(0x1353780, 0xc420524240, 0x3c, 0x0, 0x0, 0x1b5c101)
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1027 +0x668
kubevirt.io/kubevirt/tests.WaitForSuccessfulVMIStartWithTimeout(0x1353780, 0xc420524240, 0x3c, 0x0, 0x0)
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1037 +0x44
kubevirt.io/kubevirt/tests_test.glob..func5.5.1.1()
    /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:120 +0x1ac
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc420529c80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:109 +0x9c
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc420529c80, 0x3, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:63 +0x13e
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc4201791c0, 0x1350780, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:25 +0x7f
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc4206df040, 0x0, 0x1350780, 0xc4200ccf60)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:176 +0x5a6
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc4206df040, 0x1350780, 0xc4200ccf60)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:127 +0xe3
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc420672a00, 0xc4206df040, 0x0)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:198 +0x10d
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc420672a00, 0x12d8001)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:168 +0x32c
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc420672a00, 0xb)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:64 +0xdc
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc4200d8370, 0x7ff8d9d47500, 0xc4203c4d20, 0x124f44f, 0xb, 0xc4208443a0, 0x2, 0x2, 0x136a6a0, 0xc4200ccf60, ...)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x27c
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x13514c0, 0xc4203c4d20, 0x124f44f, 0xb, 0xc420844380, 0x2, 0x2, 0x1)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:218 +0x253
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x13514c0, 0xc4203c4d20, 0x124f44f, 0xb, 0xc42044f0a0, 0x1, 0x1, 0x1)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:206 +0x129
kubevirt.io/kubevirt/tests_test.TestTests(0xc4203c4d20)
    /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:42 +0xaa
testing.tRunner(0xc4203c4d20, 0x12d7348)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
    /gimme/.gimme/versions/go1.10.linux.amd64/src/testing/testing.go:824 +0x2e0

goroutine 7 [chan receive, 90 minutes]:
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).registerForInterrupts(0xc420672a00)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:220 +0xc0
created by kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:59 +0x60

goroutine 25 [select, 90 minutes, locked to thread]:
runtime.gopark(0x12d90d8, 0x0, 0x1249e51, 0x6, 0x18, 0x1)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/proc.go:291 +0x11a
runtime.selectgo(0xc42010d750, 0xc4205ac060)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/select.go:392 +0xe50
runtime.ensureSigM.func1()
    /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/signal_unix.go:549 +0x1f4
runtime.goexit()
    /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/asm_amd64.s:2361 +0x1

goroutine 38 [IO wait]:
internal/poll.runtime_pollWait(0x7ff8d9d96ea0, 0x72, 0xc4205b3850)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc420317e18, 0x72, 0xffffffffffffff00, 0x1352460, 0x1a2e638)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_poll_runtime.go:85 +0x9b
internal/poll.(*pollDesc).waitRead(0xc420317e18, 0xc4207be000, 0x2000, 0x2000)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Read(0xc420317e00, 0xc4207be000, 0x2000, 0x2000, 0x0, 0x0, 0x0)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/internal/poll/fd_unix.go:157 +0x17d
net.(*netFD).Read(0xc420317e00, 0xc4207be000, 0x2000, 0x2000, 0x0, 0x8, 0x1ffb)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc4200d6910, 0xc4207be000, 0x2000, 0x2000, 0x0, 0x0, 0x0)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/net/net.go:176 +0x6a
crypto/tls.(*block).readFromUntil(0xc4207fa030, 0x7ff8d9c72000, 0xc4200d6910, 0x5, 0xc4200d6910, 0x0)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:493 +0x96
crypto/tls.(*Conn).readRecord(0xc42041d880, 0x12d9217, 0xc42041d9a0, 0x20)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:595 +0xe0
crypto/tls.(*Conn).Read(0xc42041d880, 0xc4205f4000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/crypto/tls/conn.go:1156 +0x100
bufio.(*Reader).Read(0xc420528540, 0xc420132118, 0x9, 0x9, 0xc4209a1a98, 0xc420875020, 0xc4205b3d10)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/bufio/bufio.go:216 +0x238
io.ReadAtLeast(0x134f440, 0xc420528540, 0xc420132118, 0x9, 0x9, 0x9, 0xc4205b3ce0, 0xc4205b3ce0, 0x406614)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:309 +0x86
io.ReadFull(0x134f440, 0xc420528540, 0xc420132118, 0x9, 0x9, 0xc4209a1a40, 0xc4205b3d10, 0xc400004c01)
    /gimme/.gimme/versions/go1.10.linux.amd64/src/io/io.go:327 +0x58
kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.readFrameHeader(0xc420132118, 0x9, 0x9, 0x134f440, 0xc420528540, 0x0, 0xc400000000, 0x874b0d, 0xc4205b3fb0)
    /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/frame.go:237 +0x7b
kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc4201320e0, 0xc420629620, 0x0, 0x0, 0x0)
    /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/frame.go:492 +0xa4
kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc4205b3fb0, 0x12d8210, 0xc420113fb0)
    /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:1428 +0x8e
kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc4203d8000)
    /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:1354 +0x76
created by kubevirt.io/kubevirt/vendor/golang.org/x/net/http2.(*Transport).newClientConn
    /root/go/src/kubevirt.io/kubevirt/vendor/golang.org/x/net/http2/transport.go:579 +0x651

make: *** [functest] Error 2
+ make cluster-down
./cluster/down.sh