+ export WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ WORKSPACE=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release
+ [[ k8s-1.10.3-release =~ openshift-.* ]]
+ [[ k8s-1.10.3-release =~ .*-1.9.3-.* ]]
+ export KUBEVIRT_PROVIDER=k8s-1.10.3
+ KUBEVIRT_PROVIDER=k8s-1.10.3
+ export KUBEVIRT_NUM_NODES=2
+ KUBEVIRT_NUM_NODES=2
+ export NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ NFS_WINDOWS_DIR=/home/nfs/images/windows2016
+ export NAMESPACE=kube-system
+ NAMESPACE=kube-system
+ trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
+ make cluster-down
./cluster/down.sh
+ make cluster-up
./cluster/up.sh
Downloading .......
Downloading .......
Downloading .......
2018/07/19 12:34:24 Waiting for host: 192.168.66.101:22
2018/07/19 12:34:27 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/19 12:34:35 Problem with dial: dial tcp 192.168.66.101:22: getsockopt: no route to host. Sleeping 5s
2018/07/19 12:34:40 Connected to tcp://192.168.66.101:22
+ kubeadm init --config /etc/kubernetes/kubeadm.conf
[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node01] and IPs [192.168.66.101]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
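Everything above, from the exports through the kubeadm run, is driven by the job's lifecycle guard: the trap set at the top ensures the ephemeral cluster is torn down no matter how the job exits. As a standalone sketch (commands and values copied from the trace; the surrounding CI script itself is not part of this log):

  export KUBEVIRT_PROVIDER=k8s-1.10.3   # kubevirtci provider to boot
  export KUBEVIRT_NUM_NODES=2           # master (node01) plus one worker (node02)
  # Tear the provider down on exit or interruption, whatever happens later in the job.
  trap '{ make cluster-down; }' EXIT SIGINT SIGTERM SIGSTOP
  make cluster-down   # remove any leftover cluster from a previous run
  make cluster-up     # boot the nodes, wait for SSH, then run kubeadm init as logged above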
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 30.007770 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node node01 as master by adding a label and a taint
[markmaster] Master node01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: abcdef.1234567890123456
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.66.101:6443 --token abcdef.1234567890123456 --discovery-token-ca-cert-hash sha256:865e97753f4be507099b79eb836360248cf7b87cea3c3135b6fd6ee0c768798c

+ kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
+ kubectl --kubeconfig=/etc/kubernetes/admin.conf taint nodes node01 node-role.kubernetes.io/master:NoSchedule-
node "node01" untainted
2018/07/19 12:35:27 Waiting for host: 192.168.66.102:22
2018/07/19 12:35:30 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host. Sleeping 5s
2018/07/19 12:35:38 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: no route to host.
Sleeping 5s
2018/07/19 12:35:43 Problem with dial: dial tcp 192.168.66.102:22: getsockopt: connection refused. Sleeping 5s
2018/07/19 12:35:48 Connected to tcp://192.168.66.102:22
+ kubeadm join --token abcdef.1234567890123456 192.168.66.101:6443 --ignore-preflight-errors=all --discovery-token-unsafe-skip-ca-verification=true
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "192.168.66.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.66.101:6443"
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.66.101:6443"
[discovery] Successfully established connection with API Server "192.168.66.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Sending file modes: C0755 39588992 kubectl
Sending file modes: C0600 5450 admin.conf
Cluster "kubernetes" set.
Cluster "kubernetes" set.
+ set +e
+ kubectl get nodes --no-headers
+ cluster/kubectl.sh get nodes --no-headers
node01    Ready     master    1m    v1.10.3
node02    Ready     <none>    35s   v1.10.3
+ kubectl_rc=0
+ '[' 0 -ne 0 ']'
++ grep NotReady
++ kubectl get nodes --no-headers
++ cluster/kubectl.sh get nodes --no-headers
+ '[' -n '' ']'
+ set -e
+ echo 'Nodes are ready:'
Nodes are ready:
+ kubectl get nodes
+ cluster/kubectl.sh get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    1m        v1.10.3
node02    Ready     <none>    36s       v1.10.3
+ make cluster-sync
./cluster/build.sh
Building ...
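The gate that prints "Nodes are ready:" above reduces to the following check, reconstructed from the xtrace (variable names match the trace; the real script may retry on failure, which is not visible here because the first attempt succeeded):

  set +e
  cluster/kubectl.sh get nodes --no-headers
  kubectl_rc=$?
  # A node that has joined but is not yet schedulable shows up as NotReady.
  not_ready=$(cluster/kubectl.sh get nodes --no-headers | grep NotReady)
  set -e
  if [ "$kubectl_rc" -ne 0 ] || [ -n "$not_ready" ]; then
      echo 'Nodes are not ready yet'
      exit 1
  fi
  echo 'Nodes are ready:'
  cluster/kubectl.sh get nodes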
Untagged: localhost:32928/kubevirt/virt-controller:devel Untagged: localhost:32928/kubevirt/virt-controller@sha256:00d3ce6d2abfa2735682f8826838c81b65b71800afcec0a92f9a23cc64d802e5 Deleted: sha256:df3b7319f7457ce66e35cea49d44b20d671a40c9738336dda72c4f1cb020d3ee Deleted: sha256:1fc58d243e11cb13469567f46e2e9c5c6b608258be0ed04cb34eced3329856db Deleted: sha256:ba603b309ea40a829c6b15d71ce1dc05adc3ec8d3951012b61499a95ef1bbf36 Deleted: sha256:54f48e5b61b44a6f525148e314e4d8a1575525a172e6ad4d6986c8b69b7df311 Untagged: localhost:32928/kubevirt/virt-launcher:devel Untagged: localhost:32928/kubevirt/virt-launcher@sha256:64eb007a7955ff08120429970aa3f1677bd28608d5131f3f4eee5f589c55b471 Deleted: sha256:d97eceb825d9ca6bc396be43026b984f8ab4ce840ad532871ea64ae4c9eec9d6 Deleted: sha256:4085941fbde6ee1cb3e7408ac3b0ad5ef835dbc7f3200d8452f340355cf47ffa Deleted: sha256:3081442aa9c120bde9006d2c5f8739126b81f40a807c0b3ff64a1a6b4f2f6254 Deleted: sha256:5ffc55b4de2918d130fe0569814228d707780b7bf322b6a4688a20f13603f84c Deleted: sha256:68d4bb05f0de37b0eff576ff2042ca13b6f3aa830e8f7ffa91ed9608ea4744dc Deleted: sha256:4ac7f14d4c45ea970e30455d72a4f29d9246193008ff9aa726ef184aba6f5fa4 Deleted: sha256:788c5746bcbb160096cf5826a52da28185e5ff487829619b7b6435ab4ec61afd Deleted: sha256:d27f3907c30f582e236784d35375c40f9aa591ae58d728a50aa393f6e6c0a22f Deleted: sha256:59ffc8bbdbb71abb7156111ca1124562d537d4061bf875f3b7c192cf84a5526d Deleted: sha256:08a1b51f2e203c5223c49f441d8ad6b58c0c30cb38fb9e9cf208e4613587fdc3 Deleted: sha256:df249829363ae7eb4c13bd80b96a77f14ec22bafc151e40b8cc404a7d43d5747 Deleted: sha256:58bb35d32dcae2afb85251613a5d6b21e076f0fbf645f1b2397e287a9978220d Untagged: localhost:32928/kubevirt/virt-handler:devel Untagged: localhost:32928/kubevirt/virt-handler@sha256:923920f34c99c383f3c7dc476fc65946cd5e0833c80201a2b36e54ae25165e93 Deleted: sha256:7a89574e8eb248b93eab4f4c7f47aec07f87cacd690c483f67a11b43c66fd8a2 Deleted: sha256:41736c005a1987cb179802654c394c18d84b576212816325356aebc9f7e103ba Deleted: sha256:a8d484a0bf69f6e825a04ddbff7e233e8f1e606daa38a9a9f1104b309a91dd5a Deleted: sha256:9e96c7eced1fb92cd2e9589c124b89462beaed232642ada2db9e31df8e2e1fe3 Untagged: localhost:32928/kubevirt/virt-api:devel Untagged: localhost:32928/kubevirt/virt-api@sha256:4007c199db0cecfab4bf17caedd987e471c1e07780311c25a7c983fe51b7a238 Deleted: sha256:55ef37115c734ec74d81f403d0383e7f1d9afa9d489f6ddf9edd5f614f4333b2 Deleted: sha256:7c2b75d6437add4abd93f822c8bc42cc4e18423b7935d3d5cf716b08f97b5ba9 Deleted: sha256:082ff7761c71087d2429cb170670748a4da20016c98cfea7cc369d72fe497504 Deleted: sha256:48638466716c4be5a5cc3ab7fee0a0cf644b7087154e5b2b90e75e0ffa0d2291 Untagged: localhost:32928/kubevirt/subresource-access-test:devel Untagged: localhost:32928/kubevirt/subresource-access-test@sha256:83b2821f286ff0f456f8bebbb7af72e4c6540a3e214e6866ad75a0fec00b4232 Deleted: sha256:59b7cf49bbc6ebd20aa8bdcdfcf352efc926ad5f1fc3ef176a3ecd4df6073c7c Deleted: sha256:82be582c76cc5b63c385d8f17186f66a75a0f3e14cd60abc941b6662db7b6a10 Deleted: sha256:8459b34b9cdb9fe8ed5ca1eee6f1dd3d82fb9cdee215b828123a6060b875793f Deleted: sha256:7ad5e0f06c777d3fe8aaf2bba91d7756373dcdae80e3e5cbcf07f148582d6cc8 Untagged: localhost:32928/kubevirt/example-hook-sidecar:devel Untagged: localhost:32928/kubevirt/example-hook-sidecar@sha256:f1e6456aa933bf1140e3827aa0324c6cb5af83a5479682a35ed338832a261263 Deleted: sha256:9aa1082635aaf5799d6f07dfd0b35bbea2f67404f0a2bfc08b5ae501732d7ad2 Deleted: sha256:47c6bbccc429a1f5a6d57ece022402d32c2230fb81264c85c4e422edbc920562 Deleted: 
sha256:c513ce0032ec44c57537793aabd3b4526f362748299a273de024a916ff860c23 Deleted: sha256:dbf18087401b376e347d7b5a09fd04c7bd902903fccb9a70b22bf809af4e7a85 Sending build context to Docker daemon 6.144 kB Step 1/20 : FROM fedora:28 ---> cc510acfcd70 Step 2/20 : ENV LIBVIRT_VERSION 4.2.0 ---> Using cache ---> f05ad942adf5 Step 3/20 : RUN curl --output /etc/yum.repos.d/fedora-virt-preview.repo https://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo ---> Using cache ---> 7a428c100527 Step 4/20 : RUN dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img protobuf-compiler && dnf -y clean all ---> Using cache ---> daf694ef7526 Step 5/20 : ENV GIMME_GO_VERSION 1.10.3 ---> Using cache ---> b414afcd470a Step 6/20 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Running in 714bfbc6a5d5  ---> dcc79830944e Removing intermediate container 714bfbc6a5d5 Step 7/20 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Running in a45868316e46 ---> a18c48ac0c59 Removing intermediate container a45868316e46 Step 8/20 : ENV KUBEVIRT_VERSION ${KUBEVIRT_VERSION} ---> Running in a33d53896803 ---> 42dbdcaf7481 Removing intermediate container a33d53896803 Step 9/20 : ENV KUBEVIRT_PROVIDER ${KUBEVIRT_PROVIDER} ---> Running in 6ab070490c22 ---> 07d999d0f0fc Removing intermediate container 6ab070490c22 Step 10/20 : ENV DOCKER_PREFIX ${DOCKER_PREFIX} ---> Running in f09bca943023 ---> 0b29e38eed47 Removing intermediate container f09bca943023 Step 11/20 : ENV DOCKER_TAG ${DOCKER_TAG} ---> Running in 975bbba0bb23 ---> 1470804531a9 Removing intermediate container 975bbba0bb23 Step 12/20 : ENV IMAGE_PULL_POLICY ${IMAGE_PULL_POLICY} ---> Running in 55d77bdc0aad ---> 5475c0d40362 Removing intermediate container 55d77bdc0aad Step 13/20 : ENV TRAVIS_JOB_ID ${TRAVIS_JOB_ID} ---> Running in e7e1795f21bd ---> d365fc260060 Removing intermediate container e7e1795f21bd Step 14/20 : ENV TRAVIS_PULL_REQUEST ${TRAVIS_PULL_REQUEST} ---> Running in 9b5226d1aa04 ---> b50c45d09ff1 Removing intermediate container 9b5226d1aa04 Step 15/20 : ENV TRAVIS_BRANCH ${TRAVIS_BRANCH} ---> Running in d2baf011a840 ---> 97b1e4682cb7 Removing intermediate container d2baf011a840 Step 16/20 : ADD rsyncd.conf /etc/rsyncd.conf ---> 9c64847c9f12 Removing intermediate container c5e614ef1c16 Step 17/20 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/mattn/goveralls && go get -u github.com/Masterminds/glide && go get golang.org/x/tools/cmd/goimports && git clone https://github.com/mvdan/sh.git $GOPATH/src/mvdan.cc/sh && cd /go/src/mvdan.cc/sh/cmd/shfmt && git checkout v2.5.0 && go get mvdan.cc/sh/cmd/shfmt && go install && go get -u github.com/golang/mock/gomock && go get -u github.com/rmohr/mock/mockgen && go get -u github.com/rmohr/go-swagger-utils/swagger-doc && go get -u github.com/onsi/ginkgo/ginkgo && go get -u -d k8s.io/code-generator/cmd/deepcopy-gen && go get -u -d k8s.io/code-generator/cmd/defaulter-gen && go get -u -d k8s.io/code-generator/cmd/openapi-gen && cd /go/src/k8s.io/code-generator/cmd/deepcopy-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/defaulter-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/openapi-gen && git checkout release-1.9 && go install && go get -u -d github.com/golang/protobuf/protoc-gen-go && cd /go/src/github.com/golang/protobuf/protoc-gen-go && git checkout 
1643683e1b54a9e88ad26d98f81400c8c9d9f4f9 && go install ---> Running in 333732e36650  go version go1.10.3 linux/amd64 Cloning into '/go/src/mvdan.cc/sh'... Note: checking out 'v2.5.0'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example: git checkout -b HEAD is now at 5f66499 all: bump to 2.5.0 Switched to a new branch 'release-1.9' Branch 'release-1.9' set up to track remote branch 'release-1.9' from 'origin'. Already on 'release-1.9' Your branch is up to date with 'origin/release-1.9'. Already on 'release-1.9' Your branch is up to date with 'origin/release-1.9'. Note: checking out '1643683e1b54a9e88ad26d98f81400c8c9d9f4f9'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example: git checkout -b HEAD is now at 1643683 Add godoc badge (#444)  ---> 09782763843b Removing intermediate container 333732e36650 Step 18/20 : RUN pip install j2cli ---> Running in b800eb952891 WARNING: Running pip install with root privileges is generally not a good idea. Try `pip install --user` instead. Collecting j2cli Downloading https://files.pythonhosted.org/packages/6a/fb/c67a5da25bc7f5fd840727ea742748df981ee425350cc33d57ed7e2cc78d/j2cli-0.3.1_0-py2-none-any.whl Collecting jinja2>=2.7.2 (from j2cli) Downloading https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl (126kB) Collecting MarkupSafe>=0.23 (from jinja2>=2.7.2->j2cli) Downloading https://files.pythonhosted.org/packages/4d/de/32d741db316d8fdb7680822dd37001ef7a448255de9699ab4bfcbdf4172b/MarkupSafe-1.0.tar.gz Installing collected packages: MarkupSafe, jinja2, j2cli Running setup.py install for MarkupSafe: started Running setup.py install for MarkupSafe: finished with status 'done' Successfully installed MarkupSafe-1.0 j2cli-0.3.1-0 jinja2-2.10 ---> 54de968a5f4b Removing intermediate container b800eb952891 Step 19/20 : ADD entrypoint.sh /entrypoint.sh ---> cd8c7acd29e2 Removing intermediate container 848caca261f8 Step 20/20 : ENTRYPOINT /entrypoint.sh ---> Running in edd0cb05360c ---> af4a8c32b393 Removing intermediate container edd0cb05360c Successfully built af4a8c32b393 go version go1.10.3 linux/amd64 Waiting for rsyncd to be ready go version go1.10.3 linux/amd64 make[1]: Entering directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt' hack/dockerized "./hack/check.sh && ./hack/build-go.sh install " && ./hack/build-copy-artifacts.sh Sending build context to Docker daemon 6.144 kB Step 1/20 : FROM fedora:28 ---> cc510acfcd70 Step 2/20 : ENV LIBVIRT_VERSION 4.2.0 ---> Using cache ---> f05ad942adf5 Step 3/20 : RUN curl --output /etc/yum.repos.d/fedora-virt-preview.repo https://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo ---> Using cache ---> 7a428c100527 Step 4/20 : RUN dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img 
protobuf-compiler && dnf -y clean all ---> Using cache ---> daf694ef7526 Step 5/20 : ENV GIMME_GO_VERSION 1.10.3 ---> Using cache ---> b414afcd470a Step 6/20 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> dcc79830944e Step 7/20 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> a18c48ac0c59 Step 8/20 : ENV KUBEVIRT_VERSION ${KUBEVIRT_VERSION} ---> Using cache ---> 42dbdcaf7481 Step 9/20 : ENV KUBEVIRT_PROVIDER ${KUBEVIRT_PROVIDER} ---> Using cache ---> 07d999d0f0fc Step 10/20 : ENV DOCKER_PREFIX ${DOCKER_PREFIX} ---> Using cache ---> 0b29e38eed47 Step 11/20 : ENV DOCKER_TAG ${DOCKER_TAG} ---> Using cache ---> 1470804531a9 Step 12/20 : ENV IMAGE_PULL_POLICY ${IMAGE_PULL_POLICY} ---> Using cache ---> 5475c0d40362 Step 13/20 : ENV TRAVIS_JOB_ID ${TRAVIS_JOB_ID} ---> Using cache ---> d365fc260060 Step 14/20 : ENV TRAVIS_PULL_REQUEST ${TRAVIS_PULL_REQUEST} ---> Using cache ---> b50c45d09ff1 Step 15/20 : ENV TRAVIS_BRANCH ${TRAVIS_BRANCH} ---> Using cache ---> 97b1e4682cb7 Step 16/20 : ADD rsyncd.conf /etc/rsyncd.conf ---> Using cache ---> 9c64847c9f12 Step 17/20 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/mattn/goveralls && go get -u github.com/Masterminds/glide && go get golang.org/x/tools/cmd/goimports && git clone https://github.com/mvdan/sh.git $GOPATH/src/mvdan.cc/sh && cd /go/src/mvdan.cc/sh/cmd/shfmt && git checkout v2.5.0 && go get mvdan.cc/sh/cmd/shfmt && go install && go get -u github.com/golang/mock/gomock && go get -u github.com/rmohr/mock/mockgen && go get -u github.com/rmohr/go-swagger-utils/swagger-doc && go get -u github.com/onsi/ginkgo/ginkgo && go get -u -d k8s.io/code-generator/cmd/deepcopy-gen && go get -u -d k8s.io/code-generator/cmd/defaulter-gen && go get -u -d k8s.io/code-generator/cmd/openapi-gen && cd /go/src/k8s.io/code-generator/cmd/deepcopy-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/defaulter-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/openapi-gen && git checkout release-1.9 && go install && go get -u -d github.com/golang/protobuf/protoc-gen-go && cd /go/src/github.com/golang/protobuf/protoc-gen-go && git checkout 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9 && go install ---> Using cache ---> 09782763843b Step 18/20 : RUN pip install j2cli ---> Using cache ---> 54de968a5f4b Step 19/20 : ADD entrypoint.sh /entrypoint.sh ---> Using cache ---> cd8c7acd29e2 Step 20/20 : ENTRYPOINT /entrypoint.sh ---> Using cache ---> af4a8c32b393 Successfully built af4a8c32b393 go version go1.10.3 linux/amd64 go version go1.10.3 linux/amd64 find: '/root/go/src/kubevirt.io/kubevirt/_out/cmd': No such file or directory Compiling tests... 
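The check and compile steps above run inside the builder image that was just assembled (af4a8c32b393): `hack/dockerized` executes its argument in that container, with the source tree synced in via the rsyncd the log waits for. A rough, illustrative equivalent, assuming a plain bind mount for brevity (the actual script uses rsync rather than a mount):

  docker run --rm \
      -v "$PWD":/go/src/kubevirt.io/kubevirt \
      -w /go/src/kubevirt.io/kubevirt \
      af4a8c32b393 \
      /bin/bash -c './hack/check.sh && ./hack/build-go.sh install'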
compiled tests.test hack/build-docker.sh build Sending build context to Docker daemon 38.81 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2405aa62579a Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-controller ---> Using cache ---> 1ac62e99a9e7 Step 4/8 : WORKDIR /home/virt-controller ---> Using cache ---> c7b69424a0c5 Step 5/8 : USER 1001 ---> Using cache ---> e60ed5d8e78a Step 6/8 : COPY virt-controller /usr/bin/virt-controller ---> f4e4e8ce0bb0 Removing intermediate container aba006809665 Step 7/8 : ENTRYPOINT /usr/bin/virt-controller ---> Running in 2e836a04c11c ---> 8b2ab541c665 Removing intermediate container 2e836a04c11c Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-controller" '' ---> Running in 34b27c67c148 ---> 61c141eea2bc Removing intermediate container 34b27c67c148 Successfully built 61c141eea2bc Sending build context to Docker daemon 41.02 MB Step 1/10 : FROM kubevirt/libvirt:4.2.0 ---> 5f0bfe81a3e0 Step 2/10 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 65f548d54a2e Step 3/10 : RUN dnf -y install socat genisoimage util-linux libcgroup-tools ethtool net-tools sudo && dnf -y clean all && test $(id -u qemu) = 107 # make sure that the qemu user really is 107 ---> Using cache ---> 04ae26de19c4 Step 4/10 : COPY virt-launcher /usr/bin/virt-launcher ---> 2cc921f321e7 Removing intermediate container fa922cfaa920 Step 5/10 : COPY kubevirt-sudo /etc/sudoers.d/kubevirt ---> 2868a96b136a Removing intermediate container 31c2bdf11daa Step 6/10 : RUN setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/qemu-system-x86_64 ---> Running in a939d81bab4f  ---> 02d8c5186dfa Removing intermediate container a939d81bab4f Step 7/10 : RUN mkdir -p /usr/share/kubevirt/virt-launcher ---> Running in 74d64c4d4561  ---> 1034c3e5946f Removing intermediate container 74d64c4d4561 Step 8/10 : COPY entrypoint.sh libvirtd.sh sock-connector /usr/share/kubevirt/virt-launcher/ ---> 73bdb2ed0967 Removing intermediate container 00f8917a9814 Step 9/10 : ENTRYPOINT /usr/share/kubevirt/virt-launcher/entrypoint.sh ---> Running in 54531c184c2d ---> f74ffad9d3a9 Removing intermediate container 54531c184c2d Step 10/10 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-launcher" '' ---> Running in bcd43a4fa851 ---> 45b5fcfb9420 Removing intermediate container bcd43a4fa851 Successfully built 45b5fcfb9420 Sending build context to Docker daemon 40.1 MB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2405aa62579a Step 3/5 : COPY virt-handler /usr/bin/virt-handler ---> 8127d6d315f2 Removing intermediate container c1e21ebcb746 Step 4/5 : ENTRYPOINT /usr/bin/virt-handler ---> Running in 9db72b41943c ---> d6b18e1a45b2 Removing intermediate container 9db72b41943c Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-handler" '' ---> Running in 9254732a51d8 ---> 49f5a444c446 Removing intermediate container 9254732a51d8 Successfully built 49f5a444c446 Sending build context to Docker daemon 37.02 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2405aa62579a Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virt-api ---> Using cache ---> 830d77e8a3bb Step 4/8 : WORKDIR /home/virt-api ---> Using cache ---> 7075b0c3cdfd Step 5/8 : USER 1001 ---> Using cache ---> 4e21374fdc1d Step 6/8 : COPY virt-api /usr/bin/virt-api ---> 373598748a2e Removing intermediate 
container 12a321252cac Step 7/8 : ENTRYPOINT /usr/bin/virt-api ---> Running in bcf2d0378ed4 ---> ca7cf1a7da03 Removing intermediate container bcf2d0378ed4 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "virt-api" '' ---> Running in 5651da021f38 ---> 9467242c07e8 Removing intermediate container 5651da021f38 Successfully built 9467242c07e8 Sending build context to Docker daemon 4.096 kB Step 1/7 : FROM fedora:28 ---> cc510acfcd70 Step 2/7 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2405aa62579a Step 3/7 : ENV container docker ---> Using cache ---> 3370e25ee81a Step 4/7 : RUN mkdir -p /images/custom /images/alpine && truncate -s 64M /images/custom/disk.img && curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /images/alpine/disk.img ---> Using cache ---> 3f571283fdaa Step 5/7 : ADD entrypoint.sh / ---> Using cache ---> 2722b024d103 Step 6/7 : CMD /entrypoint.sh ---> Using cache ---> 8458081a089b Step 7/7 : LABEL "disks-images-provider" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Using cache ---> 95c52cb94d0f Successfully built 95c52cb94d0f Sending build context to Docker daemon 2.56 kB Step 1/5 : FROM fedora:28 ---> cc510acfcd70 Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2405aa62579a Step 3/5 : ENV container docker ---> Using cache ---> 3370e25ee81a Step 4/5 : RUN dnf -y install procps-ng nmap-ncat && dnf -y clean all ---> Using cache ---> 006e94a74def Step 5/5 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "vm-killer" '' ---> Using cache ---> b96459304131 Successfully built b96459304131 Sending build context to Docker daemon 5.12 kB Step 1/7 : FROM debian:sid ---> 496290160351 Step 2/7 : MAINTAINER "David Vossel" \ ---> Using cache ---> 081acc82039b Step 3/7 : ENV container docker ---> Using cache ---> 87a43203841c Step 4/7 : RUN apt-get update && apt-get install -y bash curl bzip2 qemu-utils && mkdir -p /disk && rm -rf /var/lib/apt/lists/* ---> Using cache ---> bbc83781e0a9 Step 5/7 : ADD entry-point.sh / ---> Using cache ---> c588d7a778a6 Step 6/7 : CMD /entry-point.sh ---> Using cache ---> e28b44b64988 Step 7/7 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "registry-disk-v1alpha" '' ---> Using cache ---> 15dee9c3f228 Successfully built 15dee9c3f228 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33268/kubevirt/registry-disk-v1alpha:devel ---> 15dee9c3f228 Step 2/4 : MAINTAINER "David Vossel" \ ---> Using cache ---> 59e724975b36 Step 3/4 : RUN curl https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img > /disk/cirros.img ---> Using cache ---> 5aab327c7d42 Step 4/4 : LABEL "cirros-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Using cache ---> 6267f6181ea0 Successfully built 6267f6181ea0 Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM localhost:33268/kubevirt/registry-disk-v1alpha:devel ---> 15dee9c3f228 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 7226abe32103 Step 3/4 : RUN curl -g -L https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2 > /disk/fedora.qcow2 ---> Using cache ---> e77a7d24125c Step 4/4 : LABEL "fedora-cloud-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Using cache ---> 1f65ea7e845f Successfully built 1f65ea7e845f Sending build context to Docker daemon 2.56 kB Step 1/4 : FROM 
localhost:33268/kubevirt/registry-disk-v1alpha:devel ---> 15dee9c3f228 Step 2/4 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 7226abe32103 Step 3/4 : RUN curl http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso > /disk/alpine.iso ---> Using cache ---> 69497b9af146 Step 4/4 : LABEL "alpine-registry-disk-demo" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Using cache ---> 696b2b381ecc Successfully built 696b2b381ecc Sending build context to Docker daemon 34.04 MB Step 1/8 : FROM fedora:28 ---> cc510acfcd70 Step 2/8 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2405aa62579a Step 3/8 : RUN useradd -u 1001 --create-home -s /bin/bash virtctl ---> Using cache ---> 939ec18dc9a4 Step 4/8 : WORKDIR /home/virtctl ---> Using cache ---> 52b6bf037d32 Step 5/8 : USER 1001 ---> Using cache ---> 1e1560e0af32 Step 6/8 : COPY subresource-access-test /subresource-access-test ---> 39cf52cff9dd Removing intermediate container 40c5cf1a4aec Step 7/8 : ENTRYPOINT /subresource-access-test ---> Running in e349c750da35 ---> 35a22b54615c Removing intermediate container e349c750da35 Step 8/8 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "subresource-access-test" '' ---> Running in 2f4e57d3fe85 ---> 5f01f3cb74b1 Removing intermediate container 2f4e57d3fe85 Successfully built 5f01f3cb74b1 Sending build context to Docker daemon 3.072 kB Step 1/9 : FROM fedora:28 ---> cc510acfcd70 Step 2/9 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 2405aa62579a Step 3/9 : ENV container docker ---> Using cache ---> 3370e25ee81a Step 4/9 : RUN dnf -y install make git gcc && dnf -y clean all ---> Using cache ---> 3129352c97b1 Step 5/9 : ENV GIMME_GO_VERSION 1.9.2 ---> Using cache ---> fbcd5a15f974 Step 6/9 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> 6e560dc836a0 Step 7/9 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> 8a916bbc2352 Step 8/9 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/masterzen/winrm-cli ---> Using cache ---> 72d00ac082db Step 9/9 : LABEL "kubevirt-functional-tests-k8s-1.10.3-release0" '' "winrmcli" '' ---> Using cache ---> a78ab99f56bf Successfully built a78ab99f56bf Sending build context to Docker daemon 35.17 MB Step 1/5 : FROM fedora:27 ---> 9110ae7f579f Step 2/5 : MAINTAINER "The KubeVirt Project" ---> Using cache ---> 0ae71e3c9e56 Step 3/5 : COPY example-hook-sidecar /example-hook-sidecar ---> 808458248c60 Removing intermediate container 8bc398a55261 Step 4/5 : ENTRYPOINT /example-hook-sidecar ---> Running in 8356e772f8ab ---> b80aa9ca3699 Removing intermediate container 8356e772f8ab Step 5/5 : LABEL "example-hook-sidecar" '' "kubevirt-functional-tests-k8s-1.10.3-release0" '' ---> Running in 055c3bc25917 ---> d920c443ea34 Removing intermediate container 055c3bc25917 Successfully built d920c443ea34 hack/build-docker.sh push The push refers to a repository [localhost:33268/kubevirt/virt-controller] 01ec8fd903be: Preparing d07058c760ad: Preparing 891e1e4ef82a: Preparing d07058c760ad: Pushed 01ec8fd903be: Pushed 891e1e4ef82a: Pushed devel: digest: sha256:962ee9f57398f471749a240caea3b40efd6790e3303e533d5eb250ae3a840489 size: 949 The push refers to a repository [localhost:33268/kubevirt/virt-launcher] 26284769bd2b: Preparing 98d60edc9797: Preparing 2130969ad87a: Preparing edd730cc037a: Preparing 5be7a0812b20: Preparing 53f12636d41e: Preparing da38cf808aa5: Preparing 
b83399358a92: Preparing 186d8b3e4fd8: Preparing fa6154170bf5: Preparing 5eefb9960a36: Preparing 53f12636d41e: Waiting 5be7a0812b20: Waiting 891e1e4ef82a: Preparing b83399358a92: Waiting 186d8b3e4fd8: Waiting da38cf808aa5: Waiting 5eefb9960a36: Waiting 891e1e4ef82a: Waiting edd730cc037a: Pushed 98d60edc9797: Pushed 26284769bd2b: Pushed da38cf808aa5: Pushed b83399358a92: Pushed 186d8b3e4fd8: Pushed fa6154170bf5: Pushed 891e1e4ef82a: Mounted from kubevirt/virt-controller 2130969ad87a: Pushed 53f12636d41e: Pushed 5be7a0812b20: Pushed 5eefb9960a36: Pushed devel: digest: sha256:3301382be5ed22d0210ae21db348fdc233d842f1f6a601ea25ebc5def3c51a7f size: 2828 The push refers to a repository [localhost:33268/kubevirt/virt-handler] d2aa21b363d2: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-launcher d2aa21b363d2: Pushed devel: digest: sha256:0eb568e8fd52e5d9ed1d0aab29afcadeec8991bf1e30f441648e4d311b18e110 size: 741 The push refers to a repository [localhost:33268/kubevirt/virt-api] dda0ffe76ef3: Preparing 25755ffecaf3: Preparing 891e1e4ef82a: Preparing 25755ffecaf3: Pushed 891e1e4ef82a: Mounted from kubevirt/virt-handler dda0ffe76ef3: Pushed devel: digest: sha256:c14b3102047d4c13dd61248a21a63309c944bbb9bd2ac40c38650cec011ac2e2 size: 948 The push refers to a repository [localhost:33268/kubevirt/disks-images-provider] 5ffe52947a94: Preparing a1bc751fc8a2: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/virt-api 5ffe52947a94: Pushed a1bc751fc8a2: Pushed devel: digest: sha256:50586299b3a16885ba03d9bc2e7507a938cfffd1b7ac3b88d1a2391952d375e3 size: 948 The push refers to a repository [localhost:33268/kubevirt/vm-killer] 3a82b543c335: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/disks-images-provider 3a82b543c335: Pushed devel: digest: sha256:64a5fcfe7ef267c040173e7b886fa57506e89c93a3b104b77c105b456d5bf0b9 size: 740 The push refers to a repository [localhost:33268/kubevirt/registry-disk-v1alpha] cb3d1019d03e: Preparing 626899eeec02: Preparing 132d61a890c5: Preparing cb3d1019d03e: Pushed 626899eeec02: Pushed 132d61a890c5: Pushed devel: digest: sha256:a5bc478c406e6c5d542085f492938b5b148204595a77ad4d6a92403c8189b4e9 size: 948 The push refers to a repository [localhost:33268/kubevirt/cirros-registry-disk-demo] 64f73894f0f5: Preparing cb3d1019d03e: Preparing 626899eeec02: Preparing 132d61a890c5: Preparing 626899eeec02: Mounted from kubevirt/registry-disk-v1alpha 132d61a890c5: Mounted from kubevirt/registry-disk-v1alpha cb3d1019d03e: Mounted from kubevirt/registry-disk-v1alpha 64f73894f0f5: Pushed devel: digest: sha256:b258371daec3009f54b5def2587c2d368694eb3031103d6f050b9017499fdf95 size: 1160 The push refers to a repository [localhost:33268/kubevirt/fedora-cloud-registry-disk-demo] 007095a9be7a: Preparing cb3d1019d03e: Preparing 626899eeec02: Preparing 132d61a890c5: Preparing 132d61a890c5: Mounted from kubevirt/cirros-registry-disk-demo cb3d1019d03e: Mounted from kubevirt/cirros-registry-disk-demo 626899eeec02: Mounted from kubevirt/cirros-registry-disk-demo 007095a9be7a: Pushed devel: digest: sha256:b7bef9e164b128d63cd22e2dea2ff8896203e55affd9c385ce740bd13265fe33 size: 1161 The push refers to a repository [localhost:33268/kubevirt/alpine-registry-disk-demo] caaecc003aa5: Preparing cb3d1019d03e: Preparing 626899eeec02: Preparing 132d61a890c5: Preparing cb3d1019d03e: Mounted from kubevirt/fedora-cloud-registry-disk-demo 626899eeec02: Mounted from kubevirt/fedora-cloud-registry-disk-demo 132d61a890c5: Mounted from 
kubevirt/fedora-cloud-registry-disk-demo caaecc003aa5: Pushed devel: digest: sha256:c0bde2a98e583ab7bd5114474d35e23be5e27032d0d0cc57b19f0c96a79f6a32 size: 1160 The push refers to a repository [localhost:33268/kubevirt/subresource-access-test] f68d1779b006: Preparing 5c35b999e0e4: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/vm-killer 5c35b999e0e4: Pushed f68d1779b006: Pushed devel: digest: sha256:db3bed860169e7e454c7e14c42fe6b906c6afce9ce6753e59d23e075f582f352 size: 948 The push refers to a repository [localhost:33268/kubevirt/winrmcli] d8f4160f7568: Preparing b34315236250: Preparing b4a3c0429828: Preparing 891e1e4ef82a: Preparing 891e1e4ef82a: Mounted from kubevirt/subresource-access-test d8f4160f7568: Pushed b4a3c0429828: Pushed b34315236250: Pushed devel: digest: sha256:a78c39eb2015f025a4fb40135cab58ab9abe8d1527ce6536dc2836f455d93fda size: 1165 The push refers to a repository [localhost:33268/kubevirt/example-hook-sidecar] 9944a1f0bbfd: Preparing 39bae602f753: Preparing 9944a1f0bbfd: Pushed 39bae602f753: Pushed devel: digest: sha256:5f301ea3932dbde440a952df5f164d1618c6bc9205f90a8928b3ca96d046dd53 size: 740 make[1]: Leaving directory `/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt' Done ./cluster/clean.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-75-gbdce0dc ++ KUBEVIRT_VERSION=v0.7.0-75-gbdce0dc + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider 
kubeconfig manifest_docker_prefix namespace image_pull_policy ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system +++ image_pull_policy=IfNotPresent ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:33268/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace image_pull_policy + echo 'Cleaning up ...' Cleaning up ... + cluster/kubectl.sh get vmis --all-namespaces -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,FINALIZERS:.metadata.finalizers --no-headers + grep foregroundDeleteVirtualMachine + read p error: the server doesn't have a resource type "vmis" + _kubectl delete ds -l kubevirt.io -n kube-system --cascade=false --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=libvirt --force --grace-period 0 No resources found + _kubectl delete pods -n kube-system -l=kubevirt.io=virt-handler --force --grace-period 0 No resources found + namespaces=(default ${namespace}) + for i in '${namespaces[@]}' + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete deployment -l kubevirt.io No resources found + _kubectl -n default delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rs -l kubevirt.io No resources found + _kubectl -n default delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete services -l kubevirt.io No resources found + _kubectl -n default delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete apiservices -l kubevirt.io No resources found + _kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io + export 
KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n default delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete secrets -l kubevirt.io No resources found + _kubectl -n default delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pv -l kubevirt.io No resources found + _kubectl -n default delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pvc -l kubevirt.io No resources found + _kubectl -n default delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete ds -l kubevirt.io No resources found + _kubectl -n default delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n default delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete pods -l kubevirt.io No resources found + _kubectl -n default delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n default delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete rolebinding -l kubevirt.io No resources found + _kubectl -n default delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete roles -l kubevirt.io No resources found + _kubectl -n default delete clusterroles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete clusterroles -l kubevirt.io No resources found + _kubectl -n default delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n default delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n default get crd offlinevirtualmachines.kubevirt.io ++ wc -l ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ cluster/k8s-1.10.3/.kubectl -n default get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + for i in '${namespaces[@]}' + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl 
-n kube-system delete deployment -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete deployment -l kubevirt.io No resources found + _kubectl -n kube-system delete rs -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rs -l kubevirt.io No resources found + _kubectl -n kube-system delete services -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete services -l kubevirt.io No resources found + _kubectl -n kube-system delete apiservices -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete apiservices -l kubevirt.io No resources found + _kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete validatingwebhookconfiguration -l kubevirt.io No resources found + _kubectl -n kube-system delete secrets -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete secrets -l kubevirt.io No resources found + _kubectl -n kube-system delete pv -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pv -l kubevirt.io No resources found + _kubectl -n kube-system delete pvc -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pvc -l kubevirt.io No resources found + _kubectl -n kube-system delete ds -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete ds -l kubevirt.io No resources found + _kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete customresourcedefinitions -l kubevirt.io No resources found + _kubectl -n kube-system delete pods -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete pods -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterrolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterrolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete rolebinding -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete rolebinding -l kubevirt.io No resources found + _kubectl -n kube-system delete roles -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete roles -l kubevirt.io No resources found + _kubectl -n kube-system delete clusterroles -l 
kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete clusterroles -l kubevirt.io No resources found + _kubectl -n kube-system delete serviceaccounts -l kubevirt.io + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl -n kube-system delete serviceaccounts -l kubevirt.io No resources found ++ _kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io ++ export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig ++ wc -l ++ cluster/k8s-1.10.3/.kubectl -n kube-system get crd offlinevirtualmachines.kubevirt.io Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "offlinevirtualmachines.kubevirt.io" not found + '[' 0 -gt 0 ']' + sleep 2 + echo Done Done ./cluster/deploy.sh + source hack/common.sh ++++ dirname 'hack/common.sh[0]' +++ cd hack/../ +++ pwd ++ KUBEVIRT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt ++ OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out ++ VENDOR_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/vendor ++ CMD_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/cmd ++ TESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/tests ++ APIDOCS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/apidocs ++ MANIFESTS_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests ++ MANIFEST_TEMPLATES_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/templates/manifests ++ PYTHON_CLIENT_OUT_DIR=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/client-python ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_NUM_NODES=2 ++ '[' -z kubevirt-functional-tests-k8s-1.10.3-release ']' ++ provider_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 ++ job_prefix=kubevirt-functional-tests-k8s-1.10.3-release0 +++ kubevirt_version +++ '[' -n '' ']' +++ '[' -d /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/.git ']' ++++ git describe --always --tags +++ echo v0.7.0-75-gbdce0dc ++ KUBEVIRT_VERSION=v0.7.0-75-gbdce0dc + source cluster/k8s-1.10.3/provider.sh ++ set -e ++ image=k8s-1.10.3@sha256:d6290260e7e6b84419984f12719cf592ccbe327373b8df76aa0481f8ec01d357 ++ source cluster/ephemeral-provider-common.sh +++ set -e +++ _cli='docker run --privileged --net=host --rm -v /var/run/docker.sock:/var/run/docker.sock kubevirtci/gocli@sha256:aa7f295a7908fa333ab5e98ef3af0bfafbabfd3cee2b83f9af47f722e3000f6a' + source hack/config.sh ++ unset binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig manifest_docker_prefix namespace image_pull_policy ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ KUBEVIRT_PROVIDER=k8s-1.10.3 ++ source hack/config-default.sh source hack/config-k8s-1.10.3.sh +++ binaries='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virtctl cmd/fake-qemu-process cmd/virt-api cmd/subresource-access-test cmd/example-hook-sidecar' +++ 
docker_images='cmd/virt-controller cmd/virt-launcher cmd/virt-handler cmd/virt-api images/disks-images-provider images/vm-killer cmd/registry-disk-v1alpha images/cirros-registry-disk-demo images/fedora-cloud-registry-disk-demo images/alpine-registry-disk-demo cmd/subresource-access-test images/winrmcli cmd/example-hook-sidecar' +++ docker_prefix=kubevirt +++ docker_tag=latest +++ master_ip=192.168.200.2 +++ network_provider=flannel +++ namespace=kube-system +++ image_pull_policy=IfNotPresent ++ test -f hack/config-provider-k8s-1.10.3.sh ++ source hack/config-provider-k8s-1.10.3.sh +++ master_ip=127.0.0.1 +++ docker_tag=devel +++ kubeconfig=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubeconfig +++ kubectl=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/cluster/k8s-1.10.3/.kubectl +++ docker_prefix=localhost:33268/kubevirt +++ manifest_docker_prefix=registry:5000/kubevirt ++ test -f hack/config-local.sh ++ export binaries docker_images docker_prefix docker_tag manifest_templates master_ip network_provider kubeconfig namespace image_pull_policy + echo 'Deploying ...' Deploying ... + [[ -z k8s-1.10.3-release ]] + [[ k8s-1.10.3-release =~ .*-dev ]] + [[ k8s-1.10.3-release =~ .*-release ]] + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/demo-content.yaml =~ .*demo.* ]] + continue + for manifest in '${MANIFESTS_OUT_DIR}/release/*' + [[ /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml =~ .*demo.* ]] + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/release/kubevirt.yaml clusterrole.rbac.authorization.k8s.io "kubevirt.io:admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:edit" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:view" created serviceaccount "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-apiserver-auth-delegator" created rolebinding.rbac.authorization.k8s.io "kubevirt-apiserver" created role.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-apiserver" created clusterrole.rbac.authorization.k8s.io "kubevirt-controller" created serviceaccount "kubevirt-controller" created serviceaccount "kubevirt-privileged" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-controller-cluster-admin" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-privileged-cluster-admin" created clusterrole.rbac.authorization.k8s.io "kubevirt.io:default" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt.io:default" created service "virt-api" created deployment.extensions "virt-api" created deployment.extensions "virt-controller" created daemonset.extensions "virt-handler" created customresourcedefinition.apiextensions.k8s.io 
"virtualmachineinstances.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancereplicasets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachineinstancepresets.kubevirt.io" created customresourcedefinition.apiextensions.k8s.io "virtualmachines.kubevirt.io" created + _kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R + export KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + KUBECONFIG=cluster/k8s-1.10.3/.kubeconfig + cluster/k8s-1.10.3/.kubectl create -f /var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/go/src/kubevirt.io/kubevirt/_out/manifests/testing -R persistentvolumeclaim "disk-alpine" created persistentvolume "host-path-disk-alpine" created persistentvolumeclaim "disk-custom" created persistentvolume "host-path-disk-custom" created daemonset.extensions "disks-images-provider" created serviceaccount "kubevirt-testing" created clusterrolebinding.rbac.authorization.k8s.io "kubevirt-testing-cluster-admin" created + [[ k8s-1.10.3 =~ os-* ]] + echo Done Done + namespaces=(kube-system default) + [[ kube-system != \k\u\b\e\-\s\y\s\t\e\m ]] + timeout=300 + sample=30 + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n kube-system --no-headers ++ cluster/kubectl.sh get pods -n kube-system --no-headers ++ grep -v Running + '[' -n 'virt-api-7d79764579-crp5g 0/1 ContainerCreating 0 2s virt-api-7d79764579-zrsl6 0/1 ContainerCreating 0 2s virt-controller-7d57d96b65-dxp9q 0/1 ContainerCreating 0 2s virt-controller-7d57d96b65-gl4wj 0/1 ContainerCreating 0 2s virt-handler-l7drd 0/1 ContainerCreating 0 2s virt-handler-w9r4d 0/1 ContainerCreating 0 2s' ']' + echo 'Waiting for kubevirt pods to enter the Running state ...' Waiting for kubevirt pods to enter the Running state ... 
+ kubectl get pods -n kube-system --no-headers + cluster/kubectl.sh get pods -n kube-system --no-headers + grep -v Running disks-images-provider-b9mbn 0/1 ContainerCreating 0 1s disks-images-provider-x47nx 0/1 ContainerCreating 0 1s virt-api-7d79764579-crp5g 0/1 ContainerCreating 0 3s virt-api-7d79764579-zrsl6 0/1 ContainerCreating 0 3s virt-controller-7d57d96b65-dxp9q 0/1 ContainerCreating 0 3s virt-controller-7d57d96b65-gl4wj 0/1 ContainerCreating 0 3s virt-handler-l7drd 0/1 ContainerCreating 0 3s virt-handler-w9r4d 0/1 ContainerCreating 0 3s + sleep 30 + current_time=30 + '[' 30 -gt 300 ']' ++ kubectl get pods -n kube-system --no-headers ++ grep -v Running ++ cluster/kubectl.sh get pods -n kube-system --no-headers + '[' -n '' ']' + current_time=0 ++ grep false ++ kubectl get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ cluster/kubectl.sh get pods -n kube-system '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n '' ']' + kubectl get pods -n kube-system + cluster/kubectl.sh get pods -n kube-system NAME READY STATUS RESTARTS AGE disks-images-provider-b9mbn 1/1 Running 0 52s disks-images-provider-x47nx 1/1 Running 0 52s etcd-node01 1/1 Running 0 17m kube-apiserver-node01 1/1 Running 0 17m kube-controller-manager-node01 1/1 Running 0 17m kube-dns-86f4d74b45-smns9 3/3 Running 0 18m kube-flannel-ds-2ww89 1/1 Running 0 18m kube-flannel-ds-hlrvq 1/1 Running 0 18m kube-proxy-629vk 1/1 Running 0 18m kube-proxy-pm6gp 1/1 Running 0 18m kube-scheduler-node01 1/1 Running 0 17m virt-api-7d79764579-crp5g 1/1 Running 0 54s virt-api-7d79764579-zrsl6 1/1 Running 0 54s virt-controller-7d57d96b65-dxp9q 1/1 Running 0 54s virt-controller-7d57d96b65-gl4wj 1/1 Running 0 54s virt-handler-l7drd 1/1 Running 0 54s virt-handler-w9r4d 1/1 Running 0 54s + for i in '${namespaces[@]}' + current_time=0 ++ kubectl get pods -n default --no-headers ++ cluster/kubectl.sh get pods -n default --no-headers ++ grep -v Running No resources found. + '[' -n '' ']' + current_time=0 ++ kubectl get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers ++ grep false ++ cluster/kubectl.sh get pods -n default '-ocustom-columns=status:status.containerStatuses[*].ready' --no-headers + '[' -n '' ']' + kubectl get pods -n default + cluster/kubectl.sh get pods -n default No resources found. 
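The polling traced above is a small readiness loop: for each namespace it re-checks until kubectl reports no pod outside Running and no container with ready=false, sampling every 30 seconds with a 300-second timeout. A condensed sketch of that loop, reconstructed from the trace (kubectl here stands in for the repo's cluster/kubectl.sh wrapper; the two checks are folded into one condition for brevity):

# Condensed sketch of the readiness wait traced above (30s sample, 300s timeout).
for ns in kube-system default; do
  current_time=0
  while [ -n "$(kubectl get pods -n "$ns" --no-headers | grep -v Running)" ] ||
        [ -n "$(kubectl get pods -n "$ns" -ocustom-columns='status:status.containerStatuses[*].ready' --no-headers | grep false)" ]; do
    echo "Waiting for kubevirt pods to enter the Running state ..."
    sleep 30
    current_time=$((current_time + 30))
    if [ "$current_time" -gt 300 ]; then
      echo "timed out waiting for pods in namespace $ns"
      exit 1
    fi
  done
  kubectl get pods -n "$ns"
done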
+ kubectl version + cluster/kubectl.sh version Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} + ginko_params='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml' + [[ -d /home/nfs/images/windows2016 ]] + [[ k8s-1.10.3-release =~ windows.* ]] + FUNC_TEST_ARGS='--ginkgo.noColor --junit-output=/var/lib/swarm/workspace/kubevirt-functional-tests-k8s-1.10.3-release/junit.xml' + make functest hack/dockerized "hack/build-func-tests.sh" Sending build context to Docker daemon 6.144 kB Step 1/20 : FROM fedora:28 ---> cc510acfcd70 Step 2/20 : ENV LIBVIRT_VERSION 4.2.0 ---> Using cache ---> f05ad942adf5 Step 3/20 : RUN curl --output /etc/yum.repos.d/fedora-virt-preview.repo https://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo ---> Using cache ---> 7a428c100527 Step 4/20 : RUN dnf -y install libvirt-devel-${LIBVIRT_VERSION} make git mercurial sudo gcc findutils gradle rsync-daemon rsync qemu-img protobuf-compiler && dnf -y clean all ---> Using cache ---> daf694ef7526 Step 5/20 : ENV GIMME_GO_VERSION 1.10.3 ---> Using cache ---> b414afcd470a Step 6/20 : RUN mkdir -p /gimme && curl -sL https://raw.githubusercontent.com/travis-ci/gimme/master/gimme | HOME=/gimme bash >> /etc/profile.d/gimme.sh ---> Using cache ---> dcc79830944e Step 7/20 : ENV GOPATH "/go" GOBIN "/usr/bin" ---> Using cache ---> a18c48ac0c59 Step 8/20 : ENV KUBEVIRT_VERSION ${KUBEVIRT_VERSION} ---> Using cache ---> 42dbdcaf7481 Step 9/20 : ENV KUBEVIRT_PROVIDER ${KUBEVIRT_PROVIDER} ---> Using cache ---> 07d999d0f0fc Step 10/20 : ENV DOCKER_PREFIX ${DOCKER_PREFIX} ---> Using cache ---> 0b29e38eed47 Step 11/20 : ENV DOCKER_TAG ${DOCKER_TAG} ---> Using cache ---> 1470804531a9 Step 12/20 : ENV IMAGE_PULL_POLICY ${IMAGE_PULL_POLICY} ---> Using cache ---> 5475c0d40362 Step 13/20 : ENV TRAVIS_JOB_ID ${TRAVIS_JOB_ID} ---> Using cache ---> d365fc260060 Step 14/20 : ENV TRAVIS_PULL_REQUEST ${TRAVIS_PULL_REQUEST} ---> Using cache ---> b50c45d09ff1 Step 15/20 : ENV TRAVIS_BRANCH ${TRAVIS_BRANCH} ---> Using cache ---> 97b1e4682cb7 Step 16/20 : ADD rsyncd.conf /etc/rsyncd.conf ---> Using cache ---> 9c64847c9f12 Step 17/20 : RUN mkdir -p /go && source /etc/profile.d/gimme.sh && go get github.com/mattn/goveralls && go get -u github.com/Masterminds/glide && go get golang.org/x/tools/cmd/goimports && git clone https://github.com/mvdan/sh.git $GOPATH/src/mvdan.cc/sh && cd /go/src/mvdan.cc/sh/cmd/shfmt && git checkout v2.5.0 && go get mvdan.cc/sh/cmd/shfmt && go install && go get -u github.com/golang/mock/gomock && go get -u github.com/rmohr/mock/mockgen && go get -u github.com/rmohr/go-swagger-utils/swagger-doc && go get -u github.com/onsi/ginkgo/ginkgo && go get -u -d k8s.io/code-generator/cmd/deepcopy-gen && go get -u -d k8s.io/code-generator/cmd/defaulter-gen && go get -u -d k8s.io/code-generator/cmd/openapi-gen && cd /go/src/k8s.io/code-generator/cmd/deepcopy-gen && git checkout release-1.9 && go install && cd /go/src/k8s.io/code-generator/cmd/defaulter-gen && git checkout release-1.9 && go install && cd 
/go/src/k8s.io/code-generator/cmd/openapi-gen && git checkout release-1.9 && go install && go get -u -d github.com/golang/protobuf/protoc-gen-go && cd /go/src/github.com/golang/protobuf/protoc-gen-go && git checkout 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9 && go install ---> Using cache ---> 09782763843b Step 18/20 : RUN pip install j2cli ---> Using cache ---> 54de968a5f4b Step 19/20 : ADD entrypoint.sh /entrypoint.sh ---> Using cache ---> cd8c7acd29e2 Step 20/20 : ENTRYPOINT /entrypoint.sh ---> Using cache ---> af4a8c32b393 Successfully built af4a8c32b393 go version go1.10.3 linux/amd64 go version go1.10.3 linux/amd64 Compiling tests... compiled tests.test hack/functests.sh Running Suite: Tests Suite ========================== Random Seed: 1532005023 Will run 141 of 141 specs • [SLOW TEST:102.628 seconds] Slirp /root/go/src/kubevirt.io/kubevirt/tests/vmi_slirp_interface_test.go:39 should be able to /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 VirtualMachineInstance with slirp interface /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •Service cluster-ip-vm successfully exposed for virtualmachineinstance testvminw7gk ------------------------------ • [SLOW TEST:51.110 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:68 Should expose a Cluster IP service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:71 ------------------------------ Service node-port-vm successfully exposed for virtualmachineinstance testvminw7gk • [SLOW TEST:8.333 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:61 Expose NodePort service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:98 Should expose a NodePort service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:103 ------------------------------ Service cluster-ip-udp-vm successfully exposed for virtualmachineinstance testvmiz2z7q • [SLOW TEST:49.295 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:140 Expose ClusterIP UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:147 Should expose a ClusterIP service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:151 ------------------------------ Service node-port-udp-vm successfully exposed for virtualmachineinstance testvmiz2z7q • [SLOW TEST:9.463 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose UDP service on a VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:140 Expose NodePort UDP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:179 Should expose a NodePort service on a VM and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:184 ------------------------------ Service cluster-ip-vmrs successfully exposed for vmirs replicasetdg5q4 • [SLOW TEST:65.963 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on a VM replica set /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:227 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:260 Should create a 
ClusterIP service on VMRS and connect to it /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:264 ------------------------------ Service cluster-ip-ovm successfully exposed for virtualmachine testvmij7fzr VM testvmij7fzr was scheduled to start • [SLOW TEST:53.061 seconds] Expose /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:53 Expose service on an Offline VM /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:292 Expose ClusterIP service /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:336 Connect to ClusterIP services that was set when VM was offline /root/go/src/kubevirt.io/kubevirt/tests/expose_test.go:337 ------------------------------ • [SLOW TEST:39.511 seconds] LeaderElection /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:43 Start a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:53 when the controller pod is not running /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:54 should success /root/go/src/kubevirt.io/kubevirt/tests/controller_leader_election_test.go:55 ------------------------------ • [SLOW TEST:33.517 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:33.115 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with CDRom PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:130.231 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with Disk PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:120.668 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 with Alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:71 should be successfully started and stopped multiple times /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 with CDRom PVC /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:50.073 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With an emptyDisk defined 
/root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:113 should create a writeable emptyDisk with the right capacity /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:115 ------------------------------ • [SLOW TEST:51.909 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With an emptyDisk defined and a specified serial number /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:163 should create a writeable emptyDisk with the specified serial number /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:165 ------------------------------ • [SLOW TEST:32.511 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205 should be successfully started /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:207 ------------------------------ • [SLOW TEST:80.188 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With ephemeral alpine PVC /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:205 should not persist data /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:218 ------------------------------ • [SLOW TEST:123.629 seconds] Storage /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:46 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:70 With VirtualMachineInstance with two PVCs /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:266 should start vmi multiple times /root/go/src/kubevirt.io/kubevirt/tests/storage_test.go:278 ------------------------------ • ------------------------------ • [SLOW TEST:6.908 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should scale /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 to five, to six and then to zero replicas /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ •• ------------------------------ • [SLOW TEST:20.863 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should update readyReplicas once VMIs are up /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:157 ------------------------------ • [SLOW TEST:8.612 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should remove VMIs once it is marked for deletion /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:169 ------------------------------ • ------------------------------ • [SLOW TEST:5.547 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should not scale when paused and scale when resume /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:223 ------------------------------ • [SLOW TEST:5.651 seconds] VirtualMachineInstanceReplicaSet /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:46 should remove the finished VM /root/go/src/kubevirt.io/kubevirt/tests/replicaset_test.go:279 ------------------------------ • ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.014 seconds] Windows VirtualMachineInstance 
/root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to start a vmi [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:133 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.012 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 should succeed to stop a running vmi [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:139 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.018 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have correct UUID /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:192 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with winrm connection [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:150 should have pod IP /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:208 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.017 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to start a vmi /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:242 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ S [SKIPPING] in Spec Setup (BeforeEach) [0.014 seconds] Windows VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:57 with kubectl command [BeforeEach] /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:226 should succeed to stop a vmi /root/go/src/kubevirt.io/kubevirt/tests/windows_test.go:250 Skip Windows tests that requires PVC disk-windows /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1342 ------------------------------ • [SLOW TEST:17.224 seconds] VNC /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:54 with VNC connection /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:62 should allow accessing the VNC device /root/go/src/kubevirt.io/kubevirt/tests/vnc_test.go:64 ------------------------------ •• ------------------------------ • [SLOW TEST:47.623 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81 should have cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:82 ------------------------------ • [SLOW TEST:107.714 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance 
/root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userDataBase64 source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:81 with injected ssh-key /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:92 should have ssh-key under authorized keys /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:93 ------------------------------ • [SLOW TEST:60.630 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 with cloudInitNoCloud userData source /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:118 should process provided cloud-init data /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:119 ------------------------------ • [SLOW TEST:51.273 seconds] CloudInit UserData /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:46 A new VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:80 should take user-data from k8s secret /root/go/src/kubevirt.io/kubevirt/tests/vmi_userdata_test.go:162 ------------------------------ Pod name: disks-images-provider-b9mbn Pod phase: Running copy all images to host mount directory Pod name: disks-images-provider-x47nx Pod phase: Running copy all images to host mount directory Pod name: virt-api-7d79764579-crp5g Pod phase: Running 2018/07/19 13:21:34 http: TLS handshake error from 10.244.0.1:53334: EOF 2018/07/19 13:21:44 http: TLS handshake error from 10.244.0.1:53358: EOF level=info timestamp=2018-07-19T13:21:49.858945Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/19 13:21:54 http: TLS handshake error from 10.244.0.1:53382: EOF 2018/07/19 13:22:04 http: TLS handshake error from 10.244.0.1:53406: EOF 2018/07/19 13:22:14 http: TLS handshake error from 10.244.0.1:53430: EOF level=info timestamp=2018-07-19T13:22:19.911919Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/19 13:22:24 http: TLS handshake error from 10.244.0.1:53454: EOF 2018/07/19 13:22:34 http: TLS handshake error from 10.244.0.1:53478: EOF 2018/07/19 13:22:44 http: TLS handshake error from 10.244.0.1:53502: EOF level=info timestamp=2018-07-19T13:22:49.898595Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 2018/07/19 13:22:54 http: TLS handshake error from 10.244.0.1:53526: EOF 2018/07/19 13:23:04 http: TLS handshake error from 10.244.0.1:53550: EOF 2018/07/19 13:23:14 http: TLS handshake error from 10.244.0.1:53574: EOF level=info timestamp=2018-07-19T13:23:19.849475Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.1 username=- method=GET url=/ proto=HTTP/2.0 statusCode=404 contentLength=19 Pod name: virt-api-7d79764579-zrsl6 Pod phase: Running level=info timestamp=2018-07-19T13:21:52.268984Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-19T13:21:52.807704Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/19 13:21:56 http: TLS handshake error from 10.244.1.1:60368: EOF 
2018/07/19 13:22:06 http: TLS handshake error from 10.244.1.1:60374: EOF 2018/07/19 13:22:16 http: TLS handshake error from 10.244.1.1:60380: EOF level=info timestamp=2018-07-19T13:22:22.390350Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-19T13:22:22.924387Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/19 13:22:26 http: TLS handshake error from 10.244.1.1:60386: EOF 2018/07/19 13:22:36 http: TLS handshake error from 10.244.1.1:60392: EOF 2018/07/19 13:22:46 http: TLS handshake error from 10.244.1.1:60398: EOF level=info timestamp=2018-07-19T13:22:52.510429Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 level=info timestamp=2018-07-19T13:22:53.045286Z pos=filter.go:46 component=virt-api remoteAddress=10.244.0.0 username=- method=GET url=/apis/subresources.kubevirt.io/v1alpha2 proto=HTTP/2.0 statusCode=200 contentLength=136 2018/07/19 13:22:56 http: TLS handshake error from 10.244.1.1:60404: EOF 2018/07/19 13:23:06 http: TLS handshake error from 10.244.1.1:60410: EOF 2018/07/19 13:23:16 http: TLS handshake error from 10.244.1.1:60416: EOF Pod name: virt-controller-7d57d96b65-6rmx2 Pod phase: Running level=info timestamp=2018-07-19T13:02:50.571758Z pos=application.go:174 component=virt-controller service=http action=listening interface=0.0.0.0 port=8182 Pod name: virt-controller-7d57d96b65-gl4wj Pod phase: Running level=info timestamp=2018-07-19T13:15:18.751499Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmi9tmb9\": StorageError: invalid object, Code: 4, Key: /registry/kubevirt.io/virtualmachineinstances/kubevirt-test-default/testvmi9tmb9, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: c8415e90-8b55-11e8-8a24-525500d15501, UID in object meta: " msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmi9tmb9" level=info timestamp=2018-07-19T13:15:18.957559Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmih99f8 kind= uid=c887b8bc-8b55-11e8-8a24-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T13:15:18.957708Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmih99f8 kind= uid=c887b8bc-8b55-11e8-8a24-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T13:15:36.533664Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmic446n kind= uid=d3020bdb-8b55-11e8-8a24-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T13:15:36.536475Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmic446n kind= uid=d3020bdb-8b55-11e8-8a24-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T13:15:36.613507Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmic446n\": the object has been modified; please apply your 
changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmic446n" level=info timestamp=2018-07-19T13:16:24.079688Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmipl77h kind= uid=ef56d8d1-8b55-11e8-8a24-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T13:16:24.082856Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmipl77h kind= uid=ef56d8d1-8b55-11e8-8a24-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T13:18:11.842541Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikvwg7 kind= uid=2f8cfd0a-8b56-11e8-8a24-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T13:18:11.845798Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmikvwg7 kind= uid=2f8cfd0a-8b56-11e8-8a24-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T13:19:12.534318Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib96w7 kind= uid=53bcda68-8b56-11e8-8a24-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T13:19:12.537336Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmib96w7 kind= uid=53bcda68-8b56-11e8-8a24-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T13:20:03.758935Z pos=preset.go:139 component=virt-controller service=http namespace=kubevirt-test-default name=testvmijc48g kind= uid=72463774-8b56-11e8-8a24-525500d15501 msg="Initializing VirtualMachineInstance" level=info timestamp=2018-07-19T13:20:03.762097Z pos=preset.go:165 component=virt-controller service=http namespace=kubevirt-test-default name=testvmijc48g kind= uid=72463774-8b56-11e8-8a24-525500d15501 msg="Marking VirtualMachineInstance as initialized" level=info timestamp=2018-07-19T13:20:04.012088Z pos=vmi.go:157 component=virt-controller service=http reason="Operation cannot be fulfilled on virtualmachineinstances.kubevirt.io \"testvmijc48g\": the object has been modified; please apply your changes to the latest version and try again" msg="reenqueuing VirtualMachineInstance kubevirt-test-default/testvmijc48g" Pod name: virt-handler-l7drd Pod phase: Running level=info timestamp=2018-07-19T13:14:52.559142Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmint4r4v5dmr kind= uid=ae659c40-8b55-11e8-8a24-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T13:14:52.559730Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmint4r4v5dmr kind= uid=ae659c40-8b55-11e8-8a24-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-19T13:14:52.568135Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmint4r4v5dmr kind= uid=ae659c40-8b55-11e8-8a24-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T13:14:55.456801Z pos=vm.go:342 component=virt-handler namespace=kubevirt-test-default name=testvmint4r4v5dmr kind= uid=ae659c40-8b55-11e8-8a24-525500d15501 msg="Shutting down domain for VirtualMachineInstance with deletion timestamp." 
level=info timestamp=2018-07-19T13:14:55.459157Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmint4r4v5dmr kind= uid=ae659c40-8b55-11e8-8a24-525500d15501 msg="Processing shutdown." level=info timestamp=2018-07-19T13:14:55.460724Z pos=vm.go:540 component=virt-handler namespace=kubevirt-test-default name=testvmint4r4v5dmr kind= uid=ae659c40-8b55-11e8-8a24-525500d15501 msg="grace period expired, killing deleted VirtualMachineInstance testvmint4r4v5dmr" level=info timestamp=2018-07-19T13:14:55.676813Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmint4r4v5dmr kind= uid=ae659c40-8b55-11e8-8a24-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T13:14:55.676941Z pos=vm.go:330 component=virt-handler namespace=kubevirt-test-default name=testvmint4r4v5dmr kind=VirtualMachineInstance uid= msg="Shutting down domain for deleted VirtualMachineInstance object." level=info timestamp=2018-07-19T13:14:55.677019Z pos=vm.go:383 component=virt-handler namespace=kubevirt-test-default name=testvmint4r4v5dmr kind=VirtualMachineInstance uid= msg="Processing shutdown." level=info timestamp=2018-07-19T13:14:55.678457Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmint4r4v5dmr kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T13:14:55.679181Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-19T13:14:55.681216Z pos=vm.go:678 component=virt-handler namespace=kubevirt-test-default name=testvmint4r4v5dmr kind=Domain uid= msg="Domain deleted" level=info timestamp=2018-07-19T13:14:55.681373Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmint4r4v5dmr kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-19T13:14:55.681449Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmint4r4v5dmr kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T13:14:55.682261Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" Pod name: virt-handler-w9r4d Pod phase: Running level=info timestamp=2018-07-19T13:20:03.749856Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmib96w7 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T13:20:03.750486Z pos=vm.go:386 component=virt-handler namespace=kubevirt-test-default name=testvmib96w7 kind=VirtualMachineInstance uid= msg="Processing local ephemeral data cleanup for shutdown domain." level=info timestamp=2018-07-19T13:20:03.750703Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmib96w7 kind=VirtualMachineInstance uid= msg="Synchronization loop succeeded." 
level=info timestamp=2018-07-19T13:20:03.756804Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type DELETED" level=info timestamp=2018-07-19T13:20:18.777412Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmijc48g kind= uid=72463774-8b56-11e8-8a24-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-19T13:20:19.993489Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type ADDED" level=info timestamp=2018-07-19T13:20:19.993913Z pos=vm.go:657 component=virt-handler namespace=kubevirt-test-default name=testvmijc48g kind=Domain uid=72463774-8b56-11e8-8a24-525500d15501 msg="Domain is in state Paused reason Unknown" level=info timestamp=2018-07-19T13:20:20.355709Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-19T13:20:20.356237Z pos=vm.go:688 component=virt-handler namespace=kubevirt-test-default name=testvmijc48g kind=Domain uid=72463774-8b56-11e8-8a24-525500d15501 msg="Domain is in state Running reason Unknown" level=info timestamp=2018-07-19T13:20:20.530409Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmijc48g kind= uid=72463774-8b56-11e8-8a24-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T13:20:20.530548Z pos=vm.go:392 component=virt-handler namespace=kubevirt-test-default name=testvmijc48g kind= uid=72463774-8b56-11e8-8a24-525500d15501 msg="No update processing required" level=info timestamp=2018-07-19T13:20:20.533276Z pos=server.go:75 component=virt-handler msg="Received Domain Event of type MODIFIED" level=info timestamp=2018-07-19T13:20:20.559315Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmijc48g kind= uid=72463774-8b56-11e8-8a24-525500d15501 msg="Synchronization loop succeeded." level=info timestamp=2018-07-19T13:20:20.559465Z pos=vm.go:389 component=virt-handler namespace=kubevirt-test-default name=testvmijc48g kind= uid=72463774-8b56-11e8-8a24-525500d15501 msg="Processing vmi update" level=info timestamp=2018-07-19T13:20:20.564265Z pos=vm.go:413 component=virt-handler namespace=kubevirt-test-default name=testvmijc48g kind= uid=72463774-8b56-11e8-8a24-525500d15501 msg="Synchronization loop succeeded." 
Pod name: virt-launcher-testvmijc48g-p9xhh Pod phase: Running level=info timestamp=2018-07-19T13:20:19.117769Z pos=client.go:136 component=virt-launcher msg="Libvirt event 0 with reason 0 received" level=info timestamp=2018-07-19T13:20:19.976640Z pos=client.go:119 component=virt-launcher msg="domain status: 3:11" level=info timestamp=2018-07-19T13:20:19.995063Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-19T13:20:20.030218Z pos=virt-launcher.go:215 component=virt-launcher msg="Detected domain with UUID 553d0bc8-a0a9-4263-b908-15226703803d" level=info timestamp=2018-07-19T13:20:20.030593Z pos=monitor.go:253 component=virt-launcher msg="Monitoring loop: rate 1s start timeout 5m0s" level=info timestamp=2018-07-19T13:20:20.283113Z pos=client.go:136 component=virt-launcher msg="Libvirt event 4 with reason 0 received" level=info timestamp=2018-07-19T13:20:20.349153Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-19T13:20:20.356832Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-19T13:20:20.356977Z pos=client.go:136 component=virt-launcher msg="Libvirt event 2 with reason 0 received" level=info timestamp=2018-07-19T13:20:20.516393Z pos=manager.go:188 component=virt-launcher namespace=kubevirt-test-default name=testvmijc48g kind= uid=72463774-8b56-11e8-8a24-525500d15501 msg="Domain started." level=info timestamp=2018-07-19T13:20:20.522704Z pos=client.go:119 component=virt-launcher msg="domain status: 1:1" level=info timestamp=2018-07-19T13:20:20.524152Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmijc48g kind= uid=72463774-8b56-11e8-8a24-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-19T13:20:20.534273Z pos=client.go:145 component=virt-launcher msg="processed event" level=info timestamp=2018-07-19T13:20:20.563010Z pos=server.go:74 component=virt-launcher namespace=kubevirt-test-default name=testvmijc48g kind= uid=72463774-8b56-11e8-8a24-525500d15501 msg="Synced vmi" level=info timestamp=2018-07-19T13:20:21.035056Z pos=monitor.go:222 component=virt-launcher msg="Found PID for 553d0bc8-a0a9-4263-b908-15226703803d: 179" • Failure [197.928 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with 3 CPU cores /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:56 should report 3 cpu cores under guest OS [It] /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:62 Expected error: : 180000000000 expect: timer expired after 180 seconds not to have occurred /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:79 ------------------------------ STEP: Starting a VirtualMachineInstance level=info timestamp=2018-07-19T13:20:04.203120Z pos=utils.go:243 component=tests msg="Created virtual machine pod virt-launcher-testvmijc48g-p9xhh" level=info timestamp=2018-07-19T13:20:18.892713Z pos=utils.go:243 component=tests msg="Pod owner ship transferred to the node virt-launcher-testvmijc48g-p9xhh" level=info timestamp=2018-07-19T13:20:20.683244Z pos=utils.go:243 component=tests msg="VirtualMachineInstance defined." level=info timestamp=2018-07-19T13:20:20.699122Z pos=utils.go:243 component=tests msg="VirtualMachineInstance started." 
STEP: Expecting the VirtualMachineInstance console level=info timestamp=2018-07-19T13:23:20.901563Z pos=utils.go:1249 component=tests namespace=kubevirt-test-default name=testvmijc48g kind=VirtualMachineInstance uid=72463774-8b56-11e8-8a24-525500d15501 msg="Login: [{2 \r\n\r\n\r\nISOLINUX 6.04 6.04-pre1 Copyright (C) 1994-2015 H. Peter Anvin et al\r\nboot: \u001b[?7h\r\n[ 64.481167] INFO: rcu_sched detected expedited stalls on CPUs/tasks: {\r\n[ 64.490331] blocking rcu_node structures:\r\n []}]" • [SLOW TEST:18.718 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:108 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-2Mi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ S [SKIPPING] [0.218 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 VirtualMachineInstance definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:55 with hugepages /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:108 should consume hugepages /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 hugepages-1Gi [It] /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 No node with hugepages hugepages-1Gi capacity /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:160 ------------------------------ • ------------------------------ • [SLOW TEST:74.655 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238 when CPU model defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:277 should report defined CPU model /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:278 ------------------------------ • [SLOW TEST:78.272 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 with CPU spec /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:238 when CPU model not defined /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:305 should report CPU model from libvirt capabilities /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:306 ------------------------------ • [SLOW TEST:45.209 seconds] Configurations /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:44 New VirtualMachineInstance with all supported drives /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:326 should have all the device nodes /root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:349 ------------------------------ • [SLOW TEST:15.443 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:14.533 seconds] User Access 
/root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given an vm /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:14.730 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi preset /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:14.424 seconds] User Access /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:33 With default kubevirt service accounts /root/go/src/kubevirt.io/kubevirt/tests/access_test.go:41 should verify permissions are correct for view, edit, and admin /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92 given a vmi replica set /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46 ------------------------------ • [SLOW TEST:45.365 seconds] Health Monitoring /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:37 A VirtualMachineInstance with a watchdog device /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:56 should be shut down when the watchdog expires /root/go/src/kubevirt.io/kubevirt/tests/vmi_monitoring_test.go:57 ------------------------------ ••••••••••• ------------------------------ • [SLOW TEST:133.149 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting and stopping the same VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:90 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:91 should success multiple times /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:92 ------------------------------ • [SLOW TEST:18.500 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting a VirtualMachineInstance /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:111 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:112 should not modify the spec on status update /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:113 ------------------------------ • [SLOW TEST:28.383 seconds] RegistryDisk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:41 Starting multiple VMIs /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:129 with ephemeral registry disk /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:130 should success /root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:131 ------------------------------ • [SLOW TEST:6.701 seconds] Subresource Api /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37 Rbac Authorization /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48 with correct permissions /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:51 should be allowed to access subresource endpoint 
/root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:52 ------------------------------ • [SLOW TEST:5.504 seconds] Subresource Api /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:37 Rbac Authorization /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:48 Without permissions /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:56 should not be able to access subresource endpoint /root/go/src/kubevirt.io/kubevirt/tests/subresource_api_test.go:57 ------------------------------ •• ------------------------------ • [SLOW TEST:17.464 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should successfully start with hook sidecar annotation /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:60 ------------------------------ • [SLOW TEST:18.171 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should call Collect and OnDefineDomain on the hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:67 ------------------------------ • [SLOW TEST:19.894 seconds] HookSidecars /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:40 VMI definition /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:58 with SM BIOS hook sidecar /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:59 should update domain XML with SM BIOS properties /root/go/src/kubevirt.io/kubevirt/tests/vmi_hook_sidecar_test.go:83 ------------------------------ •• ------------------------------ • [SLOW TEST:17.423 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should update VirtualMachine once VMIs are up /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:195 ------------------------------ • [SLOW TEST:8.300 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should remove VirtualMachineInstance once the VMI is marked for deletion /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:204 ------------------------------ • ------------------------------ • [SLOW TEST:37.819 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should recreate VirtualMachineInstance if it gets deleted /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:245 ------------------------------ • [SLOW TEST:77.780 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should recreate VirtualMachineInstance if the VirtualMachineInstance's pod gets deleted /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:265 ------------------------------ • [SLOW TEST:41.958 seconds] VirtualMachine /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47 A valid VirtualMachine given /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115 should stop VirtualMachineInstance if running set to false 
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:325
------------------------------
• [SLOW TEST:269.978 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should start and stop VirtualMachineInstance multiple times
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:333
------------------------------
• [SLOW TEST:109.037 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should not update the VirtualMachineInstance spec if Running
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:346
------------------------------
• [SLOW TEST:340.719 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    should survive guest shutdown, multiple times
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:387
------------------------------
VM testvmi6p2hz was scheduled to start
• [SLOW TEST:18.679 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    Using virtctl interface
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435
      should start a VirtualMachineInstance once
      /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:436
------------------------------
VM testvmikrdb4 was scheduled to stop
• [SLOW TEST:41.775 seconds]
VirtualMachine
/root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:47
  A valid VirtualMachine given
  /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:115
    Using virtctl interface
    /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:435
      should stop a VirtualMachineInstance once
      /root/go/src/kubevirt.io/kubevirt/tests/vm_test.go:467
------------------------------
••
------------------------------
• [SLOW TEST:17.377 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    should start it
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:76
------------------------------
• [SLOW TEST:19.301 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    should attach virt-launcher to it
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:82
------------------------------
••••
------------------------------
• [SLOW TEST:36.829 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with boot order
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:170
      should be able to boot from selected disk
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        Alpine as first boot
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:28.045 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with boot order
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:170
      should be able to boot from selected disk
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        Cirros as first boot
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
• [SLOW TEST:17.601 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with user-data
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:201
      without k8s secret
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:202
        should retry starting the VirtualMachineInstance
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:203
------------------------------
• [SLOW TEST:17.951 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with user-data
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:201
      without k8s secret
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:202
        should log warning and proceed once the secret is there
        /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:233
------------------------------
• [SLOW TEST:34.482 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    when virt-launcher crashes
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:281
      should be stopped and have Failed phase
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:282
------------------------------
• [SLOW TEST:28.404 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    when virt-handler crashes
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:304
      should recover and continue management
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:305
------------------------------
• [SLOW TEST:36.689 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    when virt-handler is responsive
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:335
      should indicate that a node is ready for vmis
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:336
------------------------------
• [SLOW TEST:79.128 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    when virt-handler is not responsive
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:366
      the node controller should react
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:405
------------------------------
S [SKIPPING] [0.244 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with non default namespace
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:458
      should log libvirt start and stop lifecycle events of the domain
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-default [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

Skip log query tests for JENKINS ci test environment
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:463
------------------------------
S [SKIPPING] [0.276 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    with non default namespace
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:458
      should log libvirt start and stop lifecycle events of the domain
      /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
        kubevirt-test-alternative [It]
        /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46

Skip log query tests for JENKINS ci test environment
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:463
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.206 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    VirtualMachineInstance Emulation Mode
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:519
      should enable emulation in virt-launcher [BeforeEach]
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:531

Software emulation is not enabled on this cluster
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:527
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.202 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    VirtualMachineInstance Emulation Mode
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:519
      should be reflected in domain XML [BeforeEach]
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:568

Software emulation is not enabled on this cluster
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:527
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.285 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Creating a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:70
    VirtualMachineInstance Emulation Mode
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:519
      should request a TUN device but not KVM [BeforeEach]
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:612

Software emulation is not enabled on this cluster
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:527
------------------------------
••••
------------------------------
• [SLOW TEST:18.491 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Delete a VirtualMachineInstance's Pod
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:764
    should result in the VirtualMachineInstance moving to a finalized state
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:765
------------------------------
• [SLOW TEST:36.106 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Delete a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:796
    with an active pod.
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:797
      should result in pod being terminated
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:798
------------------------------
• [SLOW TEST:21.134 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Delete a VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:796
    with grace period greater than 0
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:821
      should run graceful shutdown
      /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:822
------------------------------
• [SLOW TEST:30.696 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Killed VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:873
    should be in Failed phase
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:874
------------------------------
• [SLOW TEST:26.843 seconds]
VMIlifecycle
/root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:48
  Killed VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:873
    should be left alone by virt-handler
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_lifecycle_test.go:901
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
Templates
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template
    /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      should succeed to generate a VM JSON file using oc-process command
      /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:150

Skip test that requires oc binary
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1383
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
Templates
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template
    /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      with given VM JSON from the Template
      /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152
        should succeed to create a VM using oc-create command
        /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:156

Skip test that requires oc binary
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1383
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
Templates
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template
    /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      with given VM JSON from the Template
      /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152
        with given VM from the VM JSON
        /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158
          should succeed to launch a VMI using oc-patch command
          /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:161

Skip test that requires oc binary
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1383
------------------------------
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
Templates
/root/go/src/kubevirt.io/kubevirt/tests/template_test.go:42
  Launching VMI from VM Template [BeforeEach]
  /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:60
    with given Fedora Template
    /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:193
      with given VM JSON from the Template
      /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:152
        with given VM from the VM JSON
        /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:158
          with given VMI from the VM
          /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:163
            should succeed to terminate the VMI using oc-patch command
            /root/go/src/kubevirt.io/kubevirt/tests/template_test.go:166

Skip test that requires oc binary
/root/go/src/kubevirt.io/kubevirt/tests/utils.go:1383
------------------------------
• [SLOW TEST:125.735 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  should be able to reach
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    the Inbound VirtualMachineInstance
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
••••
------------------------------
• [SLOW TEST:5.263 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  should be reachable via the propagated IP from a Pod
  /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table.go:92
    on a different node from Pod
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/extensions/table/table_entry.go:46
------------------------------
•••••
------------------------------
• [SLOW TEST:33.168 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom interface model
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:379
    should expose the right device type to the guest
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:380
------------------------------
•
------------------------------
• [SLOW TEST:35.828 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom MAC address
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:413
    should configure custom MAC address
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:414
------------------------------
• [SLOW TEST:35.120 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom MAC address in non-conventional format
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:425
    should configure custom MAC address
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:426
------------------------------
•!
Panic [1126.411 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with custom MAC address and slirp interface
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:438
    should configure custom MAC address [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:439

Test Panicked
Get https://127.0.0.1:33266/api/v1/namespaces/kube-system/pods?labelSelector=kubevirt.io: unexpected EOF
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502

Full Stack Trace
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502 +0x229
kubevirt.io/kubevirt/tests.PanicOnError(0x14196a0, 0xc4205c6000)
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:757 +0x4a
kubevirt.io/kubevirt/tests.KubevirtFailHandler(0xc420372080, 0x78, 0xc42039ed80, 0x1, 0x1)
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1579 +0x505
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc4208d01c0, 0x1426420, 0x1c79e10, 0x0, 0x0, 0x0, 0x0, 0x1c79e10)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:69 +0x1ed
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).ToNot(0xc4208d01c0, 0x1426420, 0x1c79e10, 0x0, 0x0, 0x0, 0xc4208d01c0)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:39 +0xae
kubevirt.io/kubevirt/tests_test.glob..func18.3(0xc42032fb80, 0x13959d8, 0x1c79e10)
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:94 +0x228
kubevirt.io/kubevirt/tests_test.glob..func18.16.1()
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:446 +0x269
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc4208917a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:43 +0xaa
testing.tRunner(0xc4203511d0, 0x1395a08)
    /gimme/.gimme/versions/go1.10.3.linux.amd64/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
    /gimme/.gimme/versions/go1.10.3.linux.amd64/src/testing/testing.go:824 +0x2e0
------------------------------
STEP: checking eth0 MAC address
level=info timestamp=2018-07-19T14:01:15.107903Z pos=utils.go:243 component=tests msg="Created virtual machine pod virt-launcher-testvmizq4v2-ghx2m"
level=info timestamp=2018-07-19T14:01:30.042075Z pos=utils.go:243 component=tests msg="Pod owner ship transferred to the node virt-launcher-testvmizq4v2-ghx2m"
level=info timestamp=2018-07-19T14:01:31.440541Z pos=utils.go:243 component=tests msg="VirtualMachineInstance defined."
level=info timestamp=2018-07-19T14:01:31.490911Z pos=utils.go:243 component=tests msg="VirtualMachineInstance started."
level=info timestamp=2018-07-19T14:04:31.653695Z pos=utils.go:1249 component=tests namespace=kubevirt-test-default name=testvmizq4v2 kind=VirtualMachineInstance uid=32ca5fc5-8b5c-11e8-8a24-525500d15501 msg="Login: [{2 \r\n\r\n\r\nISOLINUX 6.04 6.04-pre1 Copyright (C) 1994-2015 H. Peter Anvin et al\r\nboot: \u001b[?7h\r\n []}]"
•!
Panic [41.122 seconds]
Networking
/root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:48
  VirtualMachineInstance with disabled automatic attachment of interfaces
  /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:451
    should not configure any external interfaces [It]
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:452

Test Panicked
an error on the server ("") has prevented the request from succeeding (get pods)
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502

Full Stack Trace
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502 +0x229
kubevirt.io/kubevirt/tests.PanicOnError(0x1418c60, 0xc42031a5a0)
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:757 +0x4a
kubevirt.io/kubevirt/tests.KubevirtFailHandler(0xc420b2e000, 0x294, 0xc42039f810, 0x1, 0x1)
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1579 +0x505
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc42027b880, 0x1426420, 0x1c79e10, 0x0, 0x0, 0x0, 0x0, 0x1c79e10)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:69 +0x1ed
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).ToNot(0xc42027b880, 0x1426420, 0x1c79e10, 0x0, 0x0, 0x0, 0xc42027b880)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:39 +0xae
kubevirt.io/kubevirt/tests_test.glob..func18.17.1()
    /root/go/src/kubevirt.io/kubevirt/tests/vmi_networking_test.go:459 +0x218
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc420891980, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:43 +0xaa
testing.tRunner(0xc4203511d0, 0x1395a08)
    /gimme/.gimme/versions/go1.10.3.linux.amd64/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
    /gimme/.gimme/versions/go1.10.3.linux.amd64/src/testing/testing.go:824 +0x2e0
------------------------------
STEP: checking loopback is the only guest interface
•!
Panic [101.238 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
      with a cirros image
      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:66
        should return that we are running cirros [It]
        /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:67

Test Panicked
an error on the server ("") has prevented the request from succeeding (get pods)
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502

Full Stack Trace
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502 +0x229
kubevirt.io/kubevirt/tests.PanicOnError(0x1418c60, 0xc420124900)
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:757 +0x4a
kubevirt.io/kubevirt/tests.KubevirtFailHandler(0xc420b2e2c0, 0x293, 0xc420461770, 0x1, 0x1)
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1579 +0x505
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc4209a4340, 0x1426520, 0x1c79e10, 0x1, 0x0, 0x0, 0x0, 0x1c79e10)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:69 +0x1ed
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).To(0xc4209a4340, 0x1426520, 0x1c79e10, 0x0, 0x0, 0x0, 0xc4209a4340)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:35 +0xae
kubevirt.io/kubevirt/tests_test.glob..func2.2(0xc420129b80, 0x1316cb5, 0x16)
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:49 +0x1eb
kubevirt.io/kubevirt/tests_test.glob..func2.3.1.1.1()
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:69 +0x98
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc420027c20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:43 +0xaa
testing.tRunner(0xc4203511d0, 0x1395a08)
    /gimme/.gimme/versions/go1.10.3.linux.amd64/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
    /gimme/.gimme/versions/go1.10.3.linux.amd64/src/testing/testing.go:824 +0x2e0
------------------------------
STEP: Creating a new VirtualMachineInstance
•!
Panic [98.194 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
      with a fedora image
      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:76
        should return that we are running fedora [It]
        /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:77

Test Panicked
an error on the server ("") has prevented the request from succeeding (get pods)
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502

Full Stack Trace
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502 +0x229
kubevirt.io/kubevirt/tests.PanicOnError(0x1418c60, 0xc4201255f0)
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:757 +0x4a
kubevirt.io/kubevirt/tests.KubevirtFailHandler(0xc420b2e000, 0x293, 0xc420b5c330, 0x1, 0x1)
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1579 +0x505
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc4208d1640, 0x1426520, 0x1c79e10, 0x1, 0x0, 0x0, 0x0, 0x1c79e10)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:69 +0x1ed
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).To(0xc4208d1640, 0x1426520, 0x1c79e10, 0x0, 0x0, 0x0, 0xc4208d1640)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:35 +0xae
kubevirt.io/kubevirt/tests_test.glob..func2.2(0xc42032e280, 0x1308b6d, 0xa)
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:49 +0x1eb
kubevirt.io/kubevirt/tests_test.glob..func2.3.1.2.1()
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:79 +0x7f
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc420027e00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:43 +0xaa
testing.tRunner(0xc4203511d0, 0x1395a08)
    /gimme/.gimme/versions/go1.10.3.linux.amd64/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
    /gimme/.gimme/versions/go1.10.3.linux.amd64/src/testing/testing.go:824 +0x2e0
------------------------------
STEP: Creating a new VirtualMachineInstance
•!
Panic [98.202 seconds]
Console
/root/go/src/kubevirt.io/kubevirt/tests/console_test.go:35
  A new VirtualMachineInstance
  /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:64
    with a serial console
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:65
      should be able to reconnect to console multiple times [It]
      /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:86

Test Panicked
an error on the server ("") has prevented the request from succeeding (get pods)
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502

Full Stack Trace
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502 +0x229
kubevirt.io/kubevirt/tests.PanicOnError(0x1418c60, 0xc420124b40)
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:757 +0x4a
kubevirt.io/kubevirt/tests.KubevirtFailHandler(0xc4205f8000, 0x293, 0xc4204c1e00, 0x1, 0x1)
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:1579 +0x505
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc4206990c0, 0x1426520, 0x1c79e10, 0x1, 0x0, 0x0, 0x0, 0x1c79e10)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:69 +0x1ed
kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).To(0xc4206990c0, 0x1426520, 0x1c79e10, 0x0, 0x0, 0x0, 0xc4206990c0)
    /root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:35 +0xae
kubevirt.io/kubevirt/tests_test.glob..func2.3.1.3()
    /root/go/src/kubevirt.io/kubevirt/tests/console_test.go:90 +0x223
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc420027f20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:43 +0xaa
testing.tRunner(0xc4203511d0, 0x1395a08)
    /gimme/.gimme/versions/go1.10.3.linux.amd64/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
    /gimme/.gimme/versions/go1.10.3.linux.amd64/src/testing/testing.go:824 +0x2e0
------------------------------
STEP: Creating a new VirtualMachineInstance
Panic [3.005 seconds]
[AfterSuite] AfterSuite
/root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:50

Test Panicked
Post https://127.0.0.1:33266/api/v1/namespaces: read tcp 127.0.0.1:54168->127.0.0.1:33266: read: connection reset by peer
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502

Full Stack Trace
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502 +0x229
kubevirt.io/kubevirt/tests.PanicOnError(0x14196a0, 0xc420c565d0)
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:757 +0x4a
kubevirt.io/kubevirt/tests.createNamespaces()
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:750 +0x174
kubevirt.io/kubevirt/tests.AfterTestSuitCleanup()
    /root/go/src/kubevirt.io/kubevirt/tests/utils.go:310 +0x22
kubevirt.io/kubevirt/tests_test.glob..func11()
    /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:51 +0x20
kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc4202ae540, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/go/src/kubevirt.io/kubevirt/tests/tests_suite_test.go:43 +0xaa
testing.tRunner(0xc4203511d0, 0x1395a08)
    /gimme/.gimme/versions/go1.10.3.linux.amd64/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
    /gimme/.gimme/versions/go1.10.3.linux.amd64/src/testing/testing.go:824 +0x2e0
------------------------------

Summarizing 6 Failures:

[Fail] Configurations VirtualMachineInstance definition with 3 CPU cores [It] should report 3 cpu cores under guest OS
/root/go/src/kubevirt.io/kubevirt/tests/vmi_configuration_test.go:79

[Panic!] Networking VirtualMachineInstance with custom MAC address and slirp interface [It] should configure custom MAC address
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502

[Panic!] Networking VirtualMachineInstance with disabled automatic attachment of interfaces [It] should not configure any external interfaces
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502

[Panic!] Console A new VirtualMachineInstance with a serial console with a cirros image [It] should return that we are running cirros
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502

[Panic!] Console A new VirtualMachineInstance with a serial console with a fedora image [It] should return that we are running fedora
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502

[Panic!] Console A new VirtualMachineInstance with a serial console [It] should be able to reconnect to console multiple times
/gimme/.gimme/versions/go1.10.3.linux.amd64/src/runtime/panic.go:502

Ran 125 of 141 Specs in 5319.111 seconds
FAIL! -- 119 Passed | 6 Failed | 0 Pending | 16 Skipped
--- FAIL: TestTests (5319.12s)
FAIL
make: *** [functest] Error 1
+ make cluster-down
./cluster/down.sh
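Note on the panics above: the five panicking specs all fail inside the suite's Gomega fail handler (KubevirtFailHandler) while it tries to list pods labelled kubevirt.io from the apiserver at https://127.0.0.1:33266, and the AfterSuite panic hits the same dead connection when recreating namespaces; the "unexpected EOF" and "connection reset by peer" errors point at the API connection dropping rather than at the individual tests. Purely as an illustration of that failing request, a minimal client-go sketch of an equivalent label-filtered pod list follows; the kubeconfig path and the wiring are assumptions for the example, not the suite's actual helper code, and it uses the context-free List signature of the client-go generation vendored at the time.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption for this sketch: an admin kubeconfig is reachable at this path,
	// as suggested by the cluster-up output earlier in the log.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Equivalent of the request that panicked in the log:
	// GET /api/v1/namespaces/kube-system/pods?labelSelector=kubevirt.io
	pods, err := client.CoreV1().Pods("kube-system").List(metav1.ListOptions{LabelSelector: "kubevirt.io"})
	if err != nil {
		// When the apiserver connection is gone, errors such as "unexpected EOF"
		// or "connection reset by peer" surface here.
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name)
	}
}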